Implemented TICON encoder with contextualized and isolated inference modes for all currently supported extractors #144
base: main
Conversation
Hello,
…iles; now it is fully integrated, with only minor changes to the original STAMP pipeline. **kwargs was added in encoding/encoder/__init__.py to _generate_slide_embedding() and _save_features_() to enable saving additional information, e.g. tile_size_px, tile_size_um, and coords, which is necessary for using the contextualized slides in further processing in STAMP.
I would clarify this; correct me if I am wrong: the authors didn't say anything explicit about their public checkpoint, but it is implicitly tile-level. If you run their inference example, the output shape is (B, N, D), not (B, D). Therefore we still use these features for bag-instance training. The current checkpoint is not the Tangle version (a slide-level encoder), to be honest.
Absolutely, both modes are tile-level. From my understanding, the difference is that in isolated mode each tile is processed alone, whereas in the other mode, "with slide context", all tiles are processed together; in the end you still get tile-level features ("contextualized features", not slide-level ones).
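The distinction above can be illustrated with a toy example. This is not the TICON architecture, just a minimal sketch showing why both modes yield one feature vector per tile: "isolated" applies a per-tile transform, while "contextualized" mixes information across tiles (here via a softmax-weighted similarity matrix, loosely mimicking attention) before the per-tile transform.

```python
import numpy as np

# Toy illustration (NOT the TICON model): both modes map (N, D) tile
# features to (N, D) tile features; only the context differs.
def isolated(tiles: np.ndarray) -> np.ndarray:
    # Each tile is transformed independently of all others.
    return np.tanh(tiles)

def contextualized(tiles: np.ndarray) -> np.ndarray:
    # Each tile aggregates information from every tile on the slide
    # via row-wise softmax over pairwise similarities.
    sim = tiles @ tiles.T                                  # (N, N)
    attn = np.exp(sim - sim.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return np.tanh(attn @ tiles)                           # still (N, D)

rng = np.random.default_rng(0)
tiles = rng.normal(size=(5, 4))
assert isolated(tiles).shape == contextualized(tiles).shape == (5, 4)
```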
Please add a mode where you aggregate the tile-level features of TICON (with context), so that it acts like a slide-level encoder. I think this should work fairly well. We should compare that to the normal STAMP modeling on TICON's tile-level output (Sophia Transformer) and the usual STAMP pipeline.
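The requested aggregation mode could look like the following minimal sketch: mean-pool TICON's contextualized tile-level features (N, D) into a single slide-level vector (D,). The function name and the choice of mean pooling are assumptions for illustration; other pooling schemes would work too.

```python
import numpy as np

# Hypothetical sketch of the proposed slide-level mode: pool the
# contextualized tile features into one vector per slide. Mean pooling
# is an assumption; it is not specified in the discussion.
def aggregate_slide_embedding(tile_feats: np.ndarray) -> np.ndarray:
    # tile_feats: (N, D) contextualized tile features -> (D,) slide vector
    return tile_feats.mean(axis=0)

feats = np.ones((100, 768))  # e.g. 100 tiles, 768-dim features
slide_vec = aggregate_slide_embedding(feats)
assert slide_vec.shape == (768,)
```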
I would give some comments here:
Force-pushed from 2846072 to b1b79ff
This adds the TICON encoder for slide-level contextualization of tile embeddings, following the TICON paper (Belagali et al., https://arxiv.org/abs/2512.21331).
--> "with slide context": all tile embeddings are processed together, allowing each tile to receive context from the full slide-level neighborhood (note: the extractor is automatically detected from the .h5 metadata, no config input necessary)
--> "without slide context" / isolated: tiles are processed independently (note: the extractor must be specified in the config input)
--> in each case, all currently supported extractors are available (UNI2, H-Optimus-1, GigaPath, CONCH1_5, Virchow2)