In deep learning for electron microscopy (EM), we classify use cases into three primary tasks based on model inputs and outputs: predicting values from images (e.g., regression), translating images into images (e.g., semantic segmentation), and reconstructing 3D volumes from 2D projections.
For each task, we provide a specific use case demonstration. Our workflow makes it easy to adapt these use cases to your needs in a plug-and-play fashion: simply swap the annotated data to address a different problem. For example, a semantic segmentation model trained to segment cell nuclei can be adapted to segment mitochondria with sufficient training data.
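As a minimal illustration of this plug-and-play idea, the sketch below keeps the training loop fixed and only swaps the annotated dataset. It assumes PyTorch/torchvision; the `NucleiDataset`/`MitochondriaDataset` classes, paths and hyperparameters are hypothetical placeholders rather than parts of the provided notebooks.

```python
import torch
from torch.utils.data import DataLoader
from torchvision.models.segmentation import deeplabv3_resnet50

def train(dataset, epochs=10):
    """Generic binary-segmentation training loop; only the dataset changes per task."""
    model = deeplabv3_resnet50(weights=None, num_classes=1)
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in loader:           # images: (B, 3, H, W), masks: (B, 1, H, W)
            logits = model(images)["out"]      # torchvision segmentation models return a dict
            loss = criterion(logits, masks)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

# Same pipeline, different annotated data (hypothetical dataset classes and paths):
# nuclei_model = train(NucleiDataset("path/to/nuclei_annotations"))
# mito_model   = train(MitochondriaDataset("path/to/mitochondria_annotations"))
```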
TL;DR 🧬✨ We developed a regression model to quantify the maturation states ("naked", "budding", "enveloped") of human cytomegalovirus (HCMV) during its final envelopment step, i.e., secondary envelopment. Researchers can adapt the provided notebook for their own EM data analysis. Click the "Open in Studio" button to get started. 🚀
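A minimal sketch of such a regression setup is shown below, assuming PyTorch/torchvision, an ImageNet-pretrained backbone and a three-value output (one quantity per maturation state). The actual notebook may use a different architecture, loss and target definition; `train_loader` is a placeholder for your own EM data loader.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-18 with its classifier replaced by a 3-value regression head.
# EM images are typically single-channel; replicate to 3 channels (or adapt the first conv).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # one output per maturation state

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.MSELoss()

def train_one_epoch(train_loader):
    """`train_loader` is assumed to yield (EM image batch, 3-value target batch)."""
    backbone.train()
    for images, targets in train_loader:
        preds = backbone(images)          # shape: (batch, 3)
        loss = criterion(preds, targets)  # regress the per-state quantities
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```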
If you have any questions or encounter any issues while working with this use case, please feel free to contact Hannah Kniesel or Tristan Payer.
TL;DR 🧬✨ We chose the segmentation of cell organelles as a relevant Image to Image task. Segmentation is an important tool in EM image analysis: it improves the visualisation of organelles and complex cell structures and thereby facilitates the interpretation of EM data. It also allows for detailed analysis of organelle morphology, spatial relationships and distribution within cells, which is crucial for understanding intracellular organisation and its relationship to cell function. Because only small annotated datasets are available, we apply data augmentation, make use of pretrained weights and train an ensemble model, which has been shown to provide better generalizability even when trained on smaller datasets [1]. Click the "Open in Studio" button to get started. 🚀
[1] Shaga Devan, Kavitha, et al. "Weighted average ensemble-based semantic segmentation in biological electron microscopy images." Histochemistry and Cell Biology 158.5 (2022): 447-462.
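A minimal sketch of these three ingredients, assuming PyTorch/torchvision, is shown below: data augmentation, ImageNet-pretrained backbones and a weighted-average ensemble in the spirit of [1]. The member count, ensemble weights and architecture are illustrative choices, not the exact configuration of the notebook.

```python
import torch
from torchvision import transforms
from torchvision.models import ResNet50_Weights
from torchvision.models.segmentation import deeplabv3_resnet50

# (i) Augmentations to compensate for the small dataset size. For segmentation, the
# same geometric transform must be applied jointly to the image and its mask.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=15),
])

# (ii) Segmentation models with ImageNet-pretrained ResNet-50 backbones.
def make_model():
    return deeplabv3_resnet50(weights=None,
                              weights_backbone=ResNet50_Weights.IMAGENET1K_V1,
                              num_classes=1)

members = [make_model() for _ in range(3)]          # each member is trained separately
member_weights = torch.tensor([0.4, 0.35, 0.25])    # e.g. weighted by validation performance

# (iii) Weighted-average ensemble prediction for a batch of EM images.
@torch.no_grad()
def ensemble_predict(images):
    for m in members:
        m.eval()
    probs = torch.stack([torch.sigmoid(m(images)["out"]) for m in members])
    weighted = (member_weights.view(-1, 1, 1, 1, 1) * probs).sum(dim=0)
    return (weighted > 0.5).float()                 # binary organelle mask
```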
If you have any questions or encounter any issues while working with this use case, please feel free to contact Poonam.
TL;DR 🧬✨ We use deep learning for tomographic reconstruction from 2D STEM projections, following [1, 2]. This approach enables 3D volume reconstruction, revealing detailed cellular structures and spatial relationships that are not visible in 2D. 🚀
[1] Kniesel, Hannah, et al. "Clean implicit 3D structure from noisy 2D STEM images." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2] Mildenhall, Ben, et al. "NeRF: Representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021): 99-106.
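To make the approach more concrete, the sketch below shows an implicit, NeRF-style reconstruction in the spirit of [1, 2]: an MLP maps positionally encoded 3D coordinates to a density, and the volume is fitted by comparing simulated parallel-beam projections against the measured 2D STEM tilt series. The geometry, noise model and training details of [1] are simplified away, and names such as `tilt_angles` and `measured_projections` are placeholders for your own data.

```python
import math
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Sin/cos features of increasing frequency, as used by NeRF [2]."""
    feats = [x]
    for i in range(n_freqs):
        feats += [torch.sin(2.0 ** i * math.pi * x), torch.cos(2.0 ** i * math.pi * x)]
    return torch.cat(feats, dim=-1)

class ImplicitVolume(nn.Module):
    """MLP that maps an encoded 3D coordinate to a non-negative density."""
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * (1 + 2 * n_freqs)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # densities must be non-negative
        )
        self.n_freqs = n_freqs

    def forward(self, xyz):
        return self.net(positional_encoding(xyz, self.n_freqs)).squeeze(-1)

def render_projection(model, angle, size=64):
    """Simulate one parallel-beam projection at a given tilt angle (radians)."""
    u = torch.linspace(-1.0, 1.0, size)
    coords = torch.stack(torch.meshgrid(u, u, u, indexing="ij"), dim=-1)  # (size, size, size, 3)
    # Rotate the sampling coordinates about the y-axis by the tilt angle.
    c, s = math.cos(angle), math.sin(angle)
    rot = torch.tensor([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    density = model(coords.reshape(-1, 3) @ rot.T).reshape(size, size, size)
    # Integrate densities along the beam direction to form the 2D projection.
    return density.mean(dim=-1)

# Training-loop sketch: fit the implicit volume to a measured tilt series.
model = ImplicitVolume()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
tilt_angles = torch.deg2rad(torch.arange(-60.0, 61.0, 3.0))    # placeholder tilt scheme
measured_projections = torch.rand(len(tilt_angles), 64, 64)    # replace with real STEM data

for step in range(1000):
    i = torch.randint(len(tilt_angles), (1,)).item()
    pred = render_projection(model, tilt_angles[i].item())
    loss = nn.functional.mse_loss(pred, measured_projections[i])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```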
If you have any questions or encounter any issues while working with this use case, please feel free to contact Hannah Kniesel.
BibTeX Code Here