Use Cases

In deep learning for electron microscopy (EM), we classify use cases into three primary tasks based on model inputs and outputs:

  1. Image to Value(s)
  2. Image to Image
  3. 2D to 3D

For each task, we provide a specific use case demonstration. Our workflow makes it easy to adapt these use cases to your needs in a plug-and-play fashion: simply swap the annotated data to address a different problem. For example, a semantic segmentation model trained to segment cell nuclei can be adapted to segment mitochondria with sufficient training data.

Categories of deep learning tasks in the context of EM.
Tasks in the area of EM data analysis can be categorized by the requirements of the DL method into Image to Value(s), Image to Image, and 2D to 3D. For each category, we introduce one exemplary notebook tackling EM-specific challenges.
Highlight: Each use case has a primary focus and an exemplary application. By exchanging the underlying data (as described within each use case), the application can be changed. Please note that we provide pretrained model weights and data only for the exemplary application, for testing purposes. We do not provide ready-trained solutions, but rather a tool for adapting deep learning solutions and training models based on the specific needs of EM labs.

Image to Value(s)

Primary Focus: Explainable Object Counting in Microscopy Images

Application: Explainable Virus Capsid Quantification
Challenge: Deep Learning as Black Box
Required Labels: Location Labels

TL;DR 🧬✨ We developed a regression model to quantify the maturation states ("naked", "budding", "enveloped") of human cytomegalovirus (HCMV) during its final envelopment step, the so-called secondary envelopment. Researchers can adapt the provided notebook for their own EM data analysis. Click the "Open in Studio" button to get started. 🚀

Open In Studio
Teaser explainable virus quantification
For the explainable virus quantification, we train a regression model to predict the number of "naked", "budding", and "enveloped" virus capsids in the input image. We use Grad-CAM as an explainable AI technique to make the model more trustworthy and its predictions easier to comprehend.
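The core of Grad-CAM is small enough to sketch directly: weight each feature map of a convolutional layer by its global-average-pooled gradient, sum, and keep only the positive evidence. The NumPy sketch below uses synthetic activations and gradients purely for illustration; the shapes and values are made up and do not come from the notebook.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's feature maps and the gradients
    of the predicted capsid count with respect to those maps.

    activations: (C, H, W) feature maps
    gradients:   (C, H, W) d(predicted count)/d(activations)
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1]
    return cam

# Synthetic example: channel 0 fires at a "capsid" location and carries a
# positive gradient; channel 1 is irrelevant (zero gradient).
acts = np.zeros((2, 4, 4))
acts[0, 1, 2] = 5.0
grads = np.zeros((2, 4, 4))
grads[0] = 1.0

heatmap = grad_cam(acts, grads)
print(heatmap[1, 2])  # -> 1.0: the peak sits where the model "looked"
```

Upsampled to the input resolution and overlaid on the micrograph, such a heatmap shows which regions drove the predicted count.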

If you have any questions or encounter any issues while working with this use case, please feel free to contact Hannah Kniesel or Tristan Payer.


Image to Image

Primary Focus: Semantic Segmentation

Application: Segmentation of Cellular Structures
Challenge: Robustness with small dataset sizes
Required Labels: Semantic Segmentation Masks

TL;DR 🧬✨ We chose the segmentation of cell organelles as a representative Image to Image task. Segmentation is an important tool in EM image analysis, as it contributes to a better visualisation of organelles and complex cell structures and thereby facilitates the interpretation of EM data. It also allows for detailed analysis of organelle morphology, spatial relationships, and distribution within cells, which is crucial for understanding intracellular organisation and its relationship to cell function. Because the available datasets are small, we apply data augmentation, make use of pretrained weights, and train an ensemble model, which has been shown to generalise better even when trained on smaller datasets [1]. Click the "Open in Studio" button to get started. 🚀
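One detail of the data augmentation worth spelling out: for segmentation, the image and its mask must be transformed jointly, so that labels stay aligned with pixels. A minimal NumPy sketch using only flips and 90° rotations (the arrays are synthetic stand-ins for real EM data, and real pipelines typically add elastic deformations, noise, etc.):

```python
import numpy as np

def augment(image, mask, rng):
    """Random flips and 90-degree rotations, applied identically to an EM
    image and its segmentation mask so the labels stay aligned."""
    k = rng.integers(0, 4)                      # number of quarter turns
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                      # horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:                      # vertical flip
        image, mask = np.flipud(image), np.flipud(mask)
    return image.copy(), mask.copy()

rng = np.random.default_rng(0)
image = np.arange(16.0).reshape(4, 4)           # toy "micrograph"
mask = (image > 7).astype(int)                  # toy binary mask
aug_image, aug_mask = augment(image, mask, rng)
```

Because every transform is applied to both arrays with the same parameters, each augmented image still matches its augmented mask pixel for pixel.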

Open In Studio
Depiction of ensemble model
For the segmentation of cellular structures we follow [1] and train a so-called "ensemble" model. An ensemble is a set of models that each make a prediction for the same input; these predictions are then combined into a single, more robust prediction.
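The combination step can be sketched in a few lines of NumPy. This is a toy illustration of a weighted average over per-model softmax outputs in the spirit of [1]; the shapes, probabilities, and weights are invented for the example.

```python
import numpy as np

def ensemble_segment(prob_maps, weights=None):
    """Fuse per-model class-probability maps into one segmentation.

    prob_maps: (M, C, H, W) softmax outputs of M models over C classes
    weights:   optional per-model weights (e.g. validation scores);
               defaults to a uniform average
    """
    prob_maps = np.asarray(prob_maps, dtype=float)
    m = prob_maps.shape[0]
    weights = np.ones(m) / m if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()                  # normalise weights
    fused = np.tensordot(weights, prob_maps, axes=1)   # (C, H, W)
    return fused.argmax(axis=0)                        # class label per pixel

# Three toy "models" on a 2x2 image with 2 classes; models 0 and 1 vote
# "organelle" at pixel (0, 0), outvoting model 2's "background".
p = np.zeros((3, 2, 2, 2))
p[:, 0], p[:, 1] = 0.6, 0.4
p[0, 1, 0, 0], p[0, 0, 0, 0] = 0.9, 0.1
p[1, 1, 0, 0], p[1, 0, 0, 0] = 0.8, 0.2
mask = ensemble_segment(p)
```

Averaging probabilities rather than hard labels lets confident models outweigh uncertain ones, which is where much of the robustness gain comes from.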

[1] Shaga Devan, Kavitha, et al. "Weighted average ensemble-based semantic segmentation in biological electron microscopy images." Histochemistry and Cell Biology 158.5 (2022): 447-462.

If you have any questions or encounter any issues while working with this use case, please feel free to contact Poonam.


2D to 3D

Primary Focus: Tomographic Reconstruction

Application: Tomographic Reconstruction of STEM tilt series
Challenge: Evaluation with missing ground truth
Required Labels: None

TL;DR 🧬✨ We use deep learning for tomographic reconstruction of 2D STEM projections, following [1,2]. This approach enables 3D volume reconstruction, revealing detailed cellular structures and relationships not visible in 2D 🚀.

Open In Studio
Depiction of the tomographic reconstruction: tilt series, model, and reconstruction of nanoparticles.
Based on a given tilt series, we are able to generate a 3D reconstruction using self-supervised deep learning.
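The self-supervised principle can be illustrated without any neural network: optimise a volume so that its simulated projections match the measured tilt series, which is the only supervision signal available. Below is a toy NumPy sketch on a 2D slice, with two axis-aligned projections standing in for the many tilt angles of a real STEM series; in [1] the volume is instead parameterised by an implicit neural representation, but the loss has the same structure.

```python
import numpy as np

# Toy 2D "volume"; the real use case reconstructs 3D from a STEM tilt series.
n = 8
x_true = np.zeros((n, n))
x_true[2:6, 3:5] = 1.0  # a bright block standing in for a nanoparticle

def project(vol):
    """Two axis-aligned projections (0 and 90 degrees) standing in for a
    tilt series; a real series samples many angles."""
    return np.concatenate([vol.sum(axis=0), vol.sum(axis=1)])

y = project(x_true)  # the measured "tilt series" -- our only supervision

# Self-supervised reconstruction: gradient descent on 0.5*||project(x)-y||^2.
x = np.zeros((n, n))
lr = 0.01
for _ in range(500):
    r = project(x) - y  # residual in projection space
    # Gradient w.r.t. x = backprojection of the residual along each ray.
    grad = np.tile(r[:n], (n, 1)) + np.tile(r[n:][:, None], (1, n))
    x -= lr * grad
```

With only two projection directions the volume is not uniquely determined; a real tilt series provides many angles, and the implicit network of [1] adds further regularisation, but the optimisation target (agreement with the measured projections, with no 3D ground truth) is the same.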

[1] Kniesel, Hannah, et al. "Clean implicit 3d structure from noisy 2d stem images." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.

[2] Mildenhall, Ben, et al. "Nerf: Representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021): 99-106.

If you have any questions or encounter any issues while working with this use case, please feel free to contact Hannah Kniesel.


BibTeX

BibTex Code Here