Workflow

Our workflow standardizes the process of implementing deep learning (DL) use cases for electron microscopy (EM). It streamlines training, testing, and inference through a PyTorch-based playground with a Jupyter-notebook-based interface for easy use by EM experts, while DL experts can easily contribute their own use cases using our template. This approach enables electron microscopists to work with a single, user-friendly implementation and become more familiar with deep learning, while simplifying the development process for DL specialists.
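For illustration, a use-case template could look roughly like the sketch below. The class and method names (`UseCase`, `build_model`, `load_data`, `evaluate`) are assumptions made for this example and do not reflect the playground's actual interface.

```python
# Hypothetical sketch of a use-case template; names are illustrative,
# not the actual playground interface.
from abc import ABC, abstractmethod

from torch import nn
from torch.utils.data import Dataset


class UseCase(ABC):
    """A DL expert implements these hooks; EM experts only touch the data."""

    @abstractmethod
    def build_model(self) -> nn.Module:
        """Return the PyTorch model for this use case."""

    @abstractmethod
    def load_data(self, path: str) -> Dataset:
        """Load an EM dataset from a file or folder."""

    @abstractmethod
    def evaluate(self, model: nn.Module, data: Dataset) -> dict:
        """Compute task-specific metrics and return them as a dict."""
```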

[Figure: Deep learning workflow in the context of EM.]
Figure 1: We propose a simple workflow for developing deep learning solutions for the supported analysis of EM data. The workflow is designed in such a way that it allows DL experts to implement and provide DL solutions and evaluation methods with minimal overhead, while EM experts are able to train their own models.

Image processing icons created by BomSymbols - Flaticon, Ai brain icons created by Eklip Studio - Flaticon, Evaluation icons created by justicon - Flaticon, Inference icons created by Freepik - Flaticon

In the following, we will introduce the steps of the workflow to EM specialists as well as DL specialists. Please open the corresponding tab when reading.

Development

In deep learning for EM, development refers to the process of creating and optimizing models to address specific challenges within EM. It is structured around three key steps:

  1. Data: Preparing high-quality, well-annotated datasets tailored to your lab’s needs.
  2. Model Training: Training models by optimizing architectures and parameters.
  3. Model Evaluation: Evaluating the model’s performance using task-specific metrics.

These steps ensure deep learning models are effectively adapted for EM tasks, providing solutions specific to your lab's requirements.

1. Data

Data preparation is a critical aspect of the deep learning pipeline. Recognizing that expertise in data collection and annotation primarily resides within EM labs, our workflow is designed to provide guidance for EM researchers to develop their own datasets in collaboration with DL experts.

While each use case in our workflow focuses on a primary task (e.g., counting objects in EM images), the workflow is flexible enough to allow you to swap the application area (e.g., quantifying mitochondria in EM images) without needing to modify the code—only the data needs to be replaced.
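As a rough illustration of this idea, a dataset wrapper might look like the sketch below, where swapping the application area only means pointing the loader at a different folder. The folder layout, file formats, and class name are assumptions for this example.

```python
# Illustrative sketch: images and numeric labels in a folder, e.g.
#   data/counting/img_001.tif  +  data/counting/img_001.txt  (label = object count)
# Swapping the application area = swapping this folder; the code stays unchanged.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class EMCountingDataset(Dataset):
    def __init__(self, root: str):
        self.images = sorted(Path(root).glob("*.tif"))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img_path = self.images[idx]
        image = np.array(Image.open(img_path), dtype=np.float32) / 255.0
        label = float(img_path.with_suffix(".txt").read_text())
        return torch.from_numpy(image).unsqueeze(0), torch.tensor(label)
```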

2. Model Training

During training, the model learns patterns from the data by adjusting its internal parameters (weights) based on the input-output relationships. This process is guided by a loss function, which measures the error between predicted and true values (labels/annotations). The model is iteratively updated to minimize this error. Validation, on the other hand, involves evaluating the model’s performance on a separate set of data (the validation set) that it hasn't seen during training. This helps to check how well the model generalizes to new, unseen data and aids in detecting issues such as overfitting. The training and validation processes together ensure that the model is well-suited for the task at hand and can deliver reliable results in real-world applications.
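A minimal sketch of such a training-and-validation loop is shown below. It is a generic PyTorch example rather than the playground's actual implementation; the optimizer, loss function, and hyperparameters are assumptions.

```python
import torch
from torch import nn


def train_and_validate(model, train_loader, val_loader, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # task-specific; e.g. a regression loss for counting

    for epoch in range(epochs):
        # Training: adjust the weights to minimize the loss on the training set.
        model.train()
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()

        # Validation: measure the error on data the model has not seen,
        # to monitor generalization and spot overfitting.
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for inputs, targets in val_loader:
                val_loss += loss_fn(model(inputs), targets).item()
        print(f"epoch {epoch + 1}: val_loss = {val_loss / len(val_loader):.4f}")
```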

3. Model Evaluation

Model evaluation assesses whether the trained model meets the desired criteria and is ready for deployment or requires further refinement.
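For example, evaluation on a held-out test set could be sketched as follows. The mean absolute error used here is just one possible task-specific metric (assumed for a counting task), not a prescribed choice.

```python
import torch


def evaluate(model, test_loader):
    """Compute a task-specific metric (here: mean absolute error for counting)."""
    model.eval()
    abs_errors = []
    with torch.no_grad():
        for inputs, targets in test_loader:
            predictions = model(inputs)
            abs_errors.append((predictions - targets).abs().mean().item())
    return {"mae": sum(abs_errors) / len(abs_errors)}
```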

Inference

Inference is the process of using a thoroughly trained and tested deep learning model to make predictions on new, unseen data. This is the step in which the model is finally used to support the analysis of EM data.

1. Define Data

To perform inference with a trained model, you first need to define the data you want the model to make predictions on. This could be a specific set of EM data that you wish to analyze. The data can be provided as a single file for prediction or as a folder containing multiple files, allowing for automatic processing of all the included data. This flexibility ensures the model can handle both individual cases and larger datasets efficiently.
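A simple way to support both cases is sketched below; the accepted file extensions and function name are assumptions for this example.

```python
from pathlib import Path


def collect_inference_files(path: str, extensions=(".tif", ".png")):
    """Return a list of image files: either the single given file or all
    matching files inside the given folder."""
    p = Path(path)
    if p.is_file():
        return [p]
    return sorted(f for f in p.iterdir() if f.suffix.lower() in extensions)
```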

2. Choose Model

The EM specialist is responsible for selecting a previously trained model for inference. It is essential that the model has been thoroughly evaluated according to the criteria provided by the DL expert and that the evaluation results are promising. Only models with strong performance, as indicated by the evaluation metrics, should be used to make predictions, so that the results are reliable and accurate.
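In practice, choosing a model usually means loading a saved checkpoint, as in the sketch below. The checkpoint layout (a dictionary with `state_dict` and `metrics` entries) is an assumption for illustration, not the playground's actual format.

```python
import torch


def load_trained_model(model, checkpoint_path: str):
    """Load previously trained weights into a model instance for inference."""
    # Assumption: the checkpoint is a dict storing the weights and, optionally,
    # the evaluation metrics computed during model evaluation.
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(checkpoint["state_dict"])
    print("stored evaluation metrics:", checkpoint.get("metrics", "n/a"))
    model.eval()
    return model
```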

3. Make Prediction

The model will be used to make predictions on the provided data. However, it's important to remember that no trained model is perfect, and human oversight remains essential. The results generated by the model should always be carefully checked for plausibility to ensure accuracy and reliability.
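Prediction itself might be sketched as follows; the data loader is assumed to yield input batches only, and the returned predictions are meant to be inspected by a human before they are used for analysis.

```python
import torch


def predict(model, data_loader):
    """Run the trained model on new data; the results still require a human
    plausibility check before they are used for analysis."""
    model.eval()
    predictions = []
    with torch.no_grad():
        for inputs in data_loader:
            predictions.append(model(inputs))
    return torch.cat(predictions)
```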

BibTeX

BibTex Code Here