Advanced Image Processing


Image analysis is a powerful tool in cell biology for collecting quantitative measurements in time and space. Because microscopy can easily produce terabytes of research data, accurate and automated analysis methods are key to extracting the relevant information from such large image collections.

High-Performance Image Computation

Cell Nuclei Segmentation

Cell nuclei segmentation is typically the first critical step in microscopy image analysis. Accurate nuclei segmentation enables multiple downstream analyses, including cell-type classification, cell counting, and cell tracking, all of which provide valuable information for researchers.

We developed a Mask Region-based Convolutional Neural Network (Mask RCNN)-based method for nuclei segmentation. Mask RCNN [1] is a state-of-the-art object segmentation framework that identifies not only the location of each object but also its segmented mask.

schematic of key components of computational technique

Top: Key components of the Mask RCNN include a backbone network, a region proposal network, an object classification module, a bounding box regression module, and a mask segmentation module. Bottom: Mask RCNN post-processing. Nuclei are often connected and need to be split; we apply a distance-transform-based watershed to split them.
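
As a minimal sketch of this post-processing step, the snippet below splits touching nuclei in a binary mask with a distance-transform-seeded watershed using scikit-image; the function name and the `min_distance` parameter are illustrative choices, not our production code.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_nuclei(binary_mask, min_distance=10):
    """Split connected nuclei with a distance-transform-seeded watershed."""
    # Distance from each foreground voxel to the nearest background voxel.
    distance = ndi.distance_transform_edt(binary_mask)
    # Local maxima of the distance map give roughly one seed per nucleus.
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    markers = np.zeros(binary_mask.shape, dtype=np.int32)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flood the inverted distance map from the seeds; watershed boundaries
    # fall along the narrow necks where nuclei touch.
    return watershed(-distance, markers, mask=binary_mask)
```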

Example 1: Nuclei segmentation of an adult worm

{"preview_thumbnail":"/sites/default/files/styles/video_embed_wysiwyg_preview/public/video_thumbnails/VchE7sP1zqo.jpg?itok=dgGA1qt8","video_url":"https://youtu.be/VchE7sP1zqo","settings":{"responsive":1,"width":"854","height":"480","autoplay":0},"settings_summary":["Embedded Video (Responsive)."]}

  • Triple-view line confocal imaging of an adult worm. Sample size is ~870 µm x 53 µm x 48 µm. Manual segmentation of all nuclei (n=2136) took several days to weeks.
  • With our Mask RCNN-based nuclei segmentation model, segmenting all nuclei took < 1 hour on a single NVIDIA Quadro P6000 GPU.
  • Compared against the manually segmented nuclei, the Mask RCNN-based segmentation model achieved 94.42% accuracy.

Example 2: Nuclei segmentation of C. elegans embryos

Mask RCNN-based nuclei segmentation can be utilized for cell counting throughout the entire period of embryogenesis. Here, we integrated nuclei segmentation into a cell-tracking system to map the growth and migration of every cell in a live, developing worm embryo from fertilization to maturity.

Example 3: Evaluation of image quality across imaging systems

To quantitatively compare three imaging systems, we used Mask RCNN to segment and count nuclei in 15 worm embryos. The three systems were single-view light-sheet imaging (raw), single-view light-sheet imaging followed by a one-step deep learning (DL) prediction (one-step DL), and single-view light-sheet imaging followed by a two-step DL prediction (two-step DL). For the C. elegans embryo, the exact number of nuclei is known, as the positions and divisions of all cells were manually observed and scored by John Sulston using differential interference contrast (DIC) microscopy. Against the Sulston ground truth, the raw single view recovered fewer than half of all nuclei. The two-step DL prediction fared much better, capturing the majority of the nuclei and outperforming the one-step DL prediction.
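
As a minimal illustration of this comparison, counting nuclei in a labeled segmentation reduces to counting distinct nonzero labels; the variable names below are illustrative.

```python
import numpy as np

def count_nuclei(label_volume):
    """Count distinct nuclei in a labeled segmentation (0 = background)."""
    return np.count_nonzero(np.unique(label_volume))

# e.g., compare each system's count against the known lineage count:
# counts = {name: count_nuclei(seg) for name, seg in segmentations.items()}
```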

lateral slice through c. elegans embryo

Left: Lateral slices through a C. elegans embryo showing GFP-tagged histone expression; from left: raw single-view light-sheet data, after one-step DL prediction, and after two-step DL prediction. Right: Number of nuclei segmented by the three imaging systems.

Image Stitching

We developed an image-stitching package that allows simple and efficient alignment of multi-tile, multi-view, and multi-channel image datasets acquired with light-sheet microscopes. The package supports datasets ranging from megabytes up to the terabytes produced when imaging cleared tissue samples with light-sheet microscopy.
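
The core operation in stitching is estimating the true offset between neighboring tiles from their overlap. Below is a minimal sketch using phase correlation from scikit-image; the horizontal two-tile layout and the `nominal_overlap` parameter are illustrative assumptions, not the package's actual interface.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def refine_tile_offset(tile_left, tile_right, nominal_overlap):
    """Refine the stage-reported offset between two horizontally adjacent
    tiles by phase-correlating their overlapping strips."""
    strip_a = tile_left[:, -nominal_overlap:]   # right edge of the left tile
    strip_b = tile_right[:, :nominal_overlap]   # left edge of the right tile
    # Subpixel translation of strip_b relative to strip_a.
    shift, error, _ = phase_cross_correlation(strip_a, strip_b, upsample_factor=10)
    return shift, error  # (dy, dx) correction to apply before blending
```

In a full pipeline, the pairwise offsets for all adjacent tiles would be solved jointly for a globally consistent placement before blending.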

Image data tiles

Top: Example of 2 x 7 image data tiles before stitching. Bottom: Stitched image data.

Rapid Image Deconvolution and Multiview Fusion

The contrast and resolution of images obtained with optical microscopes can be improved by deconvolution and by computational fusion of multiple views of the same sample [2]. Because these methods are computationally expensive for large datasets, we designed several software pipelines for rapid image deconvolution and multiview fusion, tailored to different applications.
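
As a small, self-contained illustration of the basic operation these pipelines scale up, the sketch below runs Richardson-Lucy deconvolution on a synthetic volume with scikit-image; the Gaussian PSF and toy object are stand-ins, since real pipelines use measured PSFs.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import restoration

def gaussian_psf(shape=(9, 9, 9), sigma=1.5):
    """Toy 3D Gaussian PSF (real pipelines use measured PSFs)."""
    grids = np.meshgrid(*[np.arange(s) - s // 2 for s in shape], indexing="ij")
    psf = np.exp(-sum(g ** 2 for g in grids) / (2 * sigma ** 2))
    return psf / psf.sum()

psf = gaussian_psf()
volume = np.zeros((32, 32, 32))
volume[12:20, 12:20, 12:20] = 1.0                    # a toy object
blurred = ndi.convolve(volume, psf, mode="constant") # simulate optical blur
deconvolved = restoration.richardson_lucy(blurred, psf, num_iter=20)
```

Joint-view deconvolution, as in pipeline 1 below, generalizes this idea by alternating the update between two registered views of the same sample [2].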

Pipeline 1: Joint-view Deconvolution on Cleared-tissue Datasets

schematic of joint view deconvolution of image

Pipeline of joint-view deconvolution on large, cleared-tissue data imaged with dual-view light-sheet microscopy (diSPIM). Raw data acquired by the cleared-tissue diSPIM are saved as multiple TIFF files (step 1). The XY slices are re-organized and re-saved as TIFF stacks, each corresponding to a distinct spatial tile/color/view (step 2). Tiles for each color/view are then stitched to reassemble blurred images of the sample (step 3). The TIFF stacks at each color and view are deskewed, interpolated, rotated, cropped, and resaved as TIFF files (step 4). Files are down-sampled and coarsely registered with a global transformation matrix (step 5). The coarsely registered files are then split into multiple subvolumes (step 6). Fine registration and deconvolution are performed on the paired subvolumes (step 7). Finally, stitching all deconvolved subvolumes yields the final reconstruction (step 8).
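
Steps 6-8 follow a generic split-process-restitch pattern. Below is a hedged sketch of that pattern; the Z-only chunking, chunk size, and averaging-based blending are simplifications of what the real pipeline does.

```python
import numpy as np

def process_in_chunks(volume, process, chunk=256, overlap=32):
    """Apply `process` to overlapping Z-chunks and average the overlaps."""
    out = np.zeros(volume.shape, dtype=np.float64)
    weight = np.zeros(volume.shape, dtype=np.float64)
    for z0 in range(0, volume.shape[0], chunk - overlap):
        z1 = min(z0 + chunk, volume.shape[0])
        out[z0:z1] += process(volume[z0:z1])   # e.g., per-chunk deconvolution
        weight[z0:z1] += 1.0                   # track overlap coverage
        if z1 == volume.shape[0]:
            break
    return out / weight                        # average in the overlap regions
```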

 

Table 1: Computation time on a pair of 3800 x 3400 x 1200 volumes (28 GB)

Processing Type                        | Single Workstation Time (hr) | Biowulf Cluster Time (min)
Stitching Tiles                        | 0.5                          | 15
Deskew + Interpolation + Rotation      | 4                            | 20
Subvolume Registration + Deconvolution | 7                            | 30
Stitching Subvolumes                   | 5                            | 15
Combined Processing Time               | ~17                          | 90

Pipeline 2: Single-view Deconvolution on Cleared-tissue Datasets

schematic of single view deconvolution of image

Pipeline of single-view deconvolution on large, cleared-tissue data imaged with diSPIM. Raw data acquired with the diSPIM are saved as multiple TIFF files (step 1). The XY slices are re-organized and re-saved as TIFF stacks, each corresponding to a distinct spatial tile/color (step 2). Tiles for each color are then stitched to reassemble blurred images of the sample (step 3). The TIFF stacks at each color are deskewed, interpolated, rotated, cropped, and resaved as TIFF files (step 4). The TIFF stacks are then split into multiple subvolumes (step 5). Deconvolution is performed on the subvolumes (step 6). Finally, stitching all deconvolved subvolumes yields the final reconstruction (step 7).

Table 2: Computation time on a 3800 x 3400 x 1200 dataset (28 GB)

Processing Type                   | Single Workstation Time (hr) | Biowulf Cluster Time (min)
Stitching Tiles                   | 0.5                          | 15
Deskew + Interpolation + Rotation | 2                            | 20
Subvolume Deconvolution           | 6                            | 20
Stitching Subvolumes              | 5                            | 15
Combined Processing Time          | ~16                          | 65

Pipeline 3: Joint-view Deconvolution on Small Time-Series Data

We also developed a registration and joint-view deconvolution package for small datasets (no stitching or splitting required) with multiple time points.

zebrafish embryo

Example images of zebrafish embryos expressing Lyn-eGFP. Target images are shown in red, source images in green, and the overlay in yellow. Images are maximum intensity projections of lateral views.

Table 3: Computation time for a 1020 x 2048 x 100 volume (400 MB), 300 time points, 2 colors

Processing Type                                          | Single Workstation Time | Biowulf Cluster Time (min)
Subvolume Registration + Deconvolution (per time point)  | 4 min                   | 4
Combined Processing Time                                 | 300 x 4 x 2 = 40 hr     | 120*

*With multiple available GPUs, all deconvolution jobs could be finished within 2 hours.
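
Because each (time point, color) job is independent, the work parallelizes trivially. Below is a hedged sketch using Python's multiprocessing as a stand-in for cluster job submission (e.g., on Biowulf); `deconvolve_timepoint` is a hypothetical worker, not part of our package.

```python
from multiprocessing import Pool

def deconvolve_timepoint(job):
    """Hypothetical worker: load one (time point, color) volume,
    register and deconvolve it, and save the result (I/O omitted)."""
    t, color = job
    # ... load, register + deconvolve, save ...
    return job

if __name__ == "__main__":
    jobs = [(t, c) for t in range(300) for c in range(2)]  # 600 independent jobs
    with Pool(processes=8) as pool:
        pool.map(deconvolve_timepoint, jobs)
```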

Machine Learning for Image Denoising, Resolution Enhancement, and Segmentation

Image Denoising and Resolution Enhancement

For super-resolution microscopy applications, we use three-dimensional residual channel attention networks (3D-RCAN) [3]. We extended the original RCAN to handle 3D images; the resulting method matches or exceeds the performance of previous networks in denoising fluorescence microscopy data, and we have applied it to super-resolution imaging over thousands of image volumes (tens of thousands of images). The method also extends resolution, with RCAN providing better resolution enhancement than alternative networks, especially along the axial dimension. Finally, by training RCAN models on stimulated emission depletion microscopy (STED) and expansion-microscopy ground truth across multiple fixed- and live-cell samples, we demonstrate a four- to five-fold improvement in volumetric resolution.
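
For orientation, the sketch below shows the core residual channel attention block in a 3D form, following the original RCAN design; the channel width and reduction ratio are illustrative, and PyTorch is used for brevity, so this is not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Squeeze-and-excitation-style attention over channels of a 3D volume."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                         # global channel descriptor
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                    # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.attn(x)        # rescale each channel by its weight

class RCAB3D(nn.Module):
    """Residual channel attention block: conv-ReLU-conv, attention, skip."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention3D(channels),
        )

    def forward(self, x):
        return x + self.body(x)        # residual connection

# x = torch.randn(1, 32, 16, 64, 64)  # (batch, channels, Z, Y, X)
# y = RCAB3D(32)(x)                   # output has the same shape as x
```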

 

RCAN denoising of super-resolution data.

{"preview_thumbnail":"/sites/default/files/styles/video_embed_wysiwyg_preview/public/video_thumbnails/058lZs62J-8.jpg?itok=vpKd2D2z","video_url":"https://youtu.be/058lZs62J-8","settings":{"responsive":1,"width":"854","height":"480","autoplay":0},"settings_summary":["Embedded Video (Responsive)."]}

3D-RCAN denoising of dual-color imaging of mitochondria (magenta) and lysosomes (cyan) in live U2OS cells. Left: raw dual-color image (noisy). Right: 3D-RCAN output. The deep-learning-denoised image allows quantification and tracking of mitochondria and of mitochondria–lysosome interactions, which is not possible in the raw images.

{"preview_thumbnail":"/sites/default/files/styles/video_embed_wysiwyg_preview/public/video_thumbnails/CqyvdH5qOAk.jpg?itok=cINKNEPR","video_url":"https://youtu.be/CqyvdH5qOAk","settings":{"responsive":1,"width":"854","height":"480","autoplay":0},"settings_summary":["Embedded Video (Responsive)."]}

3D-RCAN enables transformation of confocal images into STED images. A model was trained with pairs of confocal and STED images. In the video, the raw resonant confocal data (left) show poorly defined nuclei and chromosomes; these structures are clearly resolved in the RCAN prediction (right).

{"preview_thumbnail":"/sites/default/files/styles/video_embed_wysiwyg_preview/public/video_thumbnails/SFa45nAoMjU.jpg?itok=HjC9O7V7","video_url":"https://youtu.be/SFa45nAoMjU","settings":{"responsive":1,"width":"854","height":"480","autoplay":0},"settings_summary":["Embedded Video (Responsive)."]}

3D-RCAN enables transformation of instant structured illumination microscopy (iSIM) images into expansion-microscopy images. The dynamics and organization of the actin and microtubule cytoskeletons in Jurkat T cells are much better resolved in the RCAN-predicted expansion images.

  1. Wu, Y., Han, X., Su, Y. et al. Multiview confocal super-resolution microscopy. Nature 600, 279–284 (2021). https://doi.org/10.1038/s41586-021-04110-0 
  2. Guo, M., Li, Y., Su, Y. et al. Rapid image deconvolution and multiview fusion for optical microscopy. Nat Biotechnol 38, 1337–1346 (2020). https://doi.org/10.1038/s41587-020-0560-x 
  3. Chen, J., Sasaki, H., Lai, H. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat Methods 18, 678–687 (2021). https://doi.org/10.1038/s41592-021-01155-x