
Virtual Fluorescence

Jul 29, 2020. | By: Colin L. Cooke, Fanjie Kong, Amey Chaware, Kevin C. Zhou, Kanghyun Kim, Rong Xu, D. Michael Ando, Samuel J. Wang, Pavan Chandra Konda and Roarke Horstmeyer.

Introducing a new method of data-driven microscope design for virtual fluorescence microscopy. Our results show that by including a model of illumination within the first layers of a deep convolutional neural network, it is possible to learn task-specific LED patterns that substantially improve the ability to infer fluorescence image information from unstained transmission microscopy images. We validated our method on two different experimental setups, with different magnifications and different sample types, and observed a consistent improvement in performance over conventional illumination methods. Additionally, to understand the importance of learned illumination for the inference task, we varied the dynamic range of the fluorescent image targets (from one to seven bits) and showed that the margin of improvement for learned patterns grew with the information content of the target. This work demonstrates the power of programmable optical elements in enabling better machine learning performance and in providing physical insight into the next generation of machine-controlled imaging systems.
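To make the idea concrete, here is a minimal PyTorch sketch of a learned-illumination first layer of this kind. The class names, the 25-LED array size, and the small stand-in CNN are illustrative assumptions, not the paper's actual architecture; the key point is that the LED brightnesses are ordinary trainable parameters optimized jointly with the rest of the network.

```python
import torch
import torch.nn as nn


class LearnedIllumination(nn.Module):
    """Physical layer: learnable LED brightness weights.

    Input is a stack of transmission images, one per LED in the array
    (shape [batch, num_leds, H, W]). Because image formation under
    incoherent illumination is linear in LED brightness, a weighted sum
    of single-LED images models a single multi-LED exposure.
    """

    def __init__(self, num_leds):
        super().__init__()
        # Initialize to uniform illumination; training reshapes the pattern.
        self.weights = nn.Parameter(torch.ones(num_leds) / num_leds)

    def forward(self, led_stack):
        # Clamp to non-negative brightnesses (LEDs cannot emit negative light).
        w = torch.relu(self.weights)
        # Weighted sum over the LED axis, then restore a channel dimension.
        return torch.einsum('l,blhw->bhw', w, led_stack).unsqueeze(1)


class VirtualStain(nn.Module):
    """Learned illumination followed by a CNN that predicts fluorescence."""

    def __init__(self, num_leds=25):
        super().__init__()
        self.illumination = LearnedIllumination(num_leds)
        # Stand-in inference network; the paper's CNN is much deeper.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, led_stack):
        return self.cnn(self.illumination(led_stack))
```

Training such a model end-to-end on pairs of single-LED image stacks and registered fluorescence targets lets gradient descent choose the illumination pattern jointly with the inference weights; after training, the learned brightnesses can be programmed directly onto the physical LED array so that only one optimally illuminated image needs to be captured at inference time.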

Click here to visit the project page!

