COL@Duke - Projects

Our Projects


Parallelized Diffuse Correlation Spectroscopy

Feb 22, 2021. | By: Wenhui Liu, Ruobing Qian, Shiqi Xu, Pavan Chandra Konda, Joakim Jönsson, Mark Harfouche, Dawid Borycki, Colin Cooke, Edouard Berrocal, Qionghai Dai, Haoqian Wang, Roarke W. Horstmeyer

Diffuse correlation spectroscopy (DCS) is a well-established method that measures rapid changes in scattered coherent light to identify blood flow and functional dynamics within tissue. While its sensitivity to minute scatterer displacements leads to a number of unique advantages, conventional DCS systems become photon-limited when attempting to probe deep into tissue, which leads to long measurement windows (~1 sec). Here, we present a high-sensitivity DCS system with 1024 parallel detection channels integrated within a single-photon avalanche diode (SPAD) array, and demonstrate the ability to detect mm-scale perturbations up to 1 cm deep within a tissue-like phantom at up to 33 Hz sampling rate. We also show that this highly parallelized strategy can measure the human pulse at high fidelity and detect behaviorally-induced physiological variations from above the human prefrontal cortex. By greatly improving detection sensitivity and speed, highly parallelized DCS opens up new experiments for high-speed biological signal measurement.
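The parallel speedup comes from averaging the normalized intensity autocorrelation g2(τ) across all 1024 SPAD channels, rather than integrating a single channel for longer. The sketch below illustrates that averaging on simulated data; it is a toy NumPy illustration with names of our own choosing, not the authors' processing pipeline.

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)> / <I>^2."""
    I = np.asarray(intensity, dtype=float)
    mean_sq = I.mean() ** 2
    return np.array([
        np.mean(I[:len(I) - lag] * I[lag:]) / mean_sq
        for lag in range(1, max_lag + 1)
    ])

def parallel_g2(channels, max_lag):
    """Average per-channel g2 curves; noise shrinks roughly as sqrt(N channels)."""
    curves = np.stack([g2(ch, max_lag) for ch in channels])
    return curves.mean(axis=0)

rng = np.random.default_rng(0)
# 1024 simulated intensity traces with speckle-like (exponential) statistics;
# for uncorrelated noise, g2 sits flat near 1 at all nonzero lags
channels = rng.exponential(scale=1.0, size=(1024, 2000))
curve = parallel_g2(channels, max_lag=10)
```

Averaging 1024 independent curves reduces the estimator noise by roughly a factor of 32, which is what shortens the measurement window from ~1 s toward the 33 Hz regime.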

Click here to visit the project page; for the video presentation, click here.

Virtual Fluorescence

Jul 29, 2020. | By: Colin L. Cooke, Fanjie Kong, Amey Chaware, Kevin C. Zhou, Kanghyun Kim, Rong Xu, D. Michael Ando, Samuel J. Wang, Pavan Chandra Konda and Roarke Horstmeyer.

We introduce a new method of data-driven microscope design for virtual fluorescence microscopy. Our results show that by including a model of illumination within the first layers of a deep convolutional neural network, it is possible to learn task-specific LED patterns that substantially improve the ability to infer fluorescence image information from unstained transmission microscopy images. We validated our method on two different experimental setups, with different magnifications and different sample types, and observed a consistent improvement in performance compared to conventional illumination methods. Additionally, to understand the importance of learned illumination for the inference task, we varied the dynamic range of the fluorescent image targets (from one to seven bits) and showed that the margin of improvement for learned patterns increased with the information content of the target. This work demonstrates the power of programmable optical elements to enable better machine learning performance and to provide physical insight into the next generation of machine-controlled imaging systems.
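The "model of illumination within the first layers" can be read as a physical layer: image formation under LED-array illumination is linear in intensity, so the image captured under any LED pattern is a weighted sum of single-LED images, and those weights can be trained like ordinary network weights. A minimal NumPy sketch of that idea (our own illustration; the names and shapes are assumptions, not the paper's code):

```python
import numpy as np

def physical_layer(led_stack, weights):
    """Simulate the sensor image formed under a weighted LED pattern.

    led_stack: (n_leds, H, W) images, one per individual LED.
    weights:   (n_leds,) LED brightnesses -- the learnable illumination pattern.
    Because intensities add linearly, the composite image is a weighted sum,
    which is why the pattern can sit as the first network layer and be
    optimized by backpropagation.
    """
    w = np.clip(weights, 0.0, None)      # physical LEDs cannot emit negative light
    return np.tensordot(w, led_stack, axes=1)

rng = np.random.default_rng(1)
stack = rng.random((25, 32, 32))         # e.g. a 5x5 LED array
pattern = np.zeros(25)
pattern[12] = 1.0                        # turn on the center LED only
img = physical_layer(stack, pattern)     # reproduces the single-LED image
```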

Click here to visit the project page!

Overlapped Imaging

Jul 29, 2020. | By: Xing Yao, Haoran Xi, Kevin C. Zhou, Amey Chaware, Colin Cooke, Yuting Li, Pavan Chandra Konda and Roarke Horstmeyer

It is challenging and time-consuming for trained technicians to visually examine blood smears to reach a diagnostic conclusion. A good example is the identification of infection with the malaria parasite, which can take upwards of ten minutes of concentrated searching per patient. While digital microscope image capture and analysis software can help automate this process, the limited field-of-view of high-resolution objective lenses still makes this a challenging task. Here, we present an imaging system that simultaneously captures multiple images across a large effective field-of-view, overlaps these images onto a common detector, and then automatically classifies the overlapped image’s contents to increase malaria parasite detection throughput. We show that malaria parasite classification accuracy decreases approximately linearly as a function of image overlap number. For our experimentally captured data, we observe a classification accuracy decrease from approximately 0.9-0.95 for a single non-overlapped image, to approximately 0.7 for a 7X overlapped image. We demonstrate that it is possible to overlap seven unique images onto a common sensor within a compact, inexpensive microscope hardware design, utilizing off-the-shelf micro-objective lenses, while still offering relatively accurate classification of the presence or absence of the parasite within the acquired dataset. With additional development, this approach may offer a 7X potential speed-up for automated disease diagnosis from microscope image data over large fields-of-view.
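The multiplexing step itself is simple: incoherent intensities add, so the 7X overlapped measurement is the per-pixel sum of the individual sub-fields, and the classifier must decide whether a parasite appears anywhere in the summed frame. A toy NumPy sketch of the measurement model (an illustration under our own naming, not the lab's code):

```python
import numpy as np

def overlap_images(images):
    """Model N sub-fields of view multiplexed onto one common detector.

    Each sub-image is relayed by its own micro-objective onto the same
    sensor; intensities add, so the measurement is a per-pixel sum. The
    summation is what trades classification accuracy for throughput.
    """
    stack = np.stack(images).astype(float)
    return stack.sum(axis=0)

rng = np.random.default_rng(2)
fields = [rng.random((64, 64)) for _ in range(7)]   # seven sub-fields, as in the paper
measurement = overlap_images(fields)                # one 7x-overlapped frame
```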

Click here to visit the project page!

Multi-Element Microscope Optimization

Jul 28, 2020. | By: Kanghyun Kim, Pavan Chandra Konda, Colin L. Cooke, Ron Appel and Roarke Horstmeyer.

Standard microscopes offer a variety of settings to help improve the visibility of different specimens to the end microscope user. Increasingly, however, digital microscopes are used to capture images for automated interpretation by computer algorithms (e.g., for feature classification, detection or segmentation), often without any human involvement. In this work, we investigate an approach to jointly optimize multiple microscope settings, together with a classification network, for improved performance with such automated tasks. We explore the interplay between optimization of programmable illumination and pupil transmission, using experimentally imaged blood smears for automated malaria parasite detection, to show that multi-element “learned sensing” outperforms its single-element counterpart. While not necessarily ideal for human interpretation, the network’s resulting low-resolution microscope images (20X-comparable) offer a machine learning network sufficient contrast to match the classification performance of corresponding high-resolution imagery (100X-comparable), pointing toward accurate automation over large fields-of-view.
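The second optimizable element, pupil transmission, acts in the Fourier plane: the microscope's pupil multiplies the sample's spatial spectrum by a transmission mask, and making that mask trainable alongside the LED pattern gives the multi-element design above. The NumPy sketch below is our own toy illustration of that filtering step, not the paper's implementation:

```python
import numpy as np

def apply_pupil(image, pupil):
    """Filter an image's spatial spectrum with a (trainable) pupil mask.

    The spectrum is centered with fftshift so the pupil can be drawn as a
    centered aperture; an all-ones pupil leaves the image unchanged.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))

n = 32
yy, xx = np.mgrid[-n // 2 : n // 2, -n // 2 : n // 2]
na_pupil = (np.hypot(yy, xx) <= n // 4).astype(float)  # NA-limited circular stop
rng = np.random.default_rng(4)
sample = rng.random((n, n))
low_res = apply_pupil(sample, na_pupil)   # band-limited, "20X-comparable" image
```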

Click here to visit the project page!

Towards an Intelligent Microscope

Mar 30, 2020. | By: Amey Chaware, Colin L. Cooke, Kanghyun Kim and Roarke Horstmeyer.

Recent machine learning techniques have dramatically changed how we process digital images. However, the way in which we capture images is still largely driven by human intuition and experience. This restriction is in part due to the many available degrees of freedom that alter the image acquisition process (lens focus, exposure, filtering, etc.). Here we focus on one such degree of freedom - illumination within a microscope - which can drastically alter the information captured by the image sensor. We present a reinforcement learning system that adaptively explores optimal patterns to illuminate specimens for immediate classification. The agent uses a recurrent latent space to encode a large set of variably-illuminated samples and illumination patterns. We train our agent using a reward that balances classification confidence with image acquisition cost. By synthesizing knowledge over multiple snapshots, the agent can classify on the basis of all previous images with higher accuracy than from naively illuminated images, thus demonstrating a smarter way to physically capture task-specific information.

Click here to visit the project page!
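A reward that "balances classification confidence with image acquisition cost" can be sketched as a simple difference between the two. The paper's exact reward and coefficients are not reproduced here, so the functional form and the cost weight below are illustrative assumptions only:

```python
def acquisition_reward(confidence, n_images, cost_per_image=0.05):
    """Reward trading classification confidence against acquisition cost.

    confidence:     classifier probability for the predicted class (0..1).
    n_images:       snapshots captured so far.
    cost_per_image: hypothetical penalty per extra snapshot (not the
                    paper's value); it discourages needless acquisitions.
    """
    return confidence - cost_per_image * n_images

# The agent is encouraged to stop once an extra snapshot no longer pays
# for the small confidence gain it buys:
early = acquisition_reward(0.80, n_images=2)   # confident after 2 images
late = acquisition_reward(0.85, n_images=5)    # slightly more confident, but costly
```

With these numbers, stopping early earns the higher reward (0.70 vs. 0.60), which is exactly the trade-off the agent learns to navigate.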

Learned Sensing

Mar 30, 2020. | By: Alex Muthumbi, Amey Chaware, Kanghyun Kim, Kevin C. Zhou, Pavan Chandra Konda and Roarke Horstmeyer.

Since its invention, the microscope has been optimized for interpretation by a human observer. With the recent development of deep learning algorithms for automated image analysis, there is now a clear need to re-design the microscope’s hardware for specific interpretation tasks. To increase the speed and accuracy of automated image classification, this work presents a method to co-optimize how a sample is illuminated in a microscope, along with a pipeline to automatically classify the resulting image, using a deep neural network. By adding a “physical layer” to a deep classification network, we are able to jointly optimize for specific illumination patterns that highlight the most important sample features for the particular learning task at hand, which may not be obvious under standard illumination. We demonstrate how our learned sensing approach for illumination design can automatically identify malaria-infected cells with up to 5-10% greater accuracy than standard and alternative microscope lighting designs. We show that this joint hardware-software design procedure generalizes to offer accurate diagnoses for two different blood smear types, and experimentally show how our new procedure can translate across different experimental setups while maintaining high accuracy.

Click here to visit the project page!

Deep Prior Diffraction Tomography

Mar 30, 2020. | By: Kevin C. Zhou and Roarke Horstmeyer

We present a tomographic imaging technique, termed Deep Prior Diffraction Tomography (DP-DT), to reconstruct the 3D refractive index (RI) of thick biological samples at high resolution from a sequence of low-resolution images collected under angularly varying illumination. DP-DT processes the multi-angle data using a phase retrieval algorithm that is extended by a deep image prior (DIP), which reparameterizes the 3D sample reconstruction with an untrained, deep generative 3D convolutional neural network (CNN). We show that DP-DT effectively addresses the missing cone problem, which otherwise degrades the resolution and quality of standard 3D reconstruction algorithms. As DP-DT does not require pre-captured data or pre-training, it is not biased towards any particular dataset. Hence, it is a general technique that can be applied to a wide variety of 3D samples, including scenarios in which large datasets for supervised training would be infeasible or expensive. We applied DP-DT to obtain 3D RI maps of bead phantoms and complex biological specimens, both in simulation and experiment, and show that DP-DT produces higher-quality results than standard regularization techniques. We further demonstrate the generality of DP-DT using two different scattering models: the first Born approximation and the multi-slice model. Our results point to the potential benefits of DP-DT for other 3D imaging modalities, including X-ray computed tomography, magnetic resonance imaging, and electron microscopy.
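At its core, DP-DT minimizes a data-fidelity objective of the form ||A(G_θ(z)) - y||² over the weights θ of an untrained generator network G, where A is the forward scattering model and y the measured data. The NumPy sketch below evaluates such an objective with a crude Fourier-support forward model standing in for A; it is our own simplified illustration (the generator is replaced by a candidate volume), not the authors' code:

```python
import numpy as np

def dp_dt_loss(reconstruction, measured, support_mask):
    """Data-fidelity term of a DP-DT-style objective (illustrative only).

    The forward model keeps only the Fourier components inside the
    measurable support; the zeroed-out region plays the role of the
    missing cone. `reconstruction` stands in for the generator output
    G_theta(z), which DP-DT would optimize through the network weights.
    """
    predicted = np.fft.fftn(reconstruction) * support_mask
    return np.sum(np.abs(predicted - measured) ** 2)

rng = np.random.default_rng(3)
truth = rng.random((8, 8, 8))                 # ground-truth 3D volume
mask = np.ones((8, 8, 8))
mask[:, :, 4:] = 0.0                          # crude "missing cone" stand-in
y = np.fft.fftn(truth) * mask                 # simulated measurement
loss_at_truth = dp_dt_loss(truth, y, mask)    # exact fit: zero loss
```

Because the generator is untrained, the only prior is the CNN architecture itself, which is what makes the method dataset-agnostic.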

Click here to visit the project page!

About

This page is an educational and research resource of the Computational Optics Lab at Duke University, with the goal of providing an open platform to share research at the intersection of deep learning and imaging system design.

Lab Address

Computational Optics Lab
Duke University
Fitzpatrick Center (CIEMAS) 2569
Durham, NC 27708