Journal Articles
6 articles found
1. Snapshot multispectral imaging through defocusing and a Fourier imager network
Authors: Xilin Yang, Michael John Fanous, Hanlong Chen, Ryan Lee, Paloma Casteleiro Costa, Yuhang Li, Luzhe Huang, Yijie Zhang, Aydogan Ozcan. Advanced Photonics Nexus, 2025, Issue 5, pp. 24-35 (12 pages).
Multispectral imaging, which simultaneously captures the spatial and spectral information of a scene, is widely used across diverse fields, including remote sensing, biomedical imaging, and agricultural monitoring. We introduce a snapshot multispectral imaging approach employing a standard monochrome image sensor with no additional spectral filters or customized components. Our system leverages the inherent chromatic aberration of wavelength-dependent defocusing as a natural source of physical encoding of multispectral information; this encoded image information is rapidly decoded via a deep learning-based multispectral Fourier imager network (mFIN). We experimentally tested our method with six illumination bands and demonstrated an overall accuracy of 98.25% for predicting the illumination channels at the input and achieved a robust multispectral image reconstruction on various test objects. This deep learning-powered framework achieves high-quality multispectral image reconstruction using snapshot image acquisition with a monochrome image sensor and could be useful for applications in biomedicine, industrial quality control, and agriculture, among others.
Keywords: computational imaging; multispectral imaging; deep learning; image reconstruction; Fourier imager network
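Note: the abstract above describes wavelength-dependent defocus as the physical encoder and a neural network (mFIN) as the decoder. The snippet below is a minimal NumPy sketch of the encoding step only, assuming Gaussian blurs as stand-ins for per-band defocus; the blur widths, band count, and normalization are illustrative placeholders, not the paper's optical model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative forward model: each illumination band is defocused by a
# different amount (approximated here with a Gaussian blur), and a monochrome
# sensor records their incoherent sum in a single snapshot.
def encode_snapshot(spectral_stack, blur_sigmas):
    """spectral_stack: (C, H, W) per-band scene; blur_sigmas: length-C defocus proxies."""
    assert spectral_stack.shape[0] == len(blur_sigmas)
    encoded = np.zeros(spectral_stack.shape[1:], dtype=np.float64)
    for band, sigma in zip(spectral_stack, blur_sigmas):
        encoded += gaussian_filter(band, sigma=sigma)  # band-dependent defocus
    return encoded / len(blur_sigmas)

# Example: six random bands, increasingly defocused with wavelength.
scene = np.random.rand(6, 128, 128)
snapshot = encode_snapshot(scene, blur_sigmas=[0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
print(snapshot.shape)  # (128, 128): the single monochrome frame a decoder network would see
```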
2. Fourier Imager Network (FIN): A deep neural network for hologram reconstruction with superior external generalization (Cited 14 times)
Authors: Hanlong Chen, Luzhe Huang, Tairan Liu, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2022, Issue 9, pp. 2225-2234 (10 pages).
Deep learning-based image reconstruction methods have achieved remarkable success in phase recovery and holographic imaging. However, the generalization of their image reconstruction performance to new types of samples never seen by the network remains a challenge. Here we introduce a deep learning framework, termed Fourier Imager Network (FIN), that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting unprecedented success in external generalization. FIN architecture is based on spatial Fourier transform modules that process the spatial frequencies of its inputs using learnable filters and a global receptive field. Compared with existing convolutional deep neural networks used for hologram reconstruction, FIN exhibits superior generalization to new types of samples, while also being much faster in its image inference speed, completing the hologram reconstruction task in ~0.04 s per 1 mm² of the sample area. We experimentally validated the performance of FIN by training it using human lung tissue samples and blindly testing it on human prostate, salivary gland tissue and Pap smear samples, proving its superior external generalization and image reconstruction speed. Beyond holographic microscopy and quantitative phase imaging, FIN and the underlying neural network architecture might open up various new opportunities to design broadly generalizable deep learning models in computational imaging and machine vision fields.
Keywords: generalization; holographic
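Note: the abstract describes FIN as built from spatial Fourier transform modules that filter the input's spatial frequencies with learnable weights, giving a global receptive field. Below is a minimal PyTorch sketch of that idea; the block structure, channel count, and initialization are assumptions for illustration and do not reproduce the published FIN architecture.

```python
import torch
import torch.nn as nn

class SpectralFilterBlock(nn.Module):
    """Minimal sketch of a learnable Fourier-domain filter with a global
    receptive field; sizes are illustrative, not the published FIN."""
    def __init__(self, channels, height, width):
        super().__init__()
        # One complex-valued learnable filter per channel over the rFFT grid.
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, dtype=torch.cfloat) * 0.02
        )

    def forward(self, x):                            # x: (B, C, H, W), real-valued
        spec = torch.fft.rfft2(x, norm="ortho")      # spatial frequencies of the input
        spec = spec * self.weight                    # learnable frequency-domain filtering
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

# Example: filter a batch of 64x64 hologram feature maps with 16 channels.
block = SpectralFilterBlock(channels=16, height=64, width=64)
y = block(torch.randn(2, 16, 64, 64))
print(y.shape)  # torch.Size([2, 16, 64, 64])
```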
3. Recurrent neural network-based volumetric fluorescence microscopy (Cited 9 times)
Authors: Luzhe Huang, Hanlong Chen, Yilin Luo, Yair Rivenson, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2021, Issue 4, pp. 620-635 (16 pages).
Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term as Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth-of-field of a 63×/1.4NA objective lens, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrated the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. We also demonstrated wide-field to confocal cross-modality image transformations using Recurrent-MZ framework and performed 3D image reconstruction of a sample using a few wide-field 2D fluorescence images as input, matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
Keywords: neural network; networks
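Note: Recurrent-MZ, as summarized above, feeds a small set of 2D wide-field planes through a recurrent convolutional network to infer the sample volume. The sketch below shows, with an assumed hand-rolled recurrent cell and layer sizes, how a variable-length sequence of planes can be absorbed into a single hidden state; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvRecurrentFusion(nn.Module):
    """Illustrative sketch of fusing a sequence of 2D wide-field planes with a
    convolutional recurrent cell, loosely following the Recurrent-MZ idea of
    aggregating a few axial scans; not the published architecture."""
    def __init__(self, in_ch=1, hidden_ch=16):
        super().__init__()
        self.update = nn.Conv2d(in_ch + hidden_ch, hidden_ch, 3, padding=1)
        self.decode = nn.Conv2d(hidden_ch, 1, 3, padding=1)

    def forward(self, planes):                        # planes: (B, T, 1, H, W)
        b, t, _, h, w = planes.shape
        state = planes.new_zeros(b, self.update.out_channels, h, w)
        for i in range(t):                            # recurrently absorb each axial plane
            state = torch.tanh(self.update(torch.cat([planes[:, i], state], dim=1)))
        return self.decode(state)                     # fused estimate at a target depth

# Example: three sparsely sampled axial planes of a 64x64 field of view.
model = ConvRecurrentFusion()
out = model(torch.randn(2, 3, 1, 64, 64))
print(out.shape)  # torch.Size([2, 1, 64, 64])
```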
4. All-optical image denoising using a diffractive visual processor (Cited 4 times)
Authors: Çağatay Işıl, Tianyi Gan, Fazil Onuralp Ardic, Koray Mentesoglu, Jagrit Digani, Huseyin Karaca, Hanlong Chen, Jingxi Li, Deniz Mengu, Mona Jarrahi, Kaan Akşit, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CSCD), 2024, Issue 3, pp. 429-445 (17 pages).
Image denoising, one of the essential inverse problems, targets to remove noise/artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250×λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image Field-of-View (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt and pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating at the terahertz spectrum. Owing to their speed, power-efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
Keywords: remove; rendering; holographic
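Note: the denoiser described above is an optical device made of passive diffractive layers, so code can only simulate it. The sketch below models phase-only layers separated by free-space propagation (angular spectrum method), which is a common way such diffractive processors are optimized in software; the layer count, pixel pitch, wavelength, and spacing are placeholder values rather than the fabricated THz design.

```python
import torch
import torch.nn as nn

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a complex field in free space with the angular spectrum method."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    transfer = torch.exp(1j * kz * distance) * (arg > 0)   # drop evanescent components
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)

class DiffractiveDenoiser(nn.Module):
    """Toy simulation of passive phase-only layers that could be trained to
    scatter noise modes outside the output field of view; all parameters here
    are placeholders, not the fabricated terahertz design."""
    def __init__(self, n=100, layers=3, wavelength=0.75e-3, dx=0.4e-3, gap=20e-3):
        super().__init__()
        self.phases = nn.ParameterList([nn.Parameter(torch.zeros(n, n)) for _ in range(layers)])
        self.wavelength, self.dx, self.gap = wavelength, dx, gap

    def forward(self, field):                      # field: complex (n, n) input wavefront
        for phase in self.phases:
            field = field * torch.exp(1j * phase)  # passive phase modulation at each layer
            field = angular_spectrum_propagate(field, self.wavelength, self.dx, self.gap)
        return field.abs() ** 2                    # intensity recorded at the output plane

model = DiffractiveDenoiser()
out = model(torch.ones(100, 100, dtype=torch.cfloat))
print(out.shape)  # torch.Size([100, 100])
```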
5. Subwavelength imaging using a solid-immersion diffractive optical processor (Cited 2 times)
Authors: Jingtian Hu, Kun Liao, Niyazi Ulas Dinc, Carlo Gigli, Bijie Bai, Tianyi Gan, Xurong Li, Hanlong Chen, Xilin Yang, Yuhang Li, Çağatay Işıl, Md Sadman Sakib Rahman, Jingxi Li, Xiaoyong Hu, Mona Jarrahi, Demetri Psaltis, Aydogan Ozcan. eLight, 2024, Issue 1, pp. 203-222 (20 pages).
Phase imaging is widely used in biomedical imaging, sensing, and material characterization, among other fields. However, direct imaging of phase objects with subwavelength resolution remains a challenge. Here, we demonstrate subwavelength imaging of phase and amplitude objects based on all-optical diffractive encoding and decoding. To resolve subwavelength features of an object, the diffractive imager uses a thin, high-index solid-immersion layer to transmit high-frequency information of the object to a spatially-optimized diffractive encoder, which converts/encodes high-frequency information of the input into low-frequency spatial modes for transmission through air. The subsequent diffractive decoder layers (in air) are jointly designed with the encoder using deep-learning-based optimization, and communicate with the encoder layer to create magnified images of input objects at its output, revealing subwavelength features that would otherwise be washed away due to the diffraction limit. We demonstrate that this all-optical collaboration between a diffractive solid-immersion encoder and the following decoder layers in air can resolve subwavelength phase and amplitude features of input objects in a highly compact design. To experimentally demonstrate its proof-of-concept, we used terahertz radiation and developed a fabrication method for creating monolithic multi-layer diffractive processors. Through these monolithically fabricated diffractive encoder-decoder pairs, we demonstrated phase-to-intensity (P→I) transformations and all-optically reconstructed subwavelength phase features of input objects (with linewidths of ~λ/3.4, where λ is the illumination wavelength) by directly transforming them into magnified intensity features at the output. This solid-immersion-based diffractive imager, with its compact and cost-effective design, can find wide-ranging applications in bioimaging, endoscopy, sensing and materials characterization.
Keywords: diffractive processors; solid immersion imaging; phase-to-intensity transformations
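Note: the key physical point in the abstract is that a high-index solid-immersion layer lets spatial frequencies above the free-space cutoff 1/λ propagate to the encoder instead of decaying evanescently. The short calculation below illustrates that cutoff argument with an assumed refractive index; the index and wavelength are placeholders, not the reported material parameters.

```python
# Back-of-the-envelope check of the solid-immersion idea: inside a medium of
# refractive index n the effective wavelength shrinks to lambda/n, so spatial
# frequencies up to n/lambda propagate instead of becoming evanescent in air.
wavelength = 0.75e-3          # illumination wavelength in metres (terahertz range, assumed)
n_solid = 2.0                 # assumed refractive index of the immersion layer

cutoff_air = 1.0 / wavelength             # propagating-frequency limit in air
cutoff_solid = n_solid / wavelength       # limit inside the high-index layer

print(f"smallest resolvable period in air:   {1 / cutoff_air * 1e3:.2f} mm")
print(f"smallest resolvable period in solid: {1 / cutoff_solid * 1e3:.2f} mm")
# The diffractive encoder then folds these higher frequencies into low-frequency
# modes that survive the air gap on the way to the decoder layers.
```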