Abstract: Optical coherence tomography (OCT), particularly Swept-Source OCT, is widely employed in medical diagnostics and industrial inspection owing to its high-resolution imaging capabilities. However, Swept-Source OCT 3D imaging often suffers from stripe artifacts caused by unstable light sources, system noise, and environmental interference, posing challenges to real-time processing of large-scale datasets. To address this issue, this study introduces a real-time reconstruction system that integrates stripe-artifact suppression with parallel computing on a graphics processing unit (GPU). The approach employs a frequency-domain filtering algorithm with adaptive suppression parameters, dynamically adjusted through an image quality evaluation function and optimized using a convolutional neural network for complex frequency-domain feature learning. Additionally, a GPU-integrated 3D reconstruction framework is developed, enhancing data throughput and real-time performance via a dual-queue decoupling mechanism. Experimental results demonstrate significant improvements in structural similarity (0.92), peak signal-to-noise ratio (31.62 dB), and stripe suppression ratio (15.73 dB) compared with existing methods. On the RTX 4090 platform, the proposed system achieved an end-to-end latency of 94.36 milliseconds, a frame rate of 10.3 frames per second, and a throughput of 121.5 million voxels per second, effectively suppressing artifacts while preserving image details and enhancing real-time 3D reconstruction performance.
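The abstract describes frequency-domain filtering of stripe artifacts but does not give the algorithm itself. The following is a minimal illustrative sketch (not the paper's method) of the general idea: horizontal stripes concentrate their energy along the central vertical line of the 2D Fourier spectrum, so a notch that damps that region while sparing the low-frequency core suppresses stripes with little effect on anatomy. The function name and parameters (`notch_width`, `strength`) are hypothetical.

```python
import numpy as np

def suppress_stripes(image, notch_width=2, strength=0.9):
    """Damp horizontal stripe artifacts via a frequency-domain notch filter.

    A pattern that varies only along rows has its spectral energy on the
    kx = 0 column of the shifted 2D spectrum; attenuating that column
    (except near DC) removes the stripes while keeping image content.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    cr, cc = rows // 2, cols // 2
    mask = np.ones((rows, cols))
    # Attenuate the central vertical band where stripe energy lives.
    mask[:, cc - notch_width:cc + notch_width + 1] *= (1.0 - strength)
    # Restore the low-frequency core so overall brightness is preserved.
    mask[cr - notch_width:cr + notch_width + 1,
         cc - notch_width:cc + notch_width + 1] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
```

In the paper's adaptive scheme, parameters analogous to `notch_width` and `strength` would be tuned per volume by the quality-evaluation function rather than fixed by hand.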
Funding: supported by the National Natural Science Foundation of China (grant nos. T2225020 and 92254306 to W.J.; grant no. 32027901 to T.X.; grant nos. 92354307, 91954201, and 31971289 to G.Y.; grant nos. 32322050 and 32170704 to L.G.); the National Key Research and Development Program of China (grant nos. 2022YFC3400600 and 2021YFA1301500 to W.J.); the National Science and Technology Innovation 2030 Major Program (grant no. 2022ZD0211900 to L.G.); the Chinese Academy of Sciences Project for Young Scientists in Basic Research (grant no. YSBR-104 to W.J.); and the Strategic Priority Research Program of the Chinese Academy of Sciences (grant no. XDB37040104 to W.J. and grant no. XDB37040402 to G.Y.).
Abstract: In deep learning super-resolution microscopy, concerns exist about the generation of artifacts, and methods for artifact suppression are lacking. We developed a self-adaptive fine-tuning method that dynamically adjusts the parameters of the models to minimize the loss function, which includes direct quantification of artifacts from live-cell imaging. Integrating self-adaptive fine-tuning with super-resolution models enables significant artifact reduction in the visualization of nanoscale organelle interactions at high spatial-temporal resolution.
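The core idea above, minimizing a loss that combines data fidelity with a direct artifact measure, can be sketched as follows. This is an illustrative toy, not the paper's implementation: `artifact_score` here is a simple oscillation-energy proxy (the paper quantifies artifacts from live-cell imaging), and the optimizer is plain finite-difference gradient descent standing in for backpropagation through a real network.

```python
import numpy as np

def artifact_score(img):
    """Hypothetical artifact proxy: energy of pixel-to-pixel oscillations."""
    return np.mean(np.diff(img, axis=0) ** 2) + np.mean(np.diff(img, axis=1) ** 2)

def fine_tune(params, forward, measured, lam=0.1, lr=0.05, steps=200, eps=1e-4):
    """Self-adaptive fine-tuning loop (sketch).

    Adjusts `params` of the model `forward` to minimize
    data fidelity + lam * artifact penalty, using central
    finite-difference gradients for simplicity.
    """
    params = np.asarray(params, dtype=float).copy()

    def loss(p):
        out = forward(p)
        return np.mean((out - measured) ** 2) + lam * artifact_score(out)

    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(params.size):
            p_hi = params.copy(); p_hi[i] += eps
            p_lo = params.copy(); p_lo[i] -= eps
            grad[i] = (loss(p_hi) - loss(p_lo)) / (2 * eps)
        params -= lr * grad
    return params
```

The weight `lam` controls the fidelity/artifact trade-off; in the self-adaptive setting it is the model parameters, not `lam`, that are updated per acquisition so that artifact suppression tracks the live-cell data being imaged.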