Funding: Supported by the Sichuan Science and Technology Program (No. 2023NSFSC0496) and the National Natural Science Foundation of China (No. 62075143).
Abstract: Event cameras, with their significantly higher dynamic range and sensitivity to intensity variations compared to frame cameras, open new possibilities for 3D reconstruction in high-dynamic-range (HDR) scenes. However, the binary event data stream they produce poses significant challenges for high-precision, efficient 3D reconstruction. To address these issues, we observe that the binary projection inherent to Gray-code-based 3D reconstruction naturally aligns with the event camera's imaging mechanism. Even so, high-accuracy 3D reconstruction with Gray codes remains hindered by two key factors: inaccurate boundary extraction and the degradation of high-order dense Gray-code patterns due to spatial blurring. For the first challenge, we propose an inverted Gray-code strategy that improves region segmentation and recognition, yielding more precise and more easily identifiable Gray-code boundaries. For the second challenge, we introduce a spatial-shifting Gray-code encoding method: by spatially shifting Gray-code patterns of lower encoding density, a combined encoding is obtained that enhances depth resolution and measurement accuracy. Experimental validation across general and HDR scenes demonstrates the effectiveness of the proposed methods.
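To make the encoding concrete, the sketch below generates binary-reflected Gray-code stripe patterns and their inverted (complemented) counterparts in Python. It is a minimal illustration that assumes the inverted patterns are simply the binary complements of the normal ones; the function name, resolution, and bit depth are illustrative, not taken from the paper.

```python
import numpy as np

def gray_code_patterns(width: int, n_bits: int) -> np.ndarray:
    """Binary-reflected Gray-code stripe patterns, one row per bit plane."""
    cols = np.arange(width)
    idx = (cols * (1 << n_bits)) // width   # map pixel columns to code indices
    codes = idx ^ (idx >> 1)                # binary index -> Gray code
    bits = np.arange(n_bits - 1, -1, -1)    # most significant bit first
    return ((codes[None, :] >> bits[:, None]) & 1).astype(np.uint8)

patterns = gray_code_patterns(width=1024, n_bits=6)
inverted = 1 - patterns  # complementary patterns for boundary localization
```

Projecting each pattern together with its complement lets a decoder place stripe boundaries where the two responses cross, rather than thresholding a single noisy response; shifted copies of the low-density patterns can then be combined to densify the code.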
Funding: Supported by the National Natural Science Foundation of China (No. 62031018).
Abstract: Non-line-of-sight (NLOS) imaging is an emerging technique for detecting objects behind obstacles or around corners. Recent studies on passive NLOS imaging mainly focus on steady-state measurement and reconstruction methods, which are limited in recognizing moving targets. To the best of our knowledge, we propose the first event-based passive NLOS imaging method. We acquire asynchronous event-based data of the diffusion spot on the relay surface, which contains detailed dynamic information about the NLOS target and effectively mitigates the degradation caused by target movement. In addition, we explain the event-based cues through the derivation of an event-NLOS forward model. Furthermore, we present the first event-based NLOS imaging dataset, EM-NLOS, and extract movement features using a time-surface representation. Comparing reconstructions from event-based data with those from frame-based data, the event-based method performs well on peak signal-to-noise ratio and learned perceptual image patch similarity, outperforming the frame-based method by 20% and 10%, respectively.
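The time-surface representation mentioned above is a standard event-camera feature: each pixel stores an exponentially decayed function of its most recent event timestamp. A minimal Python sketch follows; the event tuple layout and the decay constant `tau` are assumptions, not values from the paper.

```python
import numpy as np

def time_surface(events, height, width, t_ref, tau=0.05):
    """Exponentially decayed time surface evaluated at reference time t_ref.

    events: iterable of (t, x, y, polarity), time-sorted with t <= t_ref.
    """
    last_t = np.full((height, width), -np.inf)
    for t, x, y, p in events:
        last_t[y, x] = t                      # keep the latest event per pixel
    return np.exp((last_t - t_ref) / tau)     # exp(-inf) = 0 for empty pixels
```

Recently active pixels map to values near 1 while stale pixels decay toward 0, which is what makes this representation well suited to extracting movement features.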
Abstract: Traditional cameras inevitably suffer from motion blur when imaging high-speed moving objects. Event cameras, as high-temporal-resolution bionic cameras, record intensity changes asynchronously, and this high-temporal-resolution information can effectively compensate for the temporal information lost to motion blur. Nevertheless, existing event-based deblurring methods still struggle with high-speed moving objects. Through an in-depth study of the event camera's imaging principle, we found that the event stream contains excessive noise, that valid information is sparse, and that invalid event features hinder the expression of valid ones owing to the uncertainty of the global threshold. To address this problem, this paper designs a denoising-based long- and short-term memory module (DTM). The DTM suppresses invalid features in the original event stream through a denoising process, alleviating the sparsity of valid information, and combines this with a long short-term memory (LSTM) module that further enhances event features on the temporal scale. In addition, a closer study of event features reveals that, under spatial-domain feature processing, the high-frequency information recorded by events does not effectively guide deblurring of the fused features. We therefore introduce a residual fast Fourier transform module (RES-FFT), which extracts fused features from the frequency-domain perspective to further enhance their high-frequency components. Ultimately, our proposed event-image fusion network based on event denoising and frequency-domain feature enhancement (DNEFNET) achieves peak signal-to-noise ratio (PSNR) / structural similarity index measure (SSIM) scores of 35.55/0.972 on the GoPro dataset and 38.27/0.975 on the REBlur dataset, reaching state-of-the-art (SOTA) performance.
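A residual frequency-domain block is commonly built by transforming features with a 2D FFT, mixing the real and imaginary parts with pointwise convolutions, and transforming back. The PyTorch sketch below shows this generic construction; the abstract does not specify the RES-FFT internals, so the layer choices here are assumptions.

```python
import torch
import torch.nn as nn

class ResFFTBlock(nn.Module):
    """Generic residual FFT block: filter features in the frequency domain."""

    def __init__(self, channels: int):
        super().__init__()
        self.freq_conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
        )

    def forward(self, x):
        _, _, h, w = x.shape
        f = torch.fft.rfft2(x, norm="ortho")      # B x C x H x (W//2 + 1)
        f = torch.cat([f.real, f.imag], dim=1)    # stack re/im as channels
        f = self.freq_conv(f)                     # pointwise frequency mixing
        re, im = torch.chunk(f, 2, dim=1)
        y = torch.fft.irfft2(torch.complex(re, im), s=(h, w), norm="ortho")
        return x + y                              # residual connection
```

Because every frequency bin depends on the whole image, even a 1x1 convolution in this domain has a global receptive field, which is one reason such blocks help recover high-frequency detail.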
Funding: National Natural Science Foundation of China (62371006, 62401477, U24B20140) and the Natural Science Foundation of Beijing Municipality (3242008).
Abstract: Event cameras detect intensity changes rather than absolute intensity, recording variations as a stream of "events." Intensity reconstruction from these sparse events remains a significant challenge. Previous approaches focused on transforming motion-induced events into videos or on achieving intensity imaging of static scenes through modulation devices at the acquisition end. In this paper, we present inter-event interval microscopy (IEIM), a paradigm-shifting technique that enables static and dynamic fluorescence imaging through photon-flux-to-temporal encoding by integrating a pulsed-light modulation device into a microscope equipped with an event camera. We also develop the inter-event interval (IEI) reconstruction algorithm for IEIM, which quantifies the time interval between consecutive events at each pixel. With a fixed threshold in the event camera, this time interval directly encodes intensity. The integration of pulse modulation enables IEIM to achieve static and dynamic fluorescence imaging with a fixed event camera. We evaluate the state-of-the-art performance of IEIM on simulated and real-world data in both static and dynamic scenes. We also demonstrate that IEIM achieves high-dynamic-range, high-speed imaging at 800 Hz in mimetic dynamic mouse brain tissue. Furthermore, we show that IEIM can image the movements of freshwater euglenae in vivo at 500 Hz.
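The stated relation between a fixed event threshold and intensity admits a simple model: if each event fires once the integrated photon flux crosses a threshold theta, then the flux is approximately theta divided by the inter-event interval. The Python sketch below implements that simplified model; the threshold value, event layout, and handling of pulse modulation are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def iei_intensity(events, height, width, theta=1.0):
    """Estimate per-pixel intensity as theta / (inter-event interval).

    events: iterable of (t, x, y) with t in seconds, time-sorted.
    """
    prev_t = np.full((height, width), np.nan)
    intensity = np.zeros((height, width))
    for t, x, y in events:
        dt = t - prev_t[y, x]
        if dt > 0:                 # NaN comparison skips each pixel's first event
            intensity[y, x] = theta / dt
        prev_t[y, x] = t
    return intensity
```

Brighter pixels cross the threshold sooner, so shorter intervals map to higher intensity estimates, independently of scene motion.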
Abstract: Dear Editor, This letter proposes a novel dynamic-vision-enabled intelligent micro-vibration estimation method with spatiotemporal pattern consistency. Inspired by biological vision, dynamic vision data are collected by an event camera, which can capture the micro-vibration information of mechanical equipment owing to its significant advantage of an extremely high temporal sampling frequency.
Funding: Supported by the Beijing Natural Science Foundation (Grant No. L231004), the Young Elite Scientists Sponsorship Program by CAST (Grant No. 2022QNRC001), the Fundamental Research Funds for the Central Universities (Grant No. 2025JBMC039), the National Key Research and Development Program (Grant No. 2022YFC2805200), and the National Natural Science Foundation of China (Grant No. 52371338).
Abstract: Bio-inspired visual systems have garnered significant attention in robotics owing to their energy efficiency, rapid dynamic response, and environmental adaptability. Among these, event cameras (bio-inspired sensors that asynchronously report pixel-level brightness changes called "events") stand out for their ability to capture dynamic changes with minimal energy consumption, making them suitable for challenging conditions such as low light or high-speed motion. However, current mapping and localization methods for event cameras depend primarily on point and line features, which struggle in sparse or low-feature environments and are unsuitable for static or slow-motion scenarios. We address these challenges by proposing a bio-inspired vision mapping and localization method that uses active LED markers (ALMs) combined with reprojection-error optimization and asynchronous Kalman fusion. Our approach replaces traditional features with ALMs, enabling accurate tracking under dynamic and low-feature conditions. Minimizing the reprojection error significantly improves global mapping accuracy, reducing corner errors from 16.8 cm to 3.1 cm after 400 iterations. Asynchronous Kalman fusion of multiple camera pose estimates from ALMs ensures precise localization with high temporal efficiency. The method achieves a mean translation error of 0.078 m and a rotational error of 5.411° in dynamic-motion evaluations. In addition, it supports an output rate of 4.5 kHz while maintaining high localization accuracy in UAV spiral-flight experiments. These results demonstrate the potential of the proposed approach for real-time robot localization in challenging environments.
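The reprojection-error objective minimized for global mapping can be written compactly for a pinhole model. The Python sketch below evaluates it for a set of 3D marker positions and their detected pixel locations; it is a minimal illustration, and the variable layout (world-to-camera extrinsics, undistorted pixels) is an assumption rather than the paper's exact formulation.

```python
import numpy as np

def reprojection_error(K, R, t, points_3d, observations):
    """Mean pixel reprojection error under a pinhole camera model.

    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation;
    points_3d: Nx3 marker positions; observations: Nx2 detected pixels.
    """
    cam = points_3d @ R.T + t            # world frame -> camera frame
    proj = cam @ K.T                     # pinhole projection
    px = proj[:, :2] / proj[:, 2:3]      # perspective divide
    return np.linalg.norm(px - observations, axis=1).mean()
```

Iteratively adjusting the estimated marker positions and camera poses to shrink this error is what drives the reported corner-error reduction; the resulting per-camera pose estimates are then fused asynchronously by a Kalman filter for high-rate localization.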