Traditional cameras inevitably suffer from motion blur when capturing high-speed moving objects. Event cameras, as high-temporal-resolution bionic cameras, record intensity changes asynchronously, and the high-temporal-resolution information they record can effectively compensate for the temporal information lost to motion blur. Existing event-based deblurring methods, however, still face challenges with high-speed moving objects. Through an in-depth study of the imaging principle of event cameras, we found that the event stream contains excessive noise, that valid information is sparse, and that, owing to the uncertainty of the global threshold, invalid event features hinder the expression of valid ones. To address this problem, this paper designs a denoising-based long short-term memory module (DTM). The DTM suppresses invalid features in the original event stream through a noise-reduction process, alleviating the sparsity of valid information, and combines this with a long short-term memory (LSTM) module that further enhances event features on the temporal scale. In addition, a closer look at the unique characteristics of event features reveals that, under spatial-domain feature processing, the high-frequency information recorded by events does not effectively guide the deblurring of the fused features. We therefore introduce a residual fast Fourier transform module (RES-FFT) that extracts fused features from a frequency-domain perspective, further enhancing their high-frequency components. Ultimately, our proposed event-image fusion network based on event denoising and frequency-domain feature enhancement (DNEFNET) achieves Peak Signal-to-Noise Ratio (PSNR) / Structural Similarity Index Measure (SSIM) scores of 35.55/0.972 on the GoPro dataset and 38.27/0.975 on the REBlur dataset, reaching state-of-the-art (SOTA) performance.
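The core idea behind RES-FFT — enhance high-frequency content in the frequency domain and add it back through a residual connection — can be sketched as follows. This is an illustrative simplification, not the paper's actual module: the function name `res_fft_enhance`, the hard radial cutoff, and the fixed gain are assumptions standing in for the learned frequency-domain convolutions a real RES-FFT block would use.

```python
import numpy as np

def res_fft_enhance(feat, hp_gain=1.5, cutoff=0.25):
    """Boost high-frequency components of a 2-D feature map via a
    residual frequency-domain branch (hand-crafted stand-in for a
    learned residual FFT block)."""
    h, w = feat.shape
    spec = np.fft.fftshift(np.fft.fft2(feat))      # centre the zero frequency
    yy, xx = np.mgrid[0:h, 0:w]
    # radial distance from the spectrum centre, normalised by the Nyquist radius
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    # keep only frequencies beyond the cutoff, transform back to the spatial domain
    hp = np.fft.ifft2(np.fft.ifftshift(spec * (r > cutoff))).real
    return feat + hp_gain * hp                     # residual connection
```

A constant (blur-free, edge-free) map passes through unchanged, since all its energy sits at the zero frequency; maps containing edges receive an extra high-frequency residual.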
Non-line-of-sight (NLOS) imaging is an emerging technique for detecting objects behind obstacles or around corners. Recent studies on passive NLOS mainly focus on steady-state measurement and reconstruction methods, which show limitations in recognizing moving targets. To the best of our knowledge, we propose the first event-based passive NLOS imaging method. We acquire asynchronous event data of the diffusion spot on the relay surface, which contains detailed dynamic information about the NLOS target and efficiently eases the degradation caused by target movement. In addition, we demonstrate the event-based cues through the derivation of an event-NLOS forward model. Furthermore, we propose the first event-based NLOS imaging dataset, EM-NLOS, and extract movement features with a time-surface representation. Comparing reconstructions from event-based data with those from frame-based data, the event-based method performs well on peak signal-to-noise ratio and learned perceptual image patch similarity, improving on the frame-based method by 20% and 10% respectively.
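A time-surface representation like the one used above for movement-feature extraction can be sketched in a few lines. This is a generic textbook-style construction, not the paper's exact encoding: the event tuple layout `(x, y, t, polarity)`, the decay constant `tau`, and the function name are illustrative assumptions.

```python
import numpy as np

def time_surface(events, height, width, t_ref, tau=0.05):
    """Exponentially decayed time-surface of an event stream.

    Each pixel stores exp(-(t_ref - t_last) / tau), where t_last is the
    timestamp of the most recent event at that pixel before t_ref.
    Pixels that never fired stay at 0, so the surface highlights the
    freshest motion."""
    last_t = np.full((height, width), -np.inf)
    for x, y, t, _p in events:
        if t <= t_ref:
            last_t[y, x] = max(last_t[y, x], t)
    surface = np.zeros((height, width))
    fired = np.isfinite(last_t)
    surface[fired] = np.exp(-(t_ref - last_t[fired]) / tau)
    return surface
```

Recent events map close to 1, stale events decay toward 0, giving a dense motion-sensitive image that a downstream network can consume.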
To address the event-stream noise problem and the inter-class imbalance problem in event-based person re-identification (event-based person ReID), this paper proposes a spatiotemporal collaborative filtering method, SCF-Net. The method comprises two parts: a spatiotemporal collaborative filter and a local-agent sparsity learning module. The spatiotemporal collaborative filter exploits the spatiotemporal correlation among real events to distinguish real events from noise events and filters the latter out, eliminating the influence of noise in the event stream. The local-agent sparsity learning module accounts for the differences among pedestrian features: by mapping pedestrian instance features into a local agent domain and forcing the agents apart from one another, it obtains clear class boundaries in the feature space. Experiments on the Event-ReID dataset show that, compared with current state-of-the-art event-based person ReID methods, SCF-Net achieves a substantial performance gain, improving mAP by 6.9% and Rank-1 by 4.4%.
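The filtering principle — real events are spatiotemporally correlated while shot noise is isolated — can be illustrated with a minimal neighbour-support filter. This is a classic baseline sketch, not SCF-Net's actual filter; the window `dt`, `radius`, `min_support`, and the O(n²) scan are all illustrative choices.

```python
def spatiotemporal_filter(events, dt=0.01, radius=1, min_support=1):
    """Keep an event only if at least `min_support` other events fired
    within `radius` pixels and `dt` seconds of it.  Correlated (real)
    events survive; isolated noise events are dropped."""
    events = sorted(events, key=lambda e: e[2])      # sort by timestamp
    kept = []
    for i, (x, y, t, p) in enumerate(events):
        support = sum(
            1
            for j, (x2, y2, t2, _p2) in enumerate(events)
            if j != i
            and abs(t2 - t) <= dt
            and abs(x2 - x) <= radius
            and abs(y2 - y) <= radius
        )
        if support >= min_support:
            kept.append((x, y, t, p))
    return kept
```

A production filter would use a per-pixel timestamp map for O(n) cost, but the acceptance rule is the same.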
The event camera is a novel bio-inspired vision sensor with high dynamic range, low latency, and freedom from motion blur. This paper proposes an event-image data fusion algorithm named EI-Fusion (Event and Image Fusion), which makes the event camera and the conventional frame camera complementary and effectively improves image quality under complex lighting conditions. In addition, we design a 3-DoF pose estimation system based on optical-flow tracking and feed the fusion results into it to further evaluate the algorithm's performance in pose estimation. Experiments show that EI-Fusion reduces the mean APE (Absolute Pose Error) by 69% compared with the raw images, substantially improving the pose estimation framework's performance in dim scenes.
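The complementarity exploited above can be demonstrated with a deliberately simple fusion sketch: accumulate event polarities into a change map and blend it into an under-exposed frame so that edges lost to dim lighting reappear. This is not the EI-Fusion algorithm — the function name, the additive blend, and the weight `alpha` are illustrative assumptions.

```python
import numpy as np

def fuse_events_with_frame(frame, events, alpha=0.5):
    """Blend a grey frame (values in [0, 1]) with a normalised event
    polarity-count map.  Events mark recent intensity changes, so the
    count map highlights edges the frame may have lost in dim light."""
    h, w = frame.shape
    count = np.zeros((h, w))
    for x, y, _t, p in events:
        count[y, x] += 1.0 if p > 0 else -1.0
    m = np.abs(count).max()
    if m > 0:
        count /= m                         # normalise to [-1, 1]
    return np.clip(frame + alpha * count, 0.0, 1.0)
```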
Objective: The performance of traditional visual place recognition (VPR) algorithms depends on the quality of optical images, so the image degradation caused by high-speed and high-dynamic-range scenes further degrades VPR performance. To address this problem, we propose a VPR algorithm that fuses an event camera, exploiting the event camera's low latency and high dynamic range to improve recognition in extreme scenes such as high speed and high dynamic range. Method: The proposed method first extracts features from good-quality reference images with an image feature extraction module, then extracts multimodal fused features of the query image and the event data within its exposure interval with a multimodal feature fusion module, and finally retrieves the reference image most similar to the query via feature matching. Results: Experiments on the MVSEC (multi-vehicle stereo event camera dataset) and RobotCar datasets show that the proposed method clearly outperforms existing VPR algorithms in high-speed and high-dynamic-range scenes: on MVSEC it improves recall and precision over the best competing result by 5.39% and 8.55% respectively, and on RobotCar by 3.36% and 4.41%. Conclusion: The proposed event-camera-fused VPR algorithm leverages the imaging advantages of event cameras in high-speed and high-dynamic-range scenes and effectively improves place recognition performance in such scenes.
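The final feature-matching step described in the method can be sketched as nearest-neighbour retrieval over descriptor vectors. The cosine-similarity metric and the function name are assumptions for illustration; the paper's matching criterion may differ.

```python
import numpy as np

def match_place(query_desc, ref_descs):
    """Return (index, similarity) of the reference descriptor most
    similar to the query under cosine similarity."""
    q = query_desc / np.linalg.norm(query_desc)
    refs = ref_descs / np.linalg.norm(ref_descs, axis=1, keepdims=True)
    sims = refs @ q                      # cosine similarity to every reference
    best = int(np.argmax(sims))
    return best, float(sims[best])
```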
Funding (for the DNEFNET paper above): supported by the National Natural Science Foundation of China (No. 62031018).