In the dynamic scenes encountered by autonomous vehicles, monocular depth estimation often suffers from inaccurate depth at object edges. To solve this problem, we propose an unsupervised monocular depth estimation model based on edge enhancement, aimed specifically at the depth perception challenge in dynamic scenes. The model consists of two core networks: a depth prediction network and a motion estimation network, both of which adopt an encoder-decoder architecture. The depth prediction network is a U-Net built on ResNet18 and is responsible for generating the depth map of the scene. The motion estimation network is a U-Net built on FlowNet, focusing on motion estimation for dynamic targets. In the decoding stage of the motion estimation network, we introduce an edge-enhanced decoder that integrates a convolutional block attention module (CBAM) to strengthen the recognition of edge features of moving objects. In addition, we design a strip convolution module to improve the model's ability to capture discrete moving targets. To further improve performance, we propose a novel edge regularization method based on the Laplace operator, which effectively accelerates the convergence of the model. Experimental results on the KITTI and Cityscapes datasets show that, compared with current state-of-the-art dynamic unsupervised monocular models, the proposed model significantly improves depth estimation accuracy and convergence speed. Specifically, the root mean square error (RMSE) is reduced by 4.8% compared with the DepthMotion algorithm, while training convergence speed is increased by 36%, demonstrating the superior performance of the model on the depth estimation task in dynamic scenes.
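The abstract does not give the exact form of the Laplace-operator edge regularizer. One plausible reading, sketched below with hypothetical function names and an arbitrary weighting (this is an illustration, not the authors' implementation), penalizes second-order variation of the predicted depth while relaxing the penalty where the input image itself has strong second-order structure, i.e. likely genuine object edges:

```python
import numpy as np

def laplacian(img):
    """5-point discrete Laplacian with replicated borders."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img

def edge_regularization_loss(depth, image, alpha=1.0):
    """Penalize depth curvature, downweighted where the image has strong
    second-order structure (likely genuine object edges)."""
    lap_d = np.abs(laplacian(depth))
    lap_i = np.abs(laplacian(image))
    return float(np.mean(lap_d * np.exp(-alpha * lap_i)))

# A constant depth map has zero Laplacian everywhere, hence zero penalty.
rng = np.random.default_rng(0)
assert edge_regularization_loss(np.full((8, 8), 2.0), rng.random((8, 8))) == 0.0
```

In a training loop this term would be added to the usual photometric reconstruction loss; here it is shown on plain arrays only to make the shape of the computation concrete.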
Speedometer identification has been researched for many years. Common approaches are usually based on image subtraction, which does not adapt to image offsets caused by camera vibration. To meet the requirements for speed, robustness, and accuracy in dynamic scenes, a fast speedometer identification algorithm is proposed; it uses a phase correlation method based on whole-region template translation to estimate the offset between images. To effectively reduce unnecessary computation and the false detection rate, an improved linear Hough transform with two optimization strategies is presented for pointer line detection. Experiments on the VC++ 6.0 software platform with the OpenCV library show that the algorithm is both fast and accurate.
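The phase-correlation step can be illustrated independently of the paper's region-based template scheme. A minimal whole-image version, assuming integer shifts and periodic wrap-around (OpenCV's `phaseCorrelate` adds windowing and sub-pixel refinement on top of the same idea), is:

```python
import numpy as np

def phase_correlation(a, b):
    """Integer (dy, dx) such that b is approximately a cyclically shifted
    by (dy, dx): the peak of the inverse FFT of the normalized
    cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold peaks past the midpoint back to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = np.roll(a, (3, -5), axis=(0, 1))
assert phase_correlation(a, b) == (3, -5)
```

Because only the spectrum's phase is kept, the peak is sharp even when the two frames differ in overall brightness, which is what makes the method robust to camera vibration compared with plain image subtraction.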
In dynamic scenarios, visual simultaneous localization and mapping (SLAM) algorithms often incorrectly incorporate dynamic points during camera pose computation, leading to reduced accuracy and robustness. This paper presents a dynamic SLAM algorithm that leverages object detection and regional dynamic probability. Firstly, a parallel thread employs the YOLOX object detection model to gather 2D semantic information and compensate for missed detections. Next, an improved K-means++ clustering algorithm clusters bounding box regions, adaptively determining the threshold for extracting dynamic object contours as the dynamic points change. This process divides the image into low-dynamic, suspicious-dynamic, and high-dynamic regions. In the tracking thread, the dynamic point removal module assigns dynamic probability weights to the feature points in these regions and, combined with geometric methods, detects and removes the dynamic points. The final evaluation on the public TUM RGB-D dataset shows that the proposed dynamic SLAM algorithm surpasses most existing SLAM algorithms, providing better pose estimation accuracy and robustness in dynamic environments.
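The abstract does not detail the improved clustering, but the standard K-means++ seeding it builds on can be sketched as follows (function name and interface are illustrative, not the authors'); each new center is drawn with probability proportional to the squared distance from the nearest already-chosen center, which spreads seeds across the bounding-box regions:

```python
import numpy as np

def kmeans_pp_init(points, k, rng):
    """K-means++ seeding: each subsequent center is sampled with
    probability proportional to the squared distance to its nearest
    previously chosen center."""
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([((points - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(0)
# Two well-separated synthetic point clouds standing in for box regions.
pts = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
centers = kmeans_pp_init(pts, 2, rng)
assert centers.shape == (2, 2)
```

The subsequent Lloyd iterations and the paper's adaptive thresholding would run on top of this seeding.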
Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily due to the elimination of ground-truth depth dependence. However, the prevailing architectures in this domain suffer from inherent limitations: existing pose network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios due to dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
The Internet of Vehicles (IoV) has become an important direction in the field of intelligent transportation, in which vehicle positioning is a crucial part. Simultaneous Localization and Mapping (SLAM) technology plays a crucial role in vehicle localization and navigation. Traditional SLAM systems are designed for static environments, and their accuracy and robustness degrade in dynamic environments where objects are in constant movement. To address this issue, a new real-time visual SLAM system called MG-SLAM has been developed. Built on ORB-SLAM2, MG-SLAM incorporates a dynamic target detection process that detects both known and unknown moving objects. In this process, a separate semantic segmentation thread segments dynamic target instances, and the Mask R-CNN algorithm is run on the Graphics Processing Unit (GPU) to accelerate segmentation. To reduce computational cost, only keyframes are segmented to identify known dynamic objects. Additionally, a multi-view geometry method is adopted to detect unknown moving objects. The results demonstrate that MG-SLAM achieves higher precision, reducing the error from 0.2730 m to 0.0135 m. Moreover, the processing time required by MG-SLAM is significantly lower than that of other dynamic-scene SLAM algorithms, which illustrates its efficacy in locating objects in dynamic scenes.
This paper explores the key techniques and challenges in dynamic scene reconstruction with neural radiance fields (NeRF). As an emerging computer vision method, the NeRF has wide application potential and excels in particular at 3D reconstruction. We first introduce the basic principles and working mechanisms of NeRFs, followed by an in-depth discussion of the technical challenges faced by 3D reconstruction in dynamic scenes, including perspective and illumination changes of moving objects, recognition and modeling of dynamic objects, real-time requirements, data acquisition and calibration, motion estimation, and evaluation mechanisms. We also summarize current state-of-the-art approaches to these challenges, as well as future research trends. The goal is to provide researchers with an in-depth understanding of the application of NeRFs to dynamic scene reconstruction, along with insights into the key issues faced and future directions.
Deblurring images of dynamic scenes is a challenging task because blurring arises from a combination of many factors. In recent years, the use of multi-scale pyramid methods to recover high-resolution sharp images has been extensively studied. We address the poor detail recovery of the cascade structure with a network that progressively integrates data streams. Our new multi-scale structure and edge-feature perception design handles changes in blurring at different spatial scales and enhances the network's sensitivity to blurred edges. The coarse-to-fine architecture restores the image structure by first performing global adjustments and then local refinement. In this way, not only is global correlation considered, but residual information is also used to significantly improve image restoration and enhance texture details. Experimental results show quantitative and qualitative improvements over existing methods.
Moving object detection in dynamic scenes is a basic task in a surveillance system for sensor data collection. In this paper, we present a powerful background subtraction algorithm called the Gaussian-kernel density estimator (G-KDE) that improves accuracy and reduces the computational load. The main innovation is that we divide background changes into continuous and stable changes, to handle dynamic scenes and moving objects that first merge into the background, and model the background separately using both a KDE model and Gaussian models. To obtain a temporal-spatial background model, sample selection at the update stage is based on the concept of the region average. In the detection stage, neighborhood information content (NIC) is employed, which suppresses false detections due to small, unmodeled movements in the scene. Experimental results on three separate sequences indicate that this method is well suited for the precise detection of moving objects in complex scenes and can be used efficiently in various detection systems.
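As a toy illustration of the kernel-density side of G-KDE (the bandwidth and threshold values here are arbitrary, and the paper's Gaussian-model branch, region-average update, and NIC step are omitted), each pixel keeps a buffer of background samples and is flagged as foreground when its likelihood under the sample-based density is low:

```python
import numpy as np

def kde_foreground(samples, frame, bandwidth=0.05, thresh=0.1):
    """Per-pixel Gaussian KDE over N background samples; a pixel whose
    likelihood under the background model falls below `thresh` is
    flagged as foreground.  samples: (N, H, W); frame: (H, W); values
    assumed in [0, 1]."""
    diff = frame[None, :, :] - samples
    density = np.exp(-0.5 * (diff / bandwidth) ** 2).mean(axis=0)
    return density < thresh

background = np.full((10, 4, 4), 0.5)   # ten stable background samples
frame = np.full((4, 4), 0.5)
frame[1, 1] = 0.95                      # one pixel changes intensity
mask = kde_foreground(background, frame)
assert mask[1, 1] and mask.sum() == 1
```

In a real system the sample buffer would be updated over time, which is exactly where the paper's continuous/stable split and region-average selection come in.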
Reconstructing dynamic scenes with commodity depth cameras has many applications in computer graphics, computer vision, and robotics. However, due to the presence of noise and erroneous observations from data-capturing devices and the inherently ill-posed nature of non-rigid registration with insufficient information, traditional approaches often produce low-quality geometry with holes, bumps, and misalignments. We propose a novel 3D dynamic reconstruction system, named HDR-Net-Fusion, which learns to simultaneously reconstruct and refine the geometry on the fly with a sparse embedded deformation graph of surfels, using a hierarchical deep reinforcement (HDR) network. The latter comprises two parts: a global HDR-Net which rapidly detects local regions with large geometric errors, and a local HDR-Net serving as a local patch refinement operator to promptly complete and enhance such regions. Training the global HDR-Net is formulated as a novel reinforcement learning problem to implicitly learn the region selection strategy with the goal of improving the overall reconstruction quality. The applicability and efficiency of our approach are demonstrated using a large-scale dynamic reconstruction dataset. Our method can reconstruct geometry with higher quality than traditional methods.
Due to its limited dynamic range, a camera cannot reveal all the details in a high-dynamic-range scene. To solve this problem, this paper presents a multi-exposure fusion method for obtaining high-quality images of high-dynamic-range scenes. First, a set of multi-exposure images of the same scene is captured and their brightness conditions are analyzed. Then, the multi-exposure images are decomposed using the dual-tree complex wavelet transform (DT-CWT) to obtain their low- and high-frequency components. Weight maps derived from the brightness conditions are assigned to the low-frequency components for fusion, while maximizing the regional Sum-Modified-Laplacian (SML) is adopted for fusing the high-frequency components. Finally, the fused image is obtained by applying the inverse DT-CWT to the fused low- and high-frequency coefficients. Experimental results show that the proposed approach generates high-quality results with uniformly distributed brightness and rich details, and that it is efficient and robust across various scenes.
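The SML focus measure behind the high-frequency selection can be sketched as a per-pixel windowed sharpness score; whichever exposure scores higher at a pixel contributes its coefficient there. Window size and border handling below are illustrative choices:

```python
import numpy as np

def sum_modified_laplacian(img, win=1):
    """Modified Laplacian |2f - f_left - f_right| + |2f - f_up - f_down|,
    summed over a (2*win+1)^2 window, as a per-pixel sharpness score."""
    p = np.pad(img, 1, mode="edge")
    ml = (np.abs(2 * img - p[1:-1, :-2] - p[1:-1, 2:])
          + np.abs(2 * img - p[:-2, 1:-1] - p[2:, 1:-1]))
    q = np.pad(ml, win, mode="edge")
    out = np.zeros_like(img)
    for dy in range(2 * win + 1):          # accumulate the window sum
        for dx in range(2 * win + 1):
            out += q[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# A perfectly flat region has no second-order variation, hence zero SML.
flat = np.ones((5, 5))
assert sum_modified_laplacian(flat).max() == 0.0
```

Fusion would then be `np.where(sml_a >= sml_b, coeff_a, coeff_b)` per subband, with `coeff_a`/`coeff_b` the DT-CWT high-frequency coefficients of the two exposures (names hypothetical).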
For the issue of low positioning accuracy in dynamic environments with traditional simultaneous localisation and mapping (SLAM), a dynamic point removal strategy combining object detection and optical flow tracking has been proposed. To fully utilise the semantic information, an ellipsoid model of the detected semantic objects was first constructed based on plane and point cloud constraints, which assists loop closure detection. Bilateral semantic map matching was achieved through maximum-weight assignment with the Kuhn-Munkres (KM) algorithm, and the pose transformation between local and global maps was determined by the random sample consensus (RANSAC) algorithm. Finally, a stable semantic SLAM system suitable for dynamic environments was constructed. Experiments verified the effectiveness of the system's positioning accuracy under dynamic interference and large visual-inertial loop closure.
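The pose transformation that RANSAC estimates between the local and global maps is, on each sampled correspondence set, a rigid least-squares fit. A minimal Kabsch solver for that inner step (the RANSAC sampling loop itself is omitted, and the interface is illustrative) might look like:

```python
import numpy as np

def kabsch(src, dst):
    """Rigid transform (R, t) minimizing ||dst - (src @ R.T + t)|| in the
    least-squares sense, via SVD of the cross-covariance matrix."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(2)
src = rng.random((20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = kabsch(src, dst)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

Inside RANSAC, `kabsch` would be run on minimal 3-point samples and the transform with the largest inlier set kept.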
Simultaneous localisation and mapping (SLAM) is the basis for many robotic applications. As the front end of SLAM, visual odometry is mainly used to estimate camera pose. In dynamic scenes, classical methods deteriorate under the influence of dynamic objects and cannot achieve satisfactory results. To improve the robustness of visual odometry in dynamic scenes, this paper proposes a dynamic region detection method based on RGB-D images. Firstly, all feature points on the RGB image are classified as dynamic or static using, successively, a triangle constraint and the epipolar geometric constraint. Meanwhile, the depth image is clustered using the K-means method. The classified feature points are mapped to the clustered depth image, and a dynamic or static label is assigned to each cluster according to its number of dynamic feature points. Subsequently, a dynamic region mask for the RGB image is generated from the dynamic clusters in the depth image, and all feature points covered by the mask are removed. The remaining static feature points are used to estimate the camera pose. Finally, experimental results are provided to demonstrate the feasibility and performance of the method.
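The epipolar geometric constraint used above can be checked with the symmetric point-to-epipolar-line distance: a static match should lie on its epipolar line, while a dynamic point will not. The sketch assumes a known fundamental matrix F; the F below is the one induced by a pure sideways translation with identity intrinsics, chosen only so the example is easy to verify by hand:

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Symmetric point-to-epipolar-line distance for matched points under
    fundamental matrix F; large residuals indicate likely dynamic points."""
    h = lambda p: np.hstack([p, np.ones((len(p), 1))])
    x1, x2 = h(np.asarray(pts1, float)), h(np.asarray(pts2, float))
    l2 = x1 @ F.T                          # epipolar lines in image 2
    l1 = x2 @ F                            # epipolar lines in image 1
    num = np.abs(np.sum(x2 * l2, axis=1))  # |x2^T F x1|
    return num / np.hypot(l2[:, 0], l2[:, 1]) + num / np.hypot(l1[:, 0], l1[:, 1])

# Pure x-translation: correspondences on the same scanline satisfy the
# constraint; off-scanline motion violates it.
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
res = epipolar_residuals(F, [[10, 5], [10, 5]], [[14, 5], [14, 9]])
assert res[0] == 0.0 and res[1] > 1.0
```

In practice F would come from the tracked static features themselves, and a threshold on the residual would separate the two classes.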
Background modeling and subtraction is a fundamental problem in video analysis. Many algorithms have been developed to date, but challenges remain in complex environments, especially dynamic scenes in which the backgrounds themselves are moving, such as rippling water and swaying trees. In this paper, a novel background modeling method is proposed for dynamic scenes by combining tensor representation and swarm intelligence. We maintain several video patches, naturally represented as higher-order tensors, to represent background patterns, and utilize tensor low-rank approximation to capture the dynamic nature. Furthermore, we introduce an ant colony algorithm to improve performance. Experimental results show that the proposed method is robust and adaptive in dynamic environments, and moving objects can be perfectly separated from the complex dynamic background.
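The tensor low-rank approximation in the paper operates on higher-order patch tensors; the core idea is easiest to see in the matrix case, where truncated SVD gives the best rank-r approximation (Eckart-Young), and mode-wise unfolding extends the same step to tensors:

```python
import numpy as np

def low_rank_approx(M, r):
    """Best rank-r approximation of matrix M (Eckart-Young, via SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# A rank-1 "static background" patch is reproduced exactly at r = 1, so
# the residual M - low_rank_approx(M, 1) isolates non-background motion.
M = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 7.0))
assert np.allclose(low_rank_approx(M, 1), M)
```

For a video patch stacked as (frames x pixels), the low-rank part captures the repetitive dynamic background (ripples, swaying branches) and foreground objects remain in the residual.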
Visual SLAM methods usually presuppose that the scene is static, so SLAM algorithms for mobile robots in dynamic scenes often suffer a significant decrease in accuracy due to the influence of dynamic objects. In this paper, feature points are divided into dynamic and static using semantic information and multi-view geometry information; static-region feature points are then added to the pose optimization, and static scene maps are established for dynamic scenes. Finally, experiments are conducted in dynamic scenes using the KITTI dataset, and the results show that the proposed algorithm achieves higher accuracy in highly dynamic scenes compared to the visual SLAM baseline.
Funding: Funded by the Yangtze River Delta Science and Technology Innovation Community Joint Research Project (2023CSJGG1600); the Natural Science Foundation of Anhui Province (2208085MF173); and the Wuhu "ChiZhu Light" Major Science and Technology Project (2023ZD01, 2023ZD03).
Funding: Supported by the National Natural Science Foundation of China (61004139); the Beijing Municipal Natural Science Foundation (4101001); and the 2008 Yangtze Fund Scholar and Innovative Research Team Development Schemes of the Ministry of Education.
Funding: Supported by the National Natural Science Foundation of China (No. 62063006); the Guangxi Natural Science Foundation (Nos. 2023GXNSFAA026025, AA24010001); the Innovation Fund of Chinese Universities Industry-University-Research (ID: 2023RY018); the Special Project of the Guangxi Industry and Information Technology Department, Textile and Pharmaceutical Division (ID: 2021 No. 231); the Special Research Project of Hechi University (ID: 2021GCC028); and the Key Laboratory of AI and Information Processing, Education Department of Guangxi Zhuang Autonomous Region (Hechi University), No. 2024GXZDSY009.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62071345.
Funding: Funded by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 22KJD440001) and the Changzhou Science & Technology Program (Grant No. CJ20220232).
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds under Grant No. 2023ZTE03-04.
Funding: National Natural Science Foundation of China (61772319, 62002200, 61976125, 61976124); Shandong Natural Science Foundation of China (ZR2017MF049).
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61902210 and 61521002).
Funding: Supported by the National Natural Science Foundation of China (Nos. 61308099, 61304032).
Abstract: Because of its limited dynamic range, a camera cannot reveal all the details of a high-dynamic-range scene. To solve this problem, this paper presents a multi-exposure fusion method for obtaining high-quality images of high-dynamic-range scenes. First, a set of multi-exposure images of the same scene is captured and their brightness conditions are analyzed. The images are then decomposed with the dual-tree complex wavelet transform (DT-CWT) into low- and high-frequency components. Low-frequency components are fused with weight maps assigned according to the brightness conditions, while high-frequency components are fused by maximizing the regional sum-modified-Laplacian (SML). Finally, the fused image is obtained by applying the inverse DT-CWT to the fused low- and high-frequency coefficients. Experimental results show that the proposed approach generates high-quality results with uniformly distributed brightness and rich detail, and that it is efficient and robust across a variety of scenes.
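The high-frequency selection rule above can be sketched with the per-pixel modified Laplacian. This is a simplified stand-in, assuming a pointwise (rather than windowed/regional) SML and gradient-domain inputs; function names and the edge-padding choice are illustrative assumptions, and the DT-CWT decomposition itself is not shown.

```python
import numpy as np

def modified_laplacian(img):
    """Pointwise modified Laplacian:
    |2I(x,y) - I(x-1,y) - I(x+1,y)| + |2I(x,y) - I(x,y-1) - I(x,y+1)|."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    ml_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def fuse_highpass(h1, h2, img1, img2):
    # Keep, at each position, the high-frequency coefficient from the
    # source image whose (pointwise) modified Laplacian is larger.
    mask = modified_laplacian(img1) >= modified_laplacian(img2)
    return np.where(mask, h1, h2)
```

The sharper source wins wherever its local focus measure dominates, which is the intuition behind SML-based high-frequency fusion.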
基金supported in part by the Natural Science Foundation of Shandong Province(No.ZR2024MF036)the National Key Research and Development Plan of China(No.2020AAA0109000)the National Natural Science Foundation of China(Nos.61973184,61803227,61603214,and 61573213).
Abstract: To address the low positioning accuracy of traditional simultaneous localisation and mapping (SLAM) in dynamic environments, a dynamic point removal strategy combining object detection and optical flow tracking is proposed. To fully exploit semantic information, an ellipsoid model of each detected semantic object is first constructed from plane and point-cloud constraints, which assists loop closure detection. Bilateral semantic map matching is achieved through maximum-weight assignment with the Kuhn-Munkres (KM) algorithm, and the pose transformation between local and global maps is determined by the random sample consensus (RANSAC) algorithm. Finally, a stable semantic SLAM system suitable for dynamic environments is constructed. Experiments verify that the system maintains its positioning accuracy under dynamic interference and over large visual-inertial loop closures.
基金supported in part by the National Natural Science Foundation of China(Grant No.U1913201,U22B2041)Natural Science Foundation of Liaoning Province(Grant No.2019-ZD-0169).
Abstract: Simultaneous localisation and mapping (SLAM) is the basis for many robotic applications. As the front end of SLAM, visual odometry is mainly used to estimate the camera pose. In dynamic scenes, classical methods are degraded by dynamic objects and cannot achieve satisfactory results. To improve the robustness of visual odometry in dynamic scenes, this paper proposes a dynamic region detection method based on RGB-D images. First, all feature points in the RGB image are classified as dynamic or static using a triangle constraint followed by the epipolar geometric constraint. Meanwhile, the depth image is clustered with the K-Means method. The classified feature points are mapped onto the clustered depth image, and each cluster is labelled dynamic or static according to its number of dynamic feature points. A dynamic region mask for the RGB image is then generated from the dynamic clusters in the depth image, and all feature points covered by the mask are removed. The remaining static feature points are used to estimate the camera pose. Finally, experimental results demonstrate the feasibility and performance of the method.
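The epipolar geometric constraint used for the dynamic/static split can be sketched as a point-to-epipolar-line distance test. This is a generic illustration, not the paper's exact pipeline: the fundamental matrix is assumed given (in practice it is estimated from matches), and the function names and threshold are assumptions.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance from the matched point p2 (image 2) to the epipolar line
    of p1 (image 1) under the 3x3 fundamental matrix F."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    line = F @ x1                      # epipolar line a*x + b*y + c = 0
    return abs(x2 @ line) / np.hypot(line[0], line[1])

def classify_static(F, matches, thresh=1.0):
    # A match that stays close to its epipolar line is consistent with the
    # camera motion and labelled static; a large residual suggests an
    # independently moving (dynamic) point.
    return [epipolar_distance(F, p1, p2) < thresh for p1, p2 in matches]
```

For a camera translating purely along x, epipolar lines are horizontal, so a match that only shifts horizontally is static while one that also moves vertically violates the constraint.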
基金supported by National Natural Science Foundation of China (Grant Nos. 11301137 and 11371036)the National Science Foundation of Hebei Province of China (Grant No. A2014205100
Abstract: Background modeling and subtraction is a fundamental problem in video analysis. Many algorithms have been developed to date, but challenges remain in complex environments, especially dynamic scenes in which the backgrounds are themselves moving, such as rippling water and swaying trees. In this paper, a novel background modeling method for dynamic scenes is proposed that combines tensor representation and swarm intelligence. We maintain several video patches, naturally represented as higher-order tensors, to represent background patterns, and use tensor low-rank approximation to capture their dynamic nature. Furthermore, we introduce an ant colony algorithm to improve performance. Experimental results show that the proposed method is robust and adaptive in dynamic environments, and moving objects can be cleanly separated from the complex dynamic background.
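The low-rank idea can be illustrated on a stack of video patches. As a simplification, the sketch unfolds the patch stack into a matrix and truncates its SVD, whereas the paper operates on higher-order tensors directly; the function name and rank choice are assumptions, and the ant colony refinement is not shown.

```python
import numpy as np

def lowrank_background(patches, rank=1):
    """Rank-r background estimate for a stack of video patches.

    patches : (T, H, W) array, the same patch observed over T frames.
    Unfolds frames into rows, truncates the SVD, and folds back.
    """
    t, h, w = patches.shape
    mat = patches.reshape(t, h * w)            # mode-1 unfolding
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return approx.reshape(t, h, w)
```

A static background repeated over frames is exactly rank-1, so it is reproduced perfectly, while moving-object residuals would concentrate in the difference `patches - lowrank_background(patches)`.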
基金the National Natural Science Foundation of China(U21A20487)Shenzhen Technology Project(JCYJ20180507182610734)and CAS Key Technology Talent Program.
Abstract: Visual SLAM methods usually presuppose that the scene is static, so SLAM algorithms for mobile robots in dynamic scenes often suffer a significant decrease in accuracy owing to the influence of dynamic objects. In this paper, feature points are divided into dynamic and static using semantic information and multi-view geometry information; the static-region feature points are then added to the pose optimization, and static scene maps are built for dynamic scenes. Finally, experiments conducted in dynamic scenes on the KITTI dataset show that the proposed algorithm achieves higher accuracy in highly dynamic scenes than the visual SLAM baseline.