Funding: the National Natural Science Foundation of China (No. 52072243) and the Sichuan Science and Technology Program (No. 2020YFSY0058).
Abstract: The semantic segmentation of a bird's-eye view (BEV) is crucial for environment perception in autonomous driving; the BEV covers both the static elements of the scene, such as drivable areas, and dynamic elements, such as cars. This paper proposes an end-to-end deep learning architecture based on 3D convolution that predicts both BEV semantic segmentation and voxel semantic segmentation from monocular images. Voxelization of the scene and feature transformation from perspective space to camera space are the key mechanisms this model uses to boost prediction accuracy. The effectiveness of the proposed method was demonstrated by training and evaluating the model on the nuScenes dataset. A comparison with other state-of-the-art methods showed that the proposed approach outperforms them in BEV semantic segmentation. It also performs voxel semantic segmentation, which the state-of-the-art methods cannot achieve.
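To make the voxel-to-BEV idea above concrete, the following is a minimal sketch, not the paper's code: image features assumed to be already lifted into a 3D voxel grid are refined with 3D convolutions, a per-voxel classifier produces voxel semantic segmentation, and collapsing the height axis into channels produces the BEV semantic map. All module names, tensor shapes, and class counts here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VoxelToBEVHead(nn.Module):
    """Hypothetical head: refine voxel features in 3D, then predict
    voxel-level semantics and a BEV map by flattening the height axis."""
    def __init__(self, in_ch=64, n_classes=14, depth=8):
        super().__init__()
        # 3D convolutions operate on the voxel grid (C, D, H, W),
        # where D is the vertical (height) axis of the scene.
        self.voxel_net = nn.Sequential(
            nn.Conv3d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Per-voxel classifier: voxel semantic segmentation.
        self.voxel_head = nn.Conv3d(in_ch, n_classes, 1)
        # After folding the height axis into channels, a 1x1 2D conv
        # predicts per-cell BEV semantics.
        self.bev_head = nn.Conv2d(in_ch * depth, n_classes, 1)

    def forward(self, voxels):                        # voxels: (B, C, D, H, W)
        v = self.voxel_net(voxels)
        voxel_logits = self.voxel_head(v)             # (B, K, D, H, W)
        b, c, d, h, w = v.shape
        bev_logits = self.bev_head(v.reshape(b, c * d, h, w))  # (B, K, H, W)
        return bev_logits, voxel_logits

# Toy usage with an assumed 8-voxel-high, 200 x 200 grid of 64-dim features.
feats = torch.randn(1, 64, 8, 200, 200)
bev, vox = VoxelToBEVHead()(feats)
```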
Funding: Supported by the Baima Lake Laboratory Joint Funds of the Zhejiang Provincial Natural Science Foundation of China under Grant LBMHD25F030001, and in part by the National Natural Science Foundation of China under Grant 62088101 for the 'Autonomous Intelligent Unmanned Systems' project. The authors certify that there are no competing financial interests or personal relationships influencing this work; financial support originated exclusively from these public research funds.
Abstract: Bird's Eye View (BEV) perception has become a widely adopted approach in 3D object detection due to its spatial and dimensional consistency. However, the increasing complexity of neural network architectures has raised training memory requirements, limiting the scalability of model training. To address these challenges, we propose a novel model, RevFB-BEV, based on a Reversible Swin Transformer (RevSwin) with Forward-Backward View Transformation (FBVT) and LiDAR-Guided Back Projection (LGBP). The RevSwin backbone employs a reversible architecture that minimises training memory by recomputing intermediate activations rather than storing them. The FBVT module refines the BEV features extracted by forward projection, yielding denser and more precise camera BEV representations. The LGBP module further uses LiDAR BEV guidance for back projection to obtain more accurate camera BEV features. Extensive experiments on the nuScenes dataset demonstrate notable efficiency gains: our model achieves over a 4x reduction in training memory and a more than 12x decrease in single-backbone training memory, and these gains become even more pronounced with deeper network architectures. RevFB-BEV also achieves 68.1 mAP (mean Average Precision) on the validation set and 68.9 mAP on the test set, nearly on par with the BEVFusion baseline, underscoring its effectiveness in resource-constrained scenarios.
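The memory saving comes from the defining property of reversible blocks: each block's inputs can be recomputed exactly from its outputs, so intermediate activations need not be cached for the backward pass. The sketch below illustrates that property in isolation, with simple LayerNorm/Linear sub-blocks standing in for RevSwin's attention and MLP; it is an assumed minimal example, not the RevFB-BEV implementation.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Two-stream reversible block: y1 = x1 + F(x2), y2 = x2 + G(y1).
    F and G are placeholders for attention/MLP sub-blocks."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU())
        self.g = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU())

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    @torch.no_grad()
    def inverse(self, y1, y2):
        # Exact inversion: activations can be freed after the forward pass
        # and recomputed on demand during backpropagation.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

blk = ReversibleBlock(96)
a, b = torch.randn(2, 96), torch.randn(2, 96)
y1, y2 = blk(a, b)
r1, r2 = blk.inverse(y1, y2)
# The recovered inputs match the originals, which is what lets a reversible
# backbone trade a little recomputation for a large cut in training memory.
assert torch.allclose(r1, a, atol=1e-5) and torch.allclose(r2, b, atol=1e-5)
```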