Funding: financially supported by the National 863 Program (2013AA064202) and the Marine Subject Interdisciplinary and Guidance Fund of Zhejiang University (188040+193414Y01).
Abstract: To obtain high-resolution images of the subsurface structure, we modeled multidepth slanted airgun sources to attenuate the source ghost. By firing the guns in sequence according to their relative depths, such a source can build constructive primaries and destructive ghosts. To evaluate the attenuation of ghosts, we compute the normalized squared error between the spectra of the actual and expected signatures. We used a typical 680 cu. in. airgun string and found via simulations that a depth interval of 1 or 1.5 m between airguns is optimal when considering both deghosting performance and operational feasibility. When more subarrays are combined, preliminary simulations are necessary to determine the optimal depth combination, because the frequency notches introduced by an excessive number of subarrays may degrade deghosting performance. Two or three slanted subarrays can be combined to remove the ghost effect. The firing-sequence combination may partly affect deghosting, but this effect can be eliminated by matched filtering. A directivity comparison shows that a multidepth slanted source can significantly attenuate the notches and widen the stable energy-transmission area.
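The deghosting metric above is only described in words; the following is a minimal numpy sketch of one plausible form — the energy of the amplitude-spectrum mismatch normalized by the energy of the expected spectrum. The exact definition used in the paper may differ, and the signature here is synthetic:

```python
import numpy as np

def normalized_squared_error(actual, expected):
    """Normalized squared error between the amplitude spectra of the
    actual and expected source signatures (assumed metric form)."""
    # Zero-pad both signatures to a common length before the FFT.
    n = max(len(actual), len(expected))
    a = np.abs(np.fft.rfft(actual, n))
    e = np.abs(np.fft.rfft(expected, n))
    # Energy of the spectral mismatch, normalized by the expected energy.
    return np.sum((a - e) ** 2) / np.sum(e ** 2)

# Toy check with a synthetic damped-sine signature: an identical pair
# gives zero error, while a ghost-contaminated copy gives a positive one.
t = np.arange(0, 0.256, 0.001)
sig = np.sin(2 * np.pi * 40 * t) * np.exp(-20 * t)
ghosted = sig - 0.9 * np.roll(sig, 10)   # crude delayed, polarity-flipped ghost
err_clean = normalized_squared_error(sig, sig)
err_ghost = normalized_squared_error(ghosted, sig)
```

A lower value of this metric for the slanted multidepth source than for a conventional flat source would indicate better ghost attenuation.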
Abstract: BEV (bird's-eye-view) multi-sensor fusion perception algorithms for autonomous driving have made major progress in recent years and continue to advance the field. Within multi-sensor fusion perception research, the transformation of multi-view images into the BEV view and multi-modal feature fusion have long been the focus and the difficulty of BEV perception algorithms. We propose MSEPE-CRN (multi-scale feature fusion and edge and point enhancement-camera radar net), a camera and millimeter-wave radar fusion perception algorithm for 3D object detection. It uses edge features and point clouds to improve the accuracy of depth prediction, enabling an accurate transformation of multi-view images into BEV features. In addition, a multi-scale deformable large-kernel attention mechanism is introduced for modality fusion, resolving the misalignment caused by large discrepancies between the features of different sensors. Experiments on the open nuScenes dataset show that, compared with the baseline network, mAP improves by 2.17%, NDS by 1.93%, mATE by 2.58%, mAOE by 8.08%, and mAVE by 2.13%. The algorithm effectively improves a vehicle's ability to perceive moving obstacles on the road and has practical value.
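The image-to-BEV transformation that this abstract centers on can be illustrated at toy scale. The sketch below is a lift-splat-style scatter — not the MSEPE-CRN code, whose function names and camera model are hypothetical here — in which each image pixel's feature is weighted by its predicted depth distribution and accumulated into a flat BEV grid:

```python
import numpy as np

def lift_to_bev(feat, depth_prob, depth_bins, bev_shape, cell=1.0):
    """Toy lift-splat view transform (illustrative only): weight each
    pixel's feature by its predicted depth distribution and scatter the
    result into a BEV grid along the camera's forward axis."""
    H, W, C = feat.shape
    bev = np.zeros(bev_shape + (C,))
    for d, z in enumerate(depth_bins):
        row = int(z / cell)                  # BEV row for this depth bin
        if row >= bev_shape[0]:
            continue                         # bin falls outside the grid
        for u in range(W):                   # one BEV column per image column
            col = min(u, bev_shape[1] - 1)
            # Feature contribution weighted by this depth bin's probability.
            bev[row, col] += (feat[:, u, :] * depth_prob[:, u, d:d + 1]).sum(axis=0)
    return bev

# Toy example: a 4x4 image with 2 feature channels and 3 depth bins.
rng = np.random.default_rng(0)
feat = rng.random((4, 4, 2))
prob = np.full((4, 4, 3), 1 / 3)             # uniform depth distribution
bev = lift_to_bev(feat, prob, depth_bins=[1.0, 2.0, 3.0], bev_shape=(4, 4))
```

Since each pixel's depth probabilities sum to one and every bin lands inside the grid here, the total feature mass is preserved by the scatter — the sharper the depth distribution (which the paper sharpens with edge features and radar points), the more precisely each feature lands in its BEV cell.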
Abstract: 3D shape reconstruction from multi-focus images aims to recover the 3D structure of a scene from an image sequence captured at different focus levels. Most existing reconstruction methods evaluate the focus level of the image sequence at a single scale and guide the reconstruction with regularization or post-processing; the limited selection space for depth information often prevents the reconstruction from converging effectively. To address this, we propose MSCAS (multi-scale cost aggregation framework for 3D shape reconstruction from multi-focus images). The framework first applies a non-downsampling multi-scale transform to enlarge the depth-information selection space of the input image sequence, and then performs cost aggregation that combines intra-scale sequence correlation with inter-scale information constraints. This expand-then-aggregate scheme multiplies the available depth representations and effectively fuses information across scales and sequences. As a general framework, MSCAS can embed both existing model-design methods and deep-learning methods to improve their performance. Experimental results show that after embedding model-design SFF methods, the root-mean-square error (RMSE) over four datasets decreases by 14.91 percentage points on average and the structural similarity index measure (SSIM) increases by 56.69 percentage points; after embedding deep-learning SFF methods, RMSE decreases by 1.55 percentage points and SSIM increases by 1.61 percentage points. These results confirm the effectiveness and generality of the MSCAS framework.
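The single-scale baseline that MSCAS improves on can be sketched in a few lines: compute a per-pixel focus measure for every frame of the stack, then pick the frame of maximal focus per pixel as the depth index. This is a generic shape-from-focus sketch using the sum-modified-Laplacian measure, not the paper's implementation; MSCAS would build such cost volumes at several scales and aggregate them:

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure (a common choice in SFF work)."""
    dxx = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    dyy = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return dxx + dyy

def depth_from_focus(stack):
    """Single-scale baseline: per pixel, pick the best-focused frame."""
    cost = np.stack([modified_laplacian(f) for f in stack])  # (N, H, W) focus volume
    return np.argmax(cost, axis=0)                           # frame index = depth map

# Toy stack: frame 1 holds a sharp step edge; frames 0 and 2 are blurred copies.
sharp = np.zeros((8, 8))
sharp[:, 4:] = 1.0
blurred = (sharp + np.roll(sharp, 1, axis=1) + np.roll(sharp, -1, axis=1)) / 3
depth = depth_from_focus([blurred, sharp, blurred])
```

At the edge, the sharp frame wins and the depth map reads index 1; in textureless regions every frame scores near zero, which is exactly the ambiguity that motivates multi-scale aggregation in MSCAS.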