Funding: the National Natural Science Foundation of China (No. 61702323).
Abstract: Moving object segmentation (MOS) is one of the essential functions of the vision system of all robots, including medical robots. Deep learning-based MOS methods, especially deep end-to-end MOS methods, are actively investigated in this field. Foreground segmentation networks (FgSegNets) are representative deep end-to-end MOS methods proposed recently. This study explores a new mechanism to improve the spatial feature learning capability of FgSegNets with relatively few additional parameters. Specifically, we propose an enhanced attention (EA) module, a parallel connection of an attention module and a lightweight enhancement module, with sequential attention and residual attention as special cases. We also propose integrating EA with FgSegNet_v2 by taking the lightweight convolutional block attention module as the attention module and plugging the EA module in after the two Maxpooling layers of the encoder. The derived new model is named FgSegNet_v2 EA. An ablation study verifies the effectiveness of the proposed EA module and integration strategy. Results on the CDnet2014 dataset, which depicts human activities and vehicles captured in different scenes, show that FgSegNet_v2 EA outperforms FgSegNet_v2 by 0.08% and 14.5% under the scene-dependent and scene-independent evaluation settings, respectively, indicating the positive effect of EA on the spatial feature learning capability of FgSegNet_v2.
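The parallel-connection idea behind the EA module can be illustrated with a small NumPy sketch. The paper's exact formulation is not reproduced here; this assumes EA(x) = A(x) * x + E(x), where A is an attention branch and E a lightweight enhancement branch, so that E = 0 recovers plain (sequential) attention and E = identity recovers residual attention. The toy channel attention below is a stand-in, not the CBAM variant used in the paper.

```python
import numpy as np

def channel_attention(x):
    # Toy channel attention: sigmoid of the per-channel global average.
    # x has shape (C, H, W); the result broadcasts over H and W.
    pooled = x.mean(axis=(1, 2), keepdims=True)   # (C, 1, 1)
    return 1.0 / (1.0 + np.exp(-pooled))          # weights in (0, 1)

def ea_module(x, enhance):
    # Parallel connection: attention-weighted input plus an enhancement branch.
    return channel_attention(x) * x + enhance(x)

x = np.random.rand(4, 8, 8)

# Special case 1: enhancement branch = 0  ->  sequential attention, A(x) * x.
seq = ea_module(x, lambda t: np.zeros_like(t))

# Special case 2: enhancement branch = identity  ->  residual attention, A(x) * x + x.
res = ea_module(x, lambda t: t)
```

Under this reading, the two named special cases differ only in the enhancement branch, which is what makes the module a strict generalization of both.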
Abstract: To address the low resolution and blurred temperature distribution of current infrared images of power equipment, a super-resolution reconstruction method based on a local and global information attention generative adversarial network (LGIA-GAN) is proposed. First, a gated weight unit fuses the outputs of multiple convolutions to build a detail-enhancing fusion convolution, increasing the proportion of important information in the output feature maps. Second, a dual-attention module is built to model long-range pixel dependencies and capture information along the spatial and channel dimensions. Then, a generative adversarial network is constructed so that the network attends to both the local texture details and the global contour information of power-equipment infrared images. Finally, experiments show that LGIA-GAN achieves a peak signal-to-noise ratio of 30.266 dB and a structural similarity of 0.9197 on the dataset, with a reconstruction time of 0.120 s, clearly outperforming several other GAN algorithms and producing subjectively better reconstructions. The proposed method effectively improves the resolution of thermal imaging of power equipment and supports power-equipment fault diagnosis.
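The gated weight unit described above can be sketched as a learned convex combination of convolution-branch outputs. The branch outputs, the softmax gating, and the variable names below are illustrative assumptions, not the paper's implementation; the point is only that informative branches receive larger weight in the fused feature map.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over gate logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_fusion(branches, gate_logits):
    # branches: list of feature maps with identical shape, one gate per branch.
    # The gates form a convex combination, so the fused map stays in range.
    w = softmax(np.asarray(gate_logits, dtype=float))
    return sum(wi * b for wi, b in zip(w, branches))

b1 = np.ones((2, 4, 4))         # e.g. output of a 3x3-convolution branch
b2 = 3.0 * np.ones((2, 4, 4))   # e.g. output of a 5x5-convolution branch

# Equal gate logits reduce the fusion to a plain average of the branches.
fused = gated_fusion([b1, b2], gate_logits=[0.0, 0.0])
```

In a trained network the gate logits would themselves be produced from the input features, letting the fusion adapt per image.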
Funding: supported by the National Science Foundation under Grant No. 62066039.
Abstract: Recently, speech enhancement methods based on Generative Adversarial Networks have achieved good performance on time-domain noisy signals. However, training Generative Adversarial Networks suffers from problems such as difficult convergence and mode collapse. In this work, an end-to-end speech enhancement model based on Wasserstein Generative Adversarial Networks is proposed, with several improvements made to obtain faster convergence and better generated speech quality. Specifically, in the generator's encoding part, each convolution layer adopts different kernel sizes so that speech coding information is obtained at multiple scales; a gated linear unit is introduced to alleviate the vanishing gradient problem as network depth increases; the discriminator's gradient penalty is replaced with spectral normalization to accelerate the convergence of the model; and a hybrid penalty term composed of L1 regularization and a scale-invariant signal-to-distortion ratio is introduced into the generator's loss function to improve the quality of generated speech. Experimental results on both the TIMIT corpus and a Tibetan corpus show that the proposed model improves speech quality significantly and accelerates the convergence of the model.
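The scale-invariant signal-to-distortion ratio (SI-SDR) term of the hybrid penalty follows a standard formulation: the estimate is projected onto the target to remove scale differences, and the ratio of projected-signal energy to residual energy is reported in dB. The weighting `lam` and the function names below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    # Project the estimate onto the target so a rescaled target scores perfectly.
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    s_target = alpha * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def hybrid_penalty(estimate, target, lam=0.01):
    # Hypothetical combination: L1 term plus negative SI-SDR (maximizing SI-SDR).
    return lam * np.abs(estimate - target).mean() - si_sdr(estimate, target)

t = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
# Scale invariance: a rescaled copy of the target still scores very high.
score = si_sdr(2.0 * t, t)
```

Because SI-SDR ignores overall gain, the L1 term is what keeps the generator's output at the right amplitude.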
Abstract: Images dehazed by the classic AOD-Net (All-in-One Dehazing Network) suffer from insufficient detail clarity, excessive light-dark contrast, and overall dimness. To address these problems, a multi-scale algorithm improved upon AOD-Net is proposed. The improved network replaces conventional convolutions with depthwise separable convolutions, reducing redundant parameters, speeding up computation, and effectively lowering the model's memory footprint, thereby improving dehazing efficiency. A multi-scale structure analyzes and processes hazy images at different scales, better capturing image details, strengthening the network's handling of detail, and resolving the blurred details of the original algorithm. A pyramid pooling module is then added to the network to aggregate contextual information from different image regions, expanding the network's receptive range and improving its ability to capture the global information of hazy images, which in turn mitigates color distortion and detail loss. In addition, a low-light enhancement module is introduced that achieves denoising by explicitly predicting the noise, thereby restoring underexposed images. On low-light hazy images, both the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics improve significantly, and the processed images look more natural overall. Experimental results show that, compared with the classic AOD-Net, the improved algorithm better restores image details and structure, yielding more natural dehazed images with better-balanced saturation and contrast; on the outdoor and indoor scenes of the RESIDE SOTS dataset, the improved algorithm raises PSNR by 4.5593 dB and 4.0656 dB and SSIM by 0.0476 and 0.0874, respectively, over the classic AOD-Net.
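The parameter saving from depthwise separable convolutions can be made concrete with a quick count: a standard k x k convolution with c_in input and c_out output channels needs k*k*c_in*c_out weights, while the separable version needs only k*k*c_in (depthwise) plus c_in*c_out (pointwise). The channel counts below are illustrative, not taken from the paper's architecture.

```python
def standard_conv_params(k, c_in, c_out):
    # One k x k filter per (input channel, output channel) pair.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # One k x k filter per input channel, then a 1x1 pointwise mixing step.
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 32, 64
std = standard_conv_params(k, c_in, c_out)   # 3*3*32*64 = 18432
sep = separable_conv_params(k, c_in, c_out)  # 3*3*32 + 32*64 = 2336
ratio = std / sep                            # roughly 7.9x fewer parameters
```

The saving grows with c_out, which is why the substitution pays off most in the wider later layers of a network.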