Funding: Industry-University-Research Cooperation Fund Project of the Eighth Research Institute of China Aerospace Science and Technology Corporation (No. USCAST2021-5).
Abstract: To overcome the obstacles of poor feature extraction and the limited prior information on the appearance of infrared dim small targets, we propose a multi-domain attention-guided pyramid network (MAGPNet). Specifically, we design three modules to ensure that the salient features of small targets are acquired and retained in the multi-scale feature maps. To improve the adaptability of the network to targets of different sizes, we design a kernel aggregation attention block with a receptive-field attention branch and weight the feature maps under different receptive fields with an attention mechanism. Building on research into the human visual system, we further propose an adaptive local contrast measure module to enhance the local features of infrared small targets. With this parameterized component, we can aggregate the information of multi-scale contrast saliency maps. Finally, to fully exploit the information in the spatial and channel domains of feature maps at different scales, we propose a mixed spatial-channel attention-guided fusion module that achieves high-quality fusion while ensuring that small-target features are preserved in deep layers. Experiments on public datasets demonstrate that MAGPNet outperforms other state-of-the-art methods in terms of intersection over union, precision, recall, and F-measure. In addition, detailed ablation studies verify the effectiveness of each component of the network.
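The abstract does not give the exact formulation of the adaptive local contrast measure. Purely as a rough illustration of the underlying human-visual-system idea, the sketch below computes a classic, non-learned multi-scale local contrast saliency map, in which a small bright target stands out because its cell mean dominates the means of the surrounding cells. The `cell` sizes and the max-based aggregation across scales are illustrative assumptions, not the paper's parameterized module.

```python
import numpy as np

def local_contrast_map(img: np.ndarray, cell: int = 3) -> np.ndarray:
    """Toy single-scale local contrast measure.

    For each cell-sized center patch, the contrast is the squared mean of the
    center divided by the maximum mean of its 8 surrounding cells, so a bright
    small target on a dim background is strongly amplified.
    """
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(cell, h - 2 * cell + 1):
        for x in range(cell, w - 2 * cell + 1):
            center = img[y:y + cell, x:x + cell].mean()
            neighbor_means = [
                img[y + dy:y + dy + cell, x + dx:x + dx + cell].mean()
                for dy in (-cell, 0, cell)
                for dx in (-cell, 0, cell)
                if (dy, dx) != (0, 0)
            ]
            out[y, x] = center ** 2 / max(max(neighbor_means), 1e-6)
    return out

# Synthetic scene: a dim uniform background with a single bright pixel target.
img = np.full((15, 15), 10.0)
img[7, 7] = 100.0

# Multi-scale aggregation (here: pixel-wise max over two cell sizes).
saliency = np.maximum.reduce([local_contrast_map(img, c) for c in (1, 3)])
```

In this toy scene the saliency map peaks at the target location, while the flat background keeps a low, uniform contrast score; a learned module such as the one the paper proposes would replace the fixed ratio with trainable parameters.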
Funding: Supported by the Project of Guangzhou Science and Technology (202102020591, 202007010004, 202007040005).
Abstract: Background Owing to the rapid development of deep networks, single-image deraining has progressed significantly. Various architectures have been designed to recursively or directly remove rain, and most rain streaks can be removed by existing deraining methods. However, many of them cause detail loss, resulting in visual artifacts. Method To resolve this issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single-image deraining, based on the observation that the most degraded areas of a background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single-image deraining task into two subproblems: rain extraction and detail recovery. Result Specifically, a context aggregation attention network is first introduced to effectively extract rain streaks; thereafter, a rain attention map is generated as an indicator to guide the detail recovery process. For the detail recovery sub-network, with the guidance of the rain attention map, a simple encoder-decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach achieves performance comparable to that of other state-of-the-art methods.
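URDRN's two sub-networks are learned, and the abstract does not specify their internals. Purely to illustrate how a rain attention map can gate detail recovery, the numpy sketch below derives an attention map from an estimated rain residual and uses it to blend recovered detail back into the coarse derained image only where rain corruption was strongest. The residual-based attention and the additive blend are illustrative stand-ins for the paper's context aggregation attention network and encoder-decoder, not the method itself.

```python
import numpy as np

def rain_attention(rainy: np.ndarray, background_est: np.ndarray) -> np.ndarray:
    # Attention is high where the estimated rain residual is large, i.e.,
    # at the most rain-corrupted (and hence most degraded) pixels.
    residual = np.abs(rainy - background_est)
    return residual / (residual.max() + 1e-6)

def guided_recovery(coarse: np.ndarray, detail: np.ndarray,
                    attn: np.ndarray) -> np.ndarray:
    # Add recovered detail only where the attention map indicates rain,
    # leaving already-clean regions untouched.
    return coarse + attn * detail

# Synthetic example: a flat background with one vertical rain streak.
clean = np.full((4, 4), 0.5)
rain = np.zeros((4, 4))
rain[:, 1] = 0.4                      # the streak column
rainy = clean + rain

attn = rain_attention(rainy, clean)   # toy: assume a perfect background estimate
coarse = clean.copy()
coarse[:, 1] -= 0.1                   # deraining lost detail under the streak
detail = np.full((4, 4), 0.1)         # detail sub-network output (toy constant)
recovered = guided_recovery(coarse, detail, attn)
```

Because the attention map is near 1 under the streak and 0 elsewhere, the blend restores the lost detail exactly where rain removal degraded the image, matching the abstract's observation that the most degraded areas are the most rain-corrupted ones.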