Funding: Supported by the Key R&D Program of Shaanxi Province (No. 2025CYYBXM-078).
Abstract: Aiming at the poor scale adaptation of autonomous driving object detection algorithms in low-illumination environments and their shortcomings in handling occluded targets, this paper proposes the YOLO-LKSDS autonomous driving detection model. First, the Contrast-Limited Adaptive Histogram Equalisation (CLAHE) image enhancement algorithm is improved to increase image contrast and enhance the detailed features of targets. Then, building on the YOLOv5 model, the K-means++ clustering algorithm is introduced to obtain suitable anchor boxes, and the SPPELAN spatial pyramid pooling module is improved to raise the accuracy and robustness of multi-scale target detection. Finally, an improved SEAM (Separated and Enhancement Attention Module) attention mechanism is combined with the DIoU-NMS algorithm to optimize the model's performance in occluded and dense scenes. Compared with the original model, the improved YOLO-LKSDS model achieves a 13.3% improvement in accuracy, a 1.7% improvement in mAP, and 240,000 fewer parameters on the BDD100K dataset. To validate the generalization of the improved algorithm, we conducted experiments on the KITTI dataset, where accuracy, recall, and mAP50 improve over YOLOv5 by 21.1%, 36.6%, and 29.5%, respectively. Deployment is verified on an edge computing platform, where the average detection speed reaches 24.4 FPS while power consumption remains below 9 W, demonstrating high real-time capability and energy efficiency.
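DIoU-NMS keeps the greedy loop of standard NMS but suppresses on IoU minus a normalized center-distance penalty, so two heavily overlapping boxes whose centers are far apart (typically an occluded pair rather than duplicates) can both survive. A minimal NumPy sketch of the idea; the 0.5 threshold is illustrative, not the paper's setting:

```python
import numpy as np

def diou(box, boxes):
    """DIoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area + areas - inter + 1e-9)
    # Squared center distance, normalized by the squared diagonal of the
    # smallest enclosing box (the DIoU penalty term).
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    cxs, cys = (boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2
    ex1 = np.minimum(box[0], boxes[:, 0]); ey1 = np.minimum(box[1], boxes[:, 1])
    ex2 = np.maximum(box[2], boxes[:, 2]); ey2 = np.maximum(box[3], boxes[:, 3])
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
    dist2 = (cx - cxs) ** 2 + (cy - cys) ** 2
    return iou - dist2 / diag2

def diou_nms(boxes, scores, thresh=0.5):
    """Keep the best-scoring box, suppress others whose DIoU exceeds thresh."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[diou(boxes[i], boxes[rest]) <= thresh]
    return keep
```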
Funding: Funded by the National Natural Science Foundation of China, grant numbers 52374156 and 62476005.
Abstract: Images taken in dim environments frequently exhibit issues such as insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges for dark-image enhancement. Current approaches, while effective at global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between the luminance and color channels introduces additional obstacles to accurate enhancement. In response, we introduce M2ATNet, a single-stage framework built on a multi-scale multi-attention and Transformer architecture. First, to address texture blurring and residual noise, we design a multi-scale multi-attention denoising module (MMAD), applied separately to the luminance and color channels to strengthen structural and texture modeling. Second, to resolve the misalignment of the luminance and color channels, we introduce a multi-channel feature fusion Transformer (CFFT) module, which recovers dark details and corrects color shifts through cross-channel alignment and deep feature interaction. To guide the model toward more stable and efficient learning, we also fuse several loss functions into a hybrid loss term. We extensively evaluate the proposed method on standard datasets, including LOL-v1, LOL-v2, DICM, LIME, and NPE. Evaluations of numerical metrics and visual quality demonstrate that M2ATNet consistently outperforms existing advanced approaches. Ablation studies further confirm the critical contributions of the MMAD and CFFT modules to detail preservation and visual fidelity under challenging illumination-deficient conditions.
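The abstract leaves the hybrid loss unspecified, so the sketch below is only an assumed example of the pattern: a weighted sum of a pixel reconstruction term, a gradient term for structure/texture, and a channel-mean color term, in PyTorch. The weights w_rec, w_grad, and w_color are hypothetical.

```python
import torch
import torch.nn.functional as F

def gradient_loss(pred, target):
    # L1 distance between horizontal/vertical finite differences; pushes
    # the enhanced image to keep the target's edges and textures.
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return F.l1_loss(dx(pred), dx(target)) + F.l1_loss(dy(pred), dy(target))

def hybrid_loss(pred, target, w_rec=1.0, w_grad=0.5, w_color=0.1):
    rec = F.l1_loss(pred, target)                    # pixel reconstruction
    grad = gradient_loss(pred, target)               # structure/texture
    # Color term: match per-channel means so hue/saturation do not drift.
    color = F.l1_loss(pred.mean(dim=(2, 3)), target.mean(dim=(2, 3)))
    return w_rec * rec + w_grad * grad + w_color * color
```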
Abstract: Low-light image enhancement has been one of the most active research areas in computer vision in recent years. During enhancement, loss of image detail and amplification of noise occur inevitably, degrading the quality of the enhanced images. To alleviate this problem, this study proposes a RetinexNet-based low-light image enhancement model grounded in Retinex theory, composed of an image decomposition module and a brightness enhancement module. In the decomposition module, a convolutional block attention module (CBAM) is incorporated to enhance the feature representation capacity of the network, focusing on crucial features and suppressing irrelevant ones. A multi-feature fusion denoising module is designed within the brightness enhancement module, circumventing the feature loss caused by downsampling. The proposed model outperforms existing algorithms in terms of PSNR and SSIM on the publicly available LOL and MIT-Adobe FiveK datasets, and gives superior NIQE results on the publicly available LIME dataset.
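CBAM, as used in the decomposition module, applies channel attention followed by spatial attention. A compact PyTorch sketch of the standard formulation (Woo et al., ECCV 2018); the reduction ratio and 7x7 kernel are the usual defaults, not values confirmed by this paper:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed
    by spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel attention: shared MLP on global average- and max-pooled maps.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: conv over stacked channel-wise avg/max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```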
Funding: This research was sponsored by the Xinjiang Uygur Autonomous Region Tianshan Talent Programme Project (2023TCLJ02) and the Natural Science Foundation of Xinjiang Uygur Autonomous Region (2022D01C349).
Abstract: Infrared and visible light image fusion integrates feature information from two different modalities into a single fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail from the scene, and relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes LLE-Fuse, a lightweight infrared and visible light image fusion method based on low-light enhancement. The method improves on the MobileOne Block, using an Edge-MobileOne Block embedded with the Sobel operator to perform feature extraction and downsampling on the source images; the resulting intermediate features at different scales are then fused by a cross-modal attention fusion module. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is used to enhance both the infrared and visible light images, guiding the network to learn low-light enhancement capabilities through an enhancement loss. Once training is complete, the Edge-MobileOne Block is converted by structural reparameterization into a direct-connection structure similar to MobileNetV1, effectively reducing computational resource consumption. In extensive experimental comparisons, our method achieved improvements of 4.6%, 40.5%, 156.9%, 9.2%, and 98.6% on evaluation metrics including Standard Deviation (SD), Visual Information Fidelity (VIF), Entropy (EN), and Spatial Frequency (SF) over the best results of the compared algorithms, while being only 1.5 ms/it slower than the fastest method.
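CLAHE, used here (and in the YOLO-LKSDS work above) as the enhancement target, equalizes histograms within local tiles while clipping each tile's histogram to bound noise amplification. A short OpenCV sketch; applying it to the LAB luminance channel is one common choice for color inputs, and the clip limit and tile grid are defaults rather than the paper's settings:

```python
import cv2

def clahe_enhance(bgr, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to the luminance channel only, leaving chroma untouched."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# Infrared images are single-channel, so CLAHE can be applied directly:
# ir_enhanced = cv2.createCLAHE(2.0, (8, 8)).apply(ir_gray)
```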
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61971078, 61501070), the Scientific Research Foundation of Chongqing University of Technology (Grant No. 0121230236), and the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJ202301165).
Abstract: Low-light images often suffer from low visibility, low contrast, high noise, and strong color distortion compared with well-exposed images. If the dark regions of an image are enhanced directly, the noise inevitably blurs the whole image. Moreover, according to the retina-and-cortex (Retinex) theory of color vision, the reflectivity of different image regions may differ, limiting the performance of uniform operations applied to the entire image. We therefore design a Hierarchical Flow Learning (HFL) framework, which consists of a Hierarchical Image Network (HIN) and a normalized invertible Flow Learning Network (FLN). HIN extracts hierarchical structural features from low-light images, while FLN maps the distribution of normally exposed images to a Gaussian distribution conditioned on the learned hierarchical features. At test time, the invertibility of FLN allows enhanced low-light images to be inferred. Specifically, HIN extracts as much image information as possible at three scales, local, regional, and global, using a Triple-branch Hierarchical Fusion Module (THFM) and a Dual-Dconv Cross Fusion Module (DCFM). The THFM aggregates regional and global features to improve the overall brightness and quality of low-light images by perceiving and extracting more structural information, whereas the DCFM exploits the properties of the activation function and local features to enhance images at the pixel level, reducing noise and improving contrast. The model is trained with a negative log-likelihood loss. Qualitative and quantitative results demonstrate that HFL handles many types of quality degradation in low-light images better than state-of-the-art solutions, producing better visibility, less noise, and improved contrast, which suits practical scenarios such as autonomous driving, medical imaging, and nighttime surveillance. On the benchmark dataset LOL-v1, HFL reaches PSNR = 27.26 dB, SSIM = 0.93, and LPIPS = 0.10. The source code of HFL is available at https://github.com/Smile-QT/HFL-for-LIE.
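Training an invertible flow with a negative log-likelihood loss amounts to maximizing log p(x) = log p_Z(f(x)) + log|det ∂f/∂x| under a standard Gaussian prior. A minimal PyTorch sketch of that objective; the flow network that produces z and the accumulated log-determinant is assumed:

```python
import math
import torch

def flow_nll(z, log_det_jacobian):
    """Negative log-likelihood for a normalizing flow with a standard
    Gaussian prior: -log p(x) = 0.5*||z||^2 + (D/2)*log(2*pi) - log|det J|,
    where z = f(x) has shape (B, ...) and log_det_jacobian has shape (B,)."""
    d = z[0].numel()  # dimensionality per sample
    log_pz = (-0.5 * (z ** 2).flatten(1).sum(dim=1)
              - 0.5 * d * math.log(2 * math.pi))
    return -(log_pz + log_det_jacobian).mean()
```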
Funding: Supported by the Anhui Province University Key Science and Technology Project (2024AH053415) and the Anhui Province University Major Science and Technology Project (2024AH040229).
Abstract: This research addresses the critical challenge of enhancing satellite images captured under low-light conditions, which suffer from severely degraded quality, including a lack of detail, poor contrast, and low usability. Overcoming this limitation is essential for maximizing the value of satellite imagery in downstream computer vision tasks (e.g., spacecraft on-orbit connection, spacecraft surface repair, space debris capture) that rely on clear visual information. Our key novelty lies in an unsupervised generative adversarial network with two main contributions: (1) an improved U-Net (IU-Net) generator with multi-scale feature fusion in the contracting path for richer semantic feature extraction, and (2) a Global Illumination Attention Module (GIA) at the end of the contracting path to couple local and global information, significantly improving detail recovery and illumination adjustment. The proposed algorithm operates in an unsupervised manner. It is trained and evaluated on our self-constructed, unpaired Spacecraft Dataset for Detection, Enforcement, and Parts Recognition (SDDEP), designed specifically for low-light enhancement tasks. Extensive experiments demonstrate that our method outperforms the baseline EnlightenGAN, achieving improvements of 2.7% in structural similarity (SSIM), 4.7% in peak signal-to-noise ratio (PSNR), 6.3% in learned perceptual image patch similarity (LPIPS), and 53.2% in DeltaE 2000. Qualitatively, the enhanced images exhibit higher overall and local brightness, improved contrast, and more natural visual effects.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62302086), the Natural Science Foundation of Liaoning Province (Grant No. 2023-MSBA-070), and the Fundamental Research Funds for the Central Universities (Grant No. N2317005).
Abstract: Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images. However, the inherent differences in the imaging mechanisms of the two modalities make effective cross-modal fusion challenging. Furthermore, constrained by sensor physics and thermal diffusion effects, infrared images generally suffer from blurred object contours and missing details, making it difficult to extract object features effectively. To address these issues, we propose an infrared-visible image fusion network that realizes multimodal information fusion through a carefully designed multiscale fusion strategy. First, we design an adaptive gray-radiance enhancement (AGRE) module to strengthen the detail representation of infrared images, improving their usability in complex lighting scenarios. Next, we introduce a channel-spatial feature interaction (CSFI) module, which achieves efficient complementarity between the RGB and infrared (IR) modalities via dynamic channel switching and a spatial attention mechanism. Finally, we propose a multi-scale enhanced cross-attention fusion (MSECA) module, which optimizes the fusion of multi-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features at a global scale, thereby enhancing the expressiveness of the fused features. Experiments on the KAIST, M3FD, and FLIR datasets demonstrate that our method delivers outstanding performance in both daytime and nighttime scenarios. On the KAIST dataset, the miss rate drops to 5.99%, and further to 4.26% in night scenes. On the FLIR and M3FD datasets, it achieves AP50 scores of 79.4% and 88.9%, respectively.
Abstract: Recently, many techniques fusing deep learning with Retinex theory have been applied to low-light image enhancement, yielding remarkable outcomes. Yet owing to the intricate nature of real imaging scenarios, including fluctuating noise levels and unpredictable environmental factors, these techniques do not fully resolve the problem. We introduce a strategy that builds on Retinex theory and integrates a novel deep network architecture merging the Convolutional Block Attention Module (CBAM) with the Transformer, enabling the model to capture more salient features across both the channel and spatial domains. We conducted extensive experiments on several datasets, namely LOLv1, LOLv2-real, and LOLv2-sync. The results show that our approach surpasses other methods on critical metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). Moreover, visual comparison of images enhanced by various techniques, together with perceptual metrics such as LPIPS, demonstrates that our approach also excels visually over other methods.
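PSNR, the headline metric across these abstracts, is 10·log10(MAX²/MSE) in decibels. A small reference implementation, with SSIM delegated to scikit-image's standard routine:

```python
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) -
                   enhanced.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

# SSIM involves local means/variances/covariances; scikit-image provides
# a standard implementation:
# from skimage.metrics import structural_similarity
# score = structural_similarity(ref_gray, enh_gray, data_range=255)
```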
Funding: Supported by the Key Program of the Natural Science Foundation of Sichuan Province (2022NSFSC0013) and the National Key Research and Development Program of China (2022YFD1901603, 2023YFD2301902).
Abstract: This study aimed to identify the physiological mechanisms that enable a low-N-tolerant maize cultivar to maintain higher photosynthesis and yield under low-N, low-light, and combined stress. In a three-year field trial of low-N-tolerant and low-N-sensitive maize cultivars under two N fertilization levels (normal N: 240 kg N ha⁻¹; low N: 150 kg N ha⁻¹) and two light conditions (normal light; low light: 35% light reduction), the tolerant cultivar showed a higher net photosynthetic rate than the sensitive one. Random Forest analysis and Structural Equation Modeling identified PSI donor-side limitation (elevated Y(ND)) as the key photosynthetic constraint. The tolerant cultivar maintained higher D1 and PsaA protein levels and preferentially allocated photosynthetic N to electron transport. This strategy reduced Y(ND), sustained photosystem stability, improved carboxylation efficiency, and thereby resulted in higher photosynthesis.
Funding: With support from the Guangxi Natural Science Foundation (Grant No. 2024GXNSFAA010484) and the National Natural Science Foundation of China (No. 62466013), this work has been made possible.
Abstract: Under low-illumination conditions, image quality deteriorates significantly, typically with a peak signal-to-noise ratio (PSNR) below 10 dB, which severely limits the usability of the images. Supervised methods, which train on paired low-light/normal-light images, can raise the PSNR to around 20 dB, significantly improving image quality, but such paired data is challenging to obtain. In recent years, unsupervised low-light image enhancement (LIE) methods based on the Retinex framework have been proposed, but they generally lag behind supervised methods by 5-10 dB. In this paper, we introduce the Denoising-Distilled Retinex (DDR) method, an unsupervised approach that integrates denoising priors into a Retinex-based training framework. By explicitly incorporating denoising, DDR effectively addresses noise and artifacts in low-light images, thereby strengthening the Retinex framework. The model achieves a PSNR of 19.82 dB on the LOL dataset, comparable to the performance of supervised methods. Furthermore, by applying knowledge distillation, DDR is optimized for real-time processing of low-light images, reaching 199.7 fps without incurring additional computational cost. While DDR demonstrates strong image quality and processing speed, there is still room for improvement in robustness across different color spaces and under highly resource-constrained conditions; future research will focus on the model's generalizability and adaptability. Rigorous testing on public datasets further substantiates DDR's state-of-the-art performance in both image quality and processing speed.
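The abstract does not state what the distilled student matches, so the sketch below assumes the common output-level variant for enhancement networks: a compact student regresses the frozen teacher's enhanced image, optionally blended with a supervised term when a reference exists. The blending weight alpha is hypothetical.

```python
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, target=None, alpha=0.7):
    """Output-level distillation for an enhancement network: the student
    mimics the frozen teacher's output; an optional supervised term is
    blended in when a reference image is available."""
    kd = F.l1_loss(student_out, teacher_out.detach())
    if target is None:
        return kd
    return alpha * kd + (1 - alpha) * F.l1_loss(student_out, target)
```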
Funding: Supported by the Key Laboratory of Forensic Science and Technology at College of Sichuan Province (2023YB04).
Abstract: Enhancing low-light images with color distortion and uneven multi-light-source distribution is challenging. Most advanced low-light enhancement methods are Retinex-based deep learning models. Retinexformer introduces channel self-attention in its IG-MSA, but it fails to effectively capture long-range spatial dependencies, leaving room for improvement. Building on the Retinexformer framework, we design the Retinexformer+ network, where the "+" signifies our advances in extracting long-range spatial dependencies. We introduce multi-scale dilated convolutions in illumination estimation to expand the receptive field; these convolutions capture the semantic dependency between pixels, which weakens as distance increases. In illumination restoration, we use Unet++ with multi-level skip connections to better integrate semantic information at different scales. The designed Illumination Fusion Dual Self-Attention (IF-DSA) module embeds multi-scale dilated convolutions to realize spatial self-attention, capturing long-range spatial semantic relationships at acceptable computational complexity. Experimental results on the Low-Light (LOL) dataset show that Retinexformer+ outperforms other state-of-the-art (SOTA) methods in both quantitative and qualitative evaluations, with the computational complexity rising to an acceptable 51.63 GFLOPs. On LOL_v1, Retinexformer+ improves Peak Signal-to-Noise Ratio (PSNR) by 1.15 and reduces Root Mean Square Error (RMSE) by 0.39; on LOL_v2_real, PSNR increases by 0.42 and RMSE decreases by 0.18. Experimental results on the Exdark dataset show that Retinexformer+ can effectively enhance real-scene images while maintaining their semantic information.
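Multi-scale dilated convolutions enlarge the receptive field without extra downsampling by running parallel 3x3 convolutions at growing dilation rates; with padding equal to the dilation, every branch preserves spatial size. A PyTorch sketch with illustrative rates:

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions at several dilation rates; concatenating
    and fusing their outputs widens the receptive field at fixed resolution."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```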
Funding: Supported by the National Key Research and Development Program Topics (Grant No. 2021YFB4000905) and the National Natural Science Foundation of China (Grant Nos. 62101432 and 62102309), and in part by the Shaanxi Natural Science Fundamental Research Program Project (No. 2022JM-508).
Abstract: Existing low-light image enhancement methods struggle with color distortion, lack of vibrancy, and uneven light distribution, and often require paired training data. To address these issues, we propose a two-stage unsupervised low-light enhancement algorithm called Retinex and Exposure Fusion Network (RFNet), which overcomes the over-enhancement of high-dynamic-range regions and the under-enhancement of low-dynamic-range regions seen in existing algorithms. By training with unpaired low-light and normal-light images, the algorithm better handles the complex environments of real-world scenarios. In the first stage, we design a multi-scale feature extraction module based on Retinex theory, capable of extracting detail and structural information at different scales to generate high-quality illumination and reflectance images. In the second stage, an exposure image generator built on a camera response function acquires exposure images containing more dark-region features, and the generated images are fused with the original input to complete the low-light enhancement. Experiments show the effectiveness and rationale of each designed module. The method reconstructs the details of contrast and color distribution, outperforms current state-of-the-art methods in both qualitative and quantitative metrics, and performs excellently in the real world.
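A camera response function relates two exposures of the same scene, which is what lets the generator synthesize a longer exposure from a single input. The abstract does not name RFNet's function, so the sketch below uses the widely cited beta-gamma response model of Ying et al. (2017) as one plausible instantiation, with k the simulated exposure ratio:

```python
import numpy as np

def simulate_exposure(img, k, a=-0.3293, b=1.1258):
    """Brightness transform from the Ying et al. (2017) camera response
    model: g(P, k) = exp(b * (1 - k**a)) * P**(k**a). img is float in
    [0, 1]; k > 1 simulates a longer exposure that reveals dark detail."""
    gamma = k ** a
    beta = np.exp(b * (1 - gamma))
    return np.clip(beta * np.power(np.clip(img, 1e-6, 1.0), gamma), 0, 1)

# One simple fusion of the simulated long exposure with the input:
# fused = 0.5 * img + 0.5 * simulate_exposure(img, k=5.0)
```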
Funding: Supported by the China Scholarship Council, the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX17_0776), and the Natural Science Foundation of NUPT (No. NY214039).
Abstract: A new image enhancement algorithm based on Retinex theory is proposed to solve the problem of poor visual quality of images captured in low-light conditions. First, the image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, illumination is estimated on the V channel by guided filtering and by a variational framework, and the two estimates are combined into a new illumination map according to average gradient. A new reflectance is then calculated from the V channel and the new illumination, and a new V channel, obtained by multiplying the new illumination and reflectance, is processed with contrast-limited adaptive histogram equalization (CLAHE). Finally, the image is converted from HSV back to RGB to obtain the enhanced result. Experimental results show that the proposed method achieves better subjective and objective quality than existing methods.
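The pipeline above is concrete enough to sketch end to end. This simplified version uses only the guided-filter illumination estimate (the paper additionally blends a variational estimate by average gradient) and adds a gamma lift on the illumination as a stand-in brightening step, so the constants are illustrative; cv2.ximgproc requires the opencv-contrib-python package:

```python
import cv2
import numpy as np

def enhance_hsv_retinex(bgr, radius=15, eps=1e-3, clip_limit=2.0):
    """HSV Retinex sketch: estimate illumination on the V channel with a
    guided filter, derive reflectance, recombine, then apply CLAHE."""
    h, s, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
    v_f = v.astype(np.float32) / 255.0
    # Edge-preserving illumination estimate (self-guided).
    illum = cv2.ximgproc.guidedFilter(v_f, v_f, radius, eps)
    illum = np.clip(illum, 1e-3, 1.0)
    reflect = np.clip(v_f / illum, 0, 1)
    # Recombine with a gamma-lifted illumination (illustrative brightening),
    # then equalize local contrast with CLAHE.
    v_new = (np.clip(reflect * np.power(illum, 0.6), 0, 1) * 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((h, s, clahe.apply(v_new))), cv2.COLOR_HSV2BGR)
```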
Abstract: Unmanned aerial vehicle (UAV) target tracking can currently be performed successfully in daytime scenes with sufficient lighting, but it fails in nighttime scenes with inadequate lighting, poor contrast, and low signal-to-noise ratio. This letter presents an enhanced low-light enhancer for UAV nighttime tracking based on Zero-DCE++, chosen for its low processing cost and fast inference. We develop a lightweight UCBAM capable of integrating channel information and spatial features, and, in light of the low signal-to-noise ratio of night scenes, offer a fully considered curve projection model. Tested on the public UAVDark135 benchmark, the method significantly improves the tracking performance of UAV trackers in night scenes compared with other state-of-the-art low-light enhancers, and applying our work to different trackers shows how broadly applicable it is.
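Zero-DCE++, the base of this enhancer, brightens an image by repeatedly applying a learned quadratic curve, LE(x) = x + α·x·(1−x), with per-pixel α maps in [−1, 1] predicted by a small network; this form keeps outputs in [0, 1] by construction. A minimal sketch of the curve application (the predicting network, the letter's UCBAM, and its modified projection model are not reproduced):

```python
import torch

def apply_curve(x, alphas):
    """Zero-DCE-style iterative curve adjustment: each step applies
    LE(x) = x + alpha * x * (1 - x). Repeated application brightens
    dark regions while keeping values in [0, 1]."""
    for a in alphas:  # e.g., 8 curve-parameter maps from the network
        x = x + a * x * (1 - x)
    return x

# Example: brighten a dummy dark frame with a constant alpha for 8 steps.
frame = torch.rand(1, 3, 64, 64) * 0.2
out = apply_curve(frame, [torch.full_like(frame, 0.6)] * 8)
```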
Funding: Supported by National Sciences Foundation of China Grants (No. 61902158).
Abstract: This paper presents a target detection method with high discriminative power, tailored to environments with markedly low luminance. Conventional methods struggle with luminosity fluctuations, especially under diminished radiance, a difficulty exacerbated by suboptimal imaging hardware; the standard YOLOX model is inadequate for mitigating these challenges, so the proposed approach departs from it. To improve performance in low-light conditions, the dehazing algorithm is refined to regulate the transmission rate at the pixel level, reducing it to values below 0.5 and thereby increasing image contrast. The coiflet wavelet transform is then employed to isolate highly discriminative attributes, decomposing low-frequency image components and extracting high-frequency components along different axes. CycleGAN is used to render low-light imagery in a range of styles, and features from images of different styles are merged to increase the model's learning capacity. Empirical validation on the PASCAL VOC and MS COCO 2017 datasets shows pronounced gains: the refined low-light enhancement algorithm yields a 5.9% improvement in the target detection evaluation index compared with the original imagery, and mean Average Precision (mAP) improves by 9.45% and 0.052% on low-light renditions relative to conventional YOLOX results. The proposed approach offers numerous advantages over prevailing benchmark methods for target detection in severely light-starved environments.
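In dark-channel-style dehazing the scene is recovered as J = (I − A)/t + A; capping the per-pixel transmission t below 0.5, as described above, amplifies the (I − A) term and stretches contrast in dim regions. A small NumPy sketch with illustrative cap and floor values:

```python
import numpy as np

def recover_scene(img, atmos, trans, t_cap=0.5, t_floor=0.1):
    """Scene recovery step of dehazing, J = (I - A) / t + A, with the
    transmission capped at t_cap so dim regions gain contrast. img is a
    float (H, W, 3) array in [0, 1], atmos the estimated atmospheric
    light, trans the (H, W) transmission map; t_floor avoids division
    by near-zero values."""
    t = np.clip(np.minimum(trans, t_cap), t_floor, 1.0)
    return np.clip((img - atmos) / t[..., None] + atmos, 0, 1)
```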
Funding: The National Natural Science Foundation of China (Grant Nos. 1872251 and 11875229).
Abstract: Due to their excellent carrier mobility, high absorption coefficient, and narrow bandgap, most 2D IVA metal chalcogenide semiconductors (GIVMCs, metal = Ge, Sn, Pb; chalcogen = S, Se) are regarded as promising candidates for high-performance photodetectors. We synthesized high-quality two-dimensional (2D) tin sulfide (SnS) nanosheets by the physical vapor deposition (PVD) method and fabricated a 2D SnS visible-light photodetector. Under 450 nm blue light illumination, the photodetector exhibits a high photoresponsivity of 161 A·W⁻¹, an external quantum efficiency of 4.45×10⁴%, and a detectivity of 1.15×10⁹ Jones. Moreover, under poor illumination at optical power densities down to 2 mW·cm⁻², the responsivity of the device is higher than at stronger optical densities. We attribute this low-light responsivity mainly to a photogating effect in the 2D SnS photodetector: defects and impurities in 2D SnS trap carriers and form localized electric fields, which delay the recombination of electron-hole pairs, prolong carrier lifetimes, and thus improve the low-light responsivity. This work provides design strategies for detecting low levels of light using photodetectors made of 2D materials.
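The reported responsivity and EQE are mutually consistent, since EQE = R·h·c/(e·λ): 161 A·W⁻¹ at 450 nm gives about 4.4×10⁴%. A short check:

```python
# Consistency check for the reported figures: EQE = R * h * c / (e * wavelength).
h, c, e = 6.62607015e-34, 2.99792458e8, 1.602176634e-19   # SI constants
R, wavelength = 161.0, 450e-9                              # A/W, meters
eqe = R * h * c / (e * wavelength)                         # dimensionless
print(f"EQE = {100 * eqe:.3e} %")   # ~4.44e4 %, matching the reported 4.45e4 %
```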
Funding: Supported by the National Natural Science Foundation of China (62276192).
Abstract: Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but collecting suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, offering more convenience and better generalization. Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment schemes, leaving remnants of other degradations, uneven brightness, and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE, that handles multiple degradations, including illumination attenuation, noise pollution, and color shift, entirely in a self-supervised manner. Illumination attenuation is estimated from physical principles and local neighborhood information; noise removal and color-shift correction are realized solely with noisy images and images exhibiting color shifts. The resulting comprehensive, fully self-supervised approach achieves better adaptability and generalization: it is applicable to various low-light conditions and can reproduce the original colors of scenes under natural light. Extensive experiments on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.
Abstract: A novel frame shift and integral technique for enhancing low-light-level moving image sequences is introduced. In this technique, the motion parameters of the target are measured by an algorithm based on difference processing. To obtain spatial correlation, the frames are shifted according to the motion parameters, after which integration and averaging can be applied to the shifted images. The frame shift and integral technique, including the motion parameter determination algorithm, is discussed, and experiments with low-light-level moving image sequences are described. The results show the effectiveness and robustness of the parameter determination algorithm, and the improvement in the signal-to-noise ratio (SNR) of low-light-level moving images.
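Once per-frame motion is known, shift-and-integrate reduces to aligning the frames and averaging; for N frames with independent noise the SNR gain is roughly √N. A minimal NumPy sketch (np.roll wraps at the borders; a practical implementation would crop or pad instead):

```python
import numpy as np

def shift_and_integrate(frames, shifts):
    """Frame shift and integral: align each frame by the measured target
    motion, then average. frames: list of 2-D arrays; shifts: per-frame
    integer (dy, dx) displacements of the target."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)
```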
Funding: Project supported by the National Natural Science Foundation of China (Nos. 62403242 and 61973167), the Fundamental Research Funds for the Central Universities (No. 30924010932), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX23_0481).
Abstract: On-orbit service is important for maintaining the sustainability of the space environment. A space-based visible camera is an economical and lightweight sensor for situational awareness during on-orbit service, but it is easily affected by low-illumination environments. Deep learning has achieved remarkable success in the enhancement of natural images, yet it is seldom applied in space because of the data bottleneck. In this study, we first propose a dataset of BeiDou navigation satellites for on-orbit low-light image enhancement (LLIE). In the automatic data collection scheme, we focus on reducing the domain gap and improving the diversity of the dataset: we collect hardware-in-the-loop images on a robotic simulation testbed that imitates space lighting conditions, and, to sample poses of different orientations and distances evenly and without collision, we propose a collision-free workspace and pose-stratified sampling. We then develop a novel diffusion model; to enhance image contrast without over-exposure or blurred details, we design fused attention guidance that highlights the structure and the dark regions. Finally, a comparison of our method with previous methods indicates that it achieves better on-orbit LLIE performance.
Funding: This work was supported by the Shanghai Aerospace Science and Technology Innovation Fund (No. SAST2019-048) and the Cross-Media Intelligent Technology Project of the Beijing National Research Center for Information Science and Technology (BNRist) (No. BNR2019TD01022).
Abstract: Poor illumination greatly degrades the quality of captured images. In this paper, a novel convolutional neural network named DEANet is proposed on the basis of Retinex theory for low-light image enhancement. DEANet combines the frequency and content information of images and is divided into three subnetworks: a decomposition network that performs image decomposition; an enhancement network that performs denoising, contrast enhancement, and detail preservation; and an adjustment network that performs image adjustment and generation. The model is trained on the public LOL dataset, and the experimental results show that it outperforms existing state-of-the-art methods in visual effect and image quality.
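Retinex-style decomposition networks such as the one above share a common training constraint: each input must be reconstructed as reflectance × illumination, and reflectance should be invariant across a low/normal-light pair. A hedged PyTorch sketch of that loss pattern; DEANet's actual terms and weights (e.g., illumination smoothness) are not specified here, and the 0.1 weight is hypothetical:

```python
import torch.nn.functional as F

def decomposition_loss(r_low, l_low, r_high, l_high, img_low, img_high):
    """Core Retinex decomposition constraints: reconstruct each image as
    reflectance * illumination, and keep reflectance consistent between
    the low-light and normal-light views of the same scene."""
    recon = (F.l1_loss(r_low * l_low, img_low) +
             F.l1_loss(r_high * l_high, img_high))
    consistency = F.l1_loss(r_low, r_high)  # reflectance is illumination-invariant
    return recon + 0.1 * consistency
```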