Journal Articles
34 articles found
BDMFuse: Multi-scale network fusion for infrared and visible images based on base and detail features
1
Authors: SI Hai-Ping, ZHAO Wen-Rui, LI Ting-Ting, LI Fei-Tao, Fernando Bacao, SUN Chang-Xia, LI Yan-Ling. Journal of Infrared and Millimeter Waves (PKU Core), 2025, Issue 2, pp. 289-298 (10 pages)
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
Keywords: infrared image; visible image; image fusion; encoder-decoder; multi-scale features
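The base/detail idea above can be illustrated without the paper's learned encoders. The sketch below is a minimal filter-based analogue, assuming a simple box blur stands in for the base encoder and the residual for the detail encoder; the fusion rule (average bases, sum details) is a common convention, not the paper's.

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude mean filter used as a low-pass 'base' extractor."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def base_detail_split(img):
    """Base = low-frequency structure; detail = high-frequency residual."""
    img = np.asarray(img, dtype=float)
    base = box_blur(img)
    return base, img - base

def fuse(ir, vis):
    """Average the base layers, keep both detail layers, recompose."""
    b_ir, d_ir = base_detail_split(ir)
    b_vis, d_vis = base_detail_split(vis)
    return 0.5 * (b_ir + b_vis) + d_ir + d_vis
```

By construction, base + detail reconstructs each source exactly, which is the property the compensation encoder in the paper is meant to protect when the learned split loses information.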
PromptFusion: Harmonized Semantic Prompt Learning for Infrared and Visible Image Fusion
2
Authors: Jinyuan Liu, Xingyuan Li, Zirui Wang, Zhiying Jiang, Wei Zhong, Wei Fan, Bin Xu. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 3, pp. 502-515 (14 pages)
The goal of infrared and visible image fusion (IVIF) is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene. However, existing methods struggle to effectively handle modal disparities, resulting in visual degradation of the details and prominent targets of the fused images. To address these challenges, we introduce PromptFusion, a prompt-based approach that harmoniously combines multi-modality images under the guidance of semantic prompts. First, to better characterize the features of different modalities, a contourlet autoencoder is designed to separate and extract the high-/low-frequency components of different modalities, thereby improving the extraction of fine details and textures. We also introduce a prompt learning mechanism using positive and negative prompts, leveraging vision-language models to improve the fusion model's understanding and identification of targets in multi-modality images, leading to improved performance in downstream tasks. Furthermore, we employ bi-level asymptotic convergence optimization. This approach simplifies the intricate non-singleton, non-convex bi-level problem into a series of convergent and differentiable single optimization problems that can be effectively solved through gradient descent. Our approach advances the state of the art, delivering superior fusion quality and boosting the performance of related downstream tasks. Project page: https://github.com/hey-it-s-me/PromptFusion
Keywords: bi-level optimization; image fusion; infrared and visible image; prompt learning
A Mask-Guided Latent Low-Rank Representation Method for Infrared and Visible Image Fusion
3
Authors: Kezhen Xie, Syed Mohd Zahid Syed Zainal Ariffin, Muhammad Izzad Ramli. Computers, Materials & Continua, 2025, Issue 7, pp. 997-1011 (15 pages)
Infrared and visible image fusion technology integrates the thermal radiation information of infrared images with the texture details of visible images to generate more informative fused images. However, existing methods often fail to distinguish salient objects from background regions, leading to detail suppression in salient regions due to global fusion strategies. This study presents a mask-guided latent low-rank representation fusion method to address this issue. First, the GrabCut algorithm is employed to extract a saliency mask, distinguishing salient regions from background regions. Then, latent low-rank representation (LatLRR) is applied to extract deep image features, enhancing key information extraction. In the fusion stage, a weighted fusion strategy strengthens infrared thermal information and visible texture details in salient regions, while an average fusion strategy improves background smoothness and stability. Experimental results on the TNO dataset demonstrate that the proposed method achieves superior performance in SPI, MI, Qabf, PSNR, and EN metrics, effectively preserving salient target details while maintaining balanced background information. Compared to state-of-the-art fusion methods, our approach achieves more stable and visually consistent fusion results. The fusion code is available on GitHub at https://github.com/joyzhen1/Image (accessed on 15 January 2025).
Keywords: infrared and visible image fusion; latent low-rank representation; saliency mask extraction; weighted fusion strategy
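The region-wise rule described above (weighted fusion inside the saliency mask, plain averaging outside) can be sketched directly. This is a minimal illustration only: it assumes the mask is already given (the paper obtains it with GrabCut) and operates on raw pixels rather than LatLRR features; the weight `w_ir` is an illustrative assumption.

```python
import numpy as np

def mask_guided_fuse(ir, vis, mask, w_ir=0.7):
    """Inside the salient mask, weight the infrared image more heavily to
    keep thermal targets; in the background, average for smoothness."""
    ir = np.asarray(ir, dtype=float)
    vis = np.asarray(vis, dtype=float)
    salient = w_ir * ir + (1.0 - w_ir) * vis
    background = 0.5 * (ir + vis)
    return np.where(np.asarray(mask, dtype=bool), salient, background)
```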
Pseudo Color Fusion of Infrared and Visible Images Based on the Rattlesnake Vision Imaging System (Cited by 3)
4
Authors: Yong Wang, Hongqi Liu, Xiaoguang Wang. Journal of Bionic Engineering (SCIE, EI, CSCD), 2022, Issue 1, pp. 209-223 (15 pages)
Image fusion is a key technology in the field of digital image processing. In the present study, an effect-based pseudo-color fusion model of infrared and visible images based on the rattlesnake vision imaging system (the rattlesnake bimodal cell fusion mechanism and the visual receptive field model) is proposed. The innovation of the proposed model lies in three features: first, the introduction of a simple mathematical model of the visual receptive field reduces computational complexity; second, the enhanced image is obtained by extracting the common and unique information of the source images, which improves fusion image quality; and third, the typical Waxman fusion structure is improved for the pseudo-color image fusion model. The performance of the image fusion model is verified through comparative experiments. In the subjective visual evaluation, we find that the color of the fusion image obtained through the proposed model is natural and can highlight the target and scene details. In the objective quantitative evaluation, we observe that the best values on the four indicators, namely standard deviation, average gradient, entropy, and spatial frequency, account for 90%, 100%, 90%, and 100%, respectively, indicating that the fusion image exhibits superior contrast, image clarity, information content, and overall activity. Experimental results reveal that the performance of the proposed model is superior to that of other models, verifying its validity and reliability.
Keywords: bionic; rattlesnake; bimodal cell; infrared image; visible image; image fusion
Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding (Cited by 2)
5
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1441-1461 (21 pages)
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion; Res2Net-Transformer; infrared image; visible image
Sub-Regional Infrared-Visible Image Fusion Using Multi-Scale Transformation (Cited by 2)
6
Authors: Yexin Liu, Ben Xu, Mengmeng Zhang, Wei Li, Ran Tao. Journal of Beijing Institute of Technology (EI, CAS), 2022, Issue 6, pp. 535-550 (16 pages)
Infrared-visible image fusion plays an important role in multi-source data fusion, which has the advantage of integrating useful information from multi-source sensors. However, there are still challenges in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphology and threshold segmentation are applied to extract targets of interest in infrared images. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, target and background regions are fused using a multi-scale transform. Experimental results are obtained using public data for comparison and evaluation, which demonstrate that the proposed SRF has potential benefits over other methods.
Keywords: image fusion; infrared image; visible image; multi-scale transform
An infrared and visible image fusion method based upon multi-scale and top-hat transforms (Cited by 1)
7
Authors: Gui-Qing He, Qi-Qi Zhang, Hai-Xi Zhang, Jia-Qi Ji, Dan-Dan Dong, Jun Wang. Chinese Physics B (SCIE, EI, CAS, CSCD), 2018, Issue 11, pp. 340-348 (9 pages)
The high-frequency components in the traditional multi-scale transform method are approximately sparse and can represent different detail information. But in the low-frequency component, the coefficients around the zero value are very few, so low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, so direct fusion of the low-frequency component is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detailed information at multiple scales and from diverse directions. The combination of the two methods is conducive to the acquisition of more characteristics and more accurate fusion results. For the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and then different fusion rules are applied to fuse the low-frequency features and low-frequency background; for high-frequency components, the product-of-characteristics method is used to integrate the detailed high-frequency information. Experimental results show that the proposed algorithm can obtain more detailed information and clearer infrared target fusion results than the traditional multi-scale transform methods. Compared with the state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and the time consumption is significantly reduced.
Keywords: infrared and visible image fusion; multi-scale transform; mathematical morphology; top-hat transform
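The classical white top-hat transform mentioned above (image minus its morphological opening) can be sketched in a few lines; it keeps small bright structures such as hot infrared targets. This is the textbook operator, not the paper's modified version, and the 3x3 structuring element is an illustrative assumption.

```python
import numpy as np

def _morph(img, size, op):
    """Apply a sliding-window min (erosion) or max (dilation)."""
    r = size // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = op(padded[i:i + size, j:j + size])
    return out

def white_top_hat(img, size=3):
    """Image minus its morphological opening (erosion then dilation):
    retains bright structures smaller than the structuring element."""
    img = np.asarray(img, dtype=float)
    opening = _morph(_morph(img, size, np.min), size, np.max)
    return img - opening
```

A single bright pixel on a flat background survives the top-hat while the background is suppressed to zero, which is why the operator is useful for extracting salient low-frequency features.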
Intelligent Fusion of Infrared and Visible Image Data Based on Convolutional Sparse Representation and Improved Pulse-Coupled Neural Network (Cited by 3)
8
Authors: Jingming Xia, Yi Lu, Ling Tan, Ping Jiang. Computers, Materials & Continua (SCIE, EI), 2021, Issue 4, pp. 613-624 (12 pages)
Multi-source information can be obtained through the fusion of infrared images and visible light images, which carry complementary information. However, existing methods for acquiring fused images have disadvantages such as blurred edges, low contrast, and loss of details. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high-frequency and low-frequency sub-bands by the non-subsampled shearlet transform (NSST). The low-frequency sub-bands are fused by convolutional sparse representation (CSR), and the high-frequency sub-bands are fused by an improved pulse-coupled neural network (IPCNN) algorithm, which effectively solves the difficulty of setting the parameters of the traditional PCNN algorithm and improves the performance of sparse representation with detail injection. The results reveal that the proposed method has more advantages than the existing mainstream fusion algorithms in terms of visual effects and objective indicators.
Keywords: image fusion; infrared image; visible light image; non-subsampled shearlet transform; improved PCNN; convolutional sparse representation
HaIVFusion: Haze-Free Infrared and Visible Image Fusion
9
Authors: Xiang Gao, Yongbiao Gao, Aimei Dong, Jinyong Cheng, Guohua Lv. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 10, pp. 2040-2055 (16 pages)
The purpose of infrared and visible image fusion is to create a single image containing the texture details and significant object information of the source images, particularly in challenging environments. However, existing image fusion algorithms are generally suited to normal scenes. In a hazy scene, much of the texture information in the visible image is hidden, so the results of existing methods are dominated by infrared information, leading to a lack of texture details and poor visual effect. To address these difficulties, we propose a haze-free infrared and visible fusion method, termed HaIVFusion, which can eliminate the influence of haze and obtain richer texture information in the fused image. Specifically, we first design a scene information restoration network (SIRNet) to mine the masked texture information in visible images. Then, a denoising fusion network (DFNet) is designed to integrate the features extracted from infrared and visible images and remove the influence of residual noise as much as possible. In addition, we use a color consistency loss to reduce the color distortion resulting from haze. Furthermore, we publish a dataset of hazy scenes for infrared and visible image fusion to promote research in extreme scenes. Extensive experiments show that HaIVFusion produces fused images with richer texture details and higher contrast in hazy scenes, and achieves better quantitative results than state-of-the-art image fusion methods, even when they are combined with state-of-the-art dehazing methods.
Keywords: deep learning; dehazing; image fusion; infrared image; visible image
Multiscale feature learning and attention mechanism for infrared and visible image fusion (Cited by 3)
10
Authors: GAO Li, LUO DeLin, WANG Song. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 408-422 (15 pages)
Current fusion methods for infrared and visible images tend to extract features at a single scale, which results in insufficient detail and incomplete feature preservation. To address these issues, we propose an infrared and visible image fusion network based on multiscale feature learning and an attention mechanism (MsAFusion). A multiscale dilated convolution framework is employed to capture image features across various scales and broaden the perceptual scope. Furthermore, an attention network is introduced to enhance the focus on salient targets in infrared images and detailed textures in visible images. To compensate for information loss during convolution, skip connections are utilized during the image reconstruction phase. The fusion process utilizes a combined loss function consisting of pixel loss and gradient loss for unsupervised fusion of infrared and visible images. Extensive experiments on a dataset of electricity facilities demonstrate that our proposed method outperforms nine state-of-the-art methods in terms of visual perception and four objective evaluation metrics.
Keywords: infrared and visible images; image fusion; attention mechanism; CNN; feature extraction
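A combined pixel-plus-gradient loss of the kind described above is easy to sketch. This is a generic formulation commonly used for unsupervised IVIF, not necessarily MsAFusion's exact loss: it assumes the pixel term targets the per-pixel maximum of the sources and the gradient term targets the stronger source gradient, with an illustrative weight `alpha`.

```python
import numpy as np

def grad(img):
    """Forward-difference gradients along x and y (same shape as input)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def fusion_loss(fused, ir, vis, alpha=1.0):
    """Pixel term pulls the fused image toward the per-pixel max of the
    sources; gradient term pulls its edges toward the stronger gradient."""
    target = np.maximum(ir, vis)
    pixel = np.mean((fused - target) ** 2)
    fx, fy = grad(fused)
    ix, iy = grad(ir)
    vx, vy = grad(vis)
    tx = np.where(np.abs(ix) > np.abs(vx), ix, vx)
    ty = np.where(np.abs(iy) > np.abs(vy), iy, vy)
    gradient = np.mean((fx - tx) ** 2 + (fy - ty) ** 2)
    return pixel + alpha * gradient
```

In a training loop this scalar would be minimized by gradient descent over the fusion network's parameters; here numpy only illustrates the arithmetic.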
THE FIRST VISIBLE IMAGE FROM FY-2B GMS
11
Acta Meteorologica Sinica (SCIE), 2000, Issue 3, pp. 383-384 (2 pages)
This is the first visible (0.55-1.05 μm) image transmitted from the FY-2B geostationary meteorological satellite (see figure below); it was received by the National Satellite Meteorological Center at 1331 BT (Beijing Time) on July 6, 2000. From the image it can be seen that a frontal cloud system is covering an extensive area from Lake Baikal in Russia and west Mongolia to the eastern part of the Tibetan Plateau in China. Another cyclone cloud system is influencing China's Northeast, North, and Huanghuai areas, and the eastern part of the Northwest area. In addition, the typhoon cloud ...
Keywords: first visible image; FY-2B; GMS
FY-3D Sent Back First Visible Light Images
12
Author: ZONG Wen. Aerospace China, 2017, Issue 4, p. 61 (1 page)
China successfully launched FY-3D by a LM-4C carrier rocket from the Taiyuan Satellite Launch Center at 02:35 Beijing time on November 15. The mission also carried the HEAD-1 experiment satellite, which was developed by SAST, as was the LM-4C carrier rocket. Twenty-two technological improvements were made for this launch mission to meet the satellite's requirements and improve flight reliability. So far, ...
Keywords: FY-3D; first visible light images
Construction of three-dimensional atlas of the lenticular nuclei and its subnucleus based on the cryosection images from Chinese Visible Human: a preliminary study
13
Author: Chen Xiaoguang (陈晓光). 《外科研究与新技术》, 2011, Issue 3, p. 227 (1 page)
Objective: To establish a 3D atlas of the lenticular nuclei and its subnucleus with the cryosection images of the male from the "Atlas of Chinese Visible Human". Methods: The lenticular nuclei and its subnucleus were segmented from the cryosection images and reconstructed with the software ...
Keywords: three-dimensional atlas; lenticular nuclei; subnucleus; cryosection images; Chinese Visible Human
Multi-sensors Image Fusion via NSCT and GoogLeNet (Cited by 4)
14
Authors: LI Yangyu, WANG Caiyun, YAO Chen. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2020, Issue S01, pp. 88-94 (7 pages)
In order to improve the detail preservation and target information integrity of fused images from different sensors, an image fusion method based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the images from different sensors, i.e., infrared and visible images, are transformed by NSCT to obtain a low-frequency sub-band and a series of high-frequency sub-bands, respectively. Then, the high-frequency sub-bands are fused with a max regional energy selection strategy, the low-frequency sub-bands are input into the GoogLeNet neural network model to extract feature maps, and the fusion weight matrices are adaptively calculated from the feature maps. Next, the fused low-frequency sub-band is obtained by weighted summation. Finally, the fused image is obtained by the inverse NSCT. The experimental results demonstrate that the proposed method improves the visual effect of the image and achieves better performance in both edge retention and mutual information.
Keywords: image fusion; non-subsampled contourlet transform; GoogLeNet neural network; infrared image; visible image
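The max regional energy rule used above for the high-frequency sub-bands can be sketched as follows: compute, per coefficient, the sum of squared coefficients over a small neighbourhood, and keep the coefficient whose neighbourhood carries more energy. The 3x3 window is an illustrative assumption; the NSCT decomposition itself is omitted.

```python
import numpy as np

def regional_energy(coeffs, radius=1):
    """Sum of squared coefficients over a (2r+1)^2 window around each pixel."""
    sq = np.asarray(coeffs, dtype=float) ** 2
    k = 2 * radius + 1
    padded = np.pad(sq, radius, mode="edge")
    out = np.empty_like(sq)
    for i in range(sq.shape[0]):
        for j in range(sq.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].sum()
    return out

def fuse_highpass(c_ir, c_vis):
    """Per pixel, keep the coefficient with the larger regional energy."""
    e_ir = regional_energy(c_ir)
    e_vis = regional_energy(c_vis)
    return np.where(e_ir >= e_vis, c_ir, c_vis)
```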
Image Fusion Based on Bioinspired Rattlesnake Visual Mechanism Under Lighting Environments of Day and Night Two Levels
15
Authors: Yong Wang, Hongmin Zou. Journal of Bionic Engineering (SCIE, EI, CSCD), 2024, Issue 3, pp. 1496-1510 (15 pages)
This study, grounded in the Waxman fusion method, introduces an algorithm for the fusion of visible and infrared images tailored to a two-level lighting environment, inspired by the mathematical model of the visual receptive field of rattlesnakes and the mechanism of their bimodal cells. The research is segmented into three components. In the first, we design a preprocessing module to judge the ambient light intensity and divide the lighting environment into two levels: day and night. The second proposes two distinct network structures designed specifically for daytime and nighttime images. For daytime images, where visible light information is predominant, we feed the ON-VIS signal and the IR-enhanced visual signal into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel, respectively. Conversely, for nighttime images, where infrared information takes precedence, the ON-IR signal and the visual-enhanced IR signal are separately input into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel. The outcome is a pseudo-color fused image. The third component employs five different no-reference image quality assessment methods to evaluate the quality of thirteen sets of pseudo-color images produced by fusing infrared and visible information. These images are then compared with those obtained by six other methods cited in the relevant references. The empirical results indicate that this study's outcomes surpass the comparative results in terms of average gradient and spatial frequency. Only one or two sets of fused images underperformed in terms of standard deviation and entropy, and four sets did not perform as well as the comparison in the QAB/F index. In conclusion, the fused images generated through the proposed method show superior performance in terms of scene detail, visual perception, and image sharpness when compared with their counterparts from other methods.
Keywords: rattlesnake; visible image; infrared image; daylight; bionic
High-resolution visible imaging with piezoelectric deformable secondary mirror: experimental results at the 1.8-m adaptive telescope (Cited by 6)
16
Authors: Youming Guo, Kele Chen, Jiahui Zhou, Zhengdai Li, Wenyu Han, Xuejun Rao, Hua Bao, Jinsheng Yang, Xinlong Fan, Changhui Rao. Opto-Electronic Advances (SCIE, EI, CAS, CSCD), 2023, Issue 12, pp. 15-26 (12 pages)
Integrating deformable mirrors within the optical train of an adaptive telescope was one of the major innovations in astronomical observation technology, distinguished by high optical throughput and reduced optical surfaces. Typically, voice-coil actuators are used, which require additional position sensors, internal control electronics, and cooling systems, leading to a very complex structure. Piezoelectric deformable secondary mirror (PDSM) technologies were proposed to overcome these problems. Recently, a high-order PDSM was developed and installed on the 1.8-m telescope at Lijiang Observatory in China to make it an adaptive telescope. The system consists of a 241-actuator piezoelectric deformable secondary mirror, a 192-sub-aperture Shack-Hartmann wavefront sensor, and a multi-core-based real-time controller. The actuator spacing of the PDSM measures 19.3 mm, equivalent to approximately 12.6 cm when mapped onto the primary mirror, significantly less than that of voice-coil-based adaptive telescopes such as LBT, Magellan, and VLT. As a result, stellar images with Strehl ratios above 0.49 in the R band have been obtained. To our knowledge, these are the highest-Strehl R-band images captured by an adaptive telescope with a deformable secondary mirror. Here, we report the system description and on-sky performance of this adaptive telescope.
Keywords: adaptive optics; deformable secondary mirror; visible imaging
A nondestructive method for estimating the total green leaf area of individual rice plants using multi-angle color images (Cited by 1)
17
Authors: Ni Jiang, Wanneng Yang, Lingfeng Duan, Guoxing Chen, Wei Fang, Lizhong Xiong, Qian Liu. Journal of Innovative Optical Health Sciences (SCIE, EI, CAS), 2015, Issue 2, pp. 7-18 (12 pages)
Total green leaf area (GLA) is an important trait for agronomic studies. However, existing methods for estimating the GLA of individual rice plants are destructive and labor-intensive. A nondestructive method for estimating the total GLA of individual rice plants based on multi-angle color images is presented. Using the projected areas of the plant in images, linear, quadratic, exponential, and power regression models for estimating total GLA were evaluated. Tests demonstrated that the side-view projected area had a stronger relationship with the actual total leaf area than the top-view projected area, and power models fit better than the other models. In addition, the use of multiple side-view images was an efficient way to reduce the estimation error, whereas the inclusion of the top-view projected area as a second predictor provided only a slight improvement in the total leaf area estimation. When the projected areas from multi-angle images were used, the estimated leaf area (ELA) from the power model and the actual leaf area had a high correlation coefficient (R2 > 0.98), and the mean absolute percentage error (MAPE) was about 6%. The method is capable of estimating the total leaf area in a nondestructive, accurate, and efficient manner, and it may be used for monitoring rice plant growth.
Keywords: agri-photonics; image processing; plant phenotyping; regression model; visible light imaging
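A power regression of the kind evaluated above, ELA = a * (projected area)^b, can be fitted by ordinary least squares in log-log space. This is a generic sketch with synthetic numbers, not the paper's data or coefficients.

```python
import numpy as np

def fit_power(area, leaf_area):
    """Fit leaf_area ~ a * area**b via linear least squares on the logs."""
    b, log_a = np.polyfit(np.log(area), np.log(leaf_area), 1)
    return np.exp(log_a), b

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))
```

With exact power-law data the fit recovers the coefficients and the MAPE is essentially zero; on real projected-area measurements the paper reports a MAPE of about 6%.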
Deep learning-assisted common temperature measurement based on visible light imaging
18
Authors: Zhu Jiayi, He Zhimin, Huang Cheng, Zeng Jun, Lin Huichuan, Chen Fuchang, Yu Chaoqun, Li Yan, Zhang Yongtao, Chen Huanting, Pu Jixiong. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 8, pp. 230-236 (7 pages)
Real-time, contact-free temperature monitoring in the low-to-medium range (30 °C-150 °C) has been extensively used in industry and agriculture, and is usually realized by costly infrared temperature detection methods. This paper proposes an alternative approach of extracting temperature information in real time from visible light images of the monitoring target using a convolutional neural network (CNN). A mean-square error of < 1.119 °C was reached in temperature measurements in the low-to-medium range using the CNN and visible light images. Imaging angle and imaging distance do not affect the temperature detection from visible optical images by the CNN. Moreover, the CNN has a certain illuminance generalization ability, being capable of detecting temperature information from images that were collected under different illuminance conditions and were not used for training. Compared to the conventional machine learning algorithms mentioned in the recent literature, this real-time, contact-free temperature measurement approach, which does not require any further image processing operations, facilitates temperature monitoring applications in industrial and civil fields.
Keywords: convolutional neural network; visible light image; temperature measurement; low-to-medium-range temperatures
A Visible Light Imaging System for the Estimation of Plasma Vertical Displacement in J-TEXT
19
Authors: Zhu Mengzhou, Zhuang Ge, Wang Zhijiang, Ding Yonghua, Gao Li, Hu Xiwei, Pan Yuan. Plasma Science and Technology (SCIE, EI, CAS, CSCD), 2010, Issue 6, pp. 641-645 (5 pages)
A wide-viewing-angle visible light imaging system (VLIS) was mounted on the Joint Texas Experimental Tokamak (J-TEXT) to monitor the discharge process. It is proposed that the plasma vertical displacement can be estimated from the recorded image data. In this paper, the installation and operation of the VLIS are presented in detail. The estimated result is further compared with that measured by an array of magnetic pickup coils. Their consistency verifies that estimating the plasma vertical displacement in J-TEXT from the imaging data is promising.
Keywords: visible light imaging; vertical displacement; J-TEXT
UFEFusion: a new visible-infrared image fusion model combined with CBAM
20
Authors: Yawei Ren, Rui Zhou, Jun Li. International Journal of Intelligent Computing and Cybernetics, 2025, Issue 2, pp. 326-352 (27 pages)
Purpose - Current multi-source image fusion methods frequently overlook detailed features when employing deep learning technology, resulting in inadequate target feature information. In real-world mission scenarios, such as military information acquisition or medical image enhancement, the prominence of target feature information is of paramount importance. To address these challenges, this paper introduces a novel infrared-visible light fusion model. Design/methodology/approach - Leveraging the foundational architecture of the traditional DenseFuse model, this paper optimizes the backbone network structure and incorporates a Unique Feature Encoder (UFE) to meticulously extract the distinctive features inherent in the two images. Furthermore, it integrates the Convolutional Block Attention Module (CBAM) and the Squeeze-and-Excitation Network (SE) to enhance and replace the original spatial and channel attention mechanisms. Findings - Compared to other methods such as IFCNN, NestFuse, and DenseFuse, the entropy, standard deviation, and mutual information indices of the proposed method reach 6.9985, 82.6652, and 13.6022, respectively, a significant improvement over the other methods. Originality/value - This paper presents a UFEFusion framework that synergizes with the CBAM attention mechanism to markedly augment the extraction of detailed features relative to other methods. Moreover, the framework adeptly extracts and amplifies unique features from disparate images, thereby elevating the overall feature representation capability.
Keywords: deep learning; image fusion; infrared and visible images; convolutional block attention module; UFE structure