Journal Articles
11,677 articles found
1. A lightweight physics-conditioned diffusion multi-model for medical image reconstruction
Authors: Raja Vavekanand, Ganesh Kumar, Shakhlokhon Kurbanova. Biomedical Engineering Communications, 2026, Issue 2, pp. 50-59 (10 pages)
Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, or task-specific deep learning approaches lacking generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement-consistency losses, the model adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil / 31.50 for multi-coil) and the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. The model's lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
Keywords: medical image reconstruction; physics-conditioned diffusion; multi-task learning; self-supervised fine-tuning; multimodal fusion; lightweight neural networks
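The physics-conditioned idea in this abstract, embedding the acquisition operator (for MRI, a Fourier sampling mask) into the reconstruction, is commonly enforced with a hard data-consistency step. Below is a minimal NumPy sketch of that step; it is not the authors' LDM-PIR code, and the function name is illustrative.

```python
import numpy as np

def data_consistency(estimate, measured_kspace, mask):
    """Replace the estimate's k-space values with the acquired samples
    wherever the sampling mask is True; keep the estimate elsewhere."""
    k_est = np.fft.fft2(estimate)
    k_dc = np.where(mask, measured_kspace, k_est)
    return np.real(np.fft.ifft2(k_dc))

# Toy check: with a fully sampled mask, the step returns the measured image.
rng = np.random.default_rng(0)
truth = rng.random((8, 8))
full_mask = np.ones((8, 8), dtype=bool)
recon = data_consistency(rng.random((8, 8)), np.fft.fft2(truth), full_mask)
```

In a diffusion-based reconstruction loop, such a projection would typically be applied after each denoising step so that the output never contradicts the acquired measurements.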
2. Application of Image Fusion Methods to Cell Imaging Processing
Authors: 李勤, 代彩虹, 俞信, 王苏生, 张同存, 曹恩华, 李景福. Journal of Beijing Institute of Technology (EI, CAS), 1998, Issue 4, pp. 412-417 (6 pages)
Aim: To fuse the fluorescence image and transmission image of a cell into a single image containing more information than any of the individual images. Methods: Image fusion technology was applied to biological cell imaging processing. It could match the images and improve the confidence and spatial resolution of the images. Using two algorithms, a double-thresholds algorithm and a denoising algorithm based on the wavelet transform, the fluorescence image and transmission image of a cell were merged into a composite image. Results and Conclusion: The position of fluorescence and the structure of the cell can be displayed in the composite image. The signal-to-noise ratio of the resultant image is improved to a large extent. The algorithms are not only useful for investigating fluorescence and transmission images, but also suitable for observing two or more fluorescent label probes in a single cell.
Keywords: image fusion; wavelet transform; double thresholds algorithm; denoising algorithms; living cell image
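Wavelet-transform fusion of the kind this abstract describes typically averages the low-pass band and keeps the larger-magnitude detail coefficients. The sketch below uses a hand-rolled one-level Haar transform in NumPy (not the paper's algorithm; a real implementation would use a proper wavelet library and multiple levels).

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

def ihaar2d(ll, hl, lh, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def wavelet_fuse(img1, img2):
    """Fuse two registered images: average the low-pass band, keep the
    larger-magnitude coefficient in each detail band."""
    c1, c2 = haar2d(img1), haar2d(img2)
    fused = [(c1[0] + c2[0]) / 2.0]
    for b1, b2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(b1) >= np.abs(b2), b1, b2))
    return ihaar2d(*fused)
```

Applied to a registered fluorescence/transmission pair, the max-abs detail rule preserves the sharp structures from whichever modality expresses them more strongly.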
3. Fluorescence molecular imaging system and fusion algorithm based on 2CCD camera
Authors: 王玉, 王明泉, 杨晓峰, 王艳翔. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2016, Issue 2, pp. 161-164 (4 pages)
Infrared and visible light images can be obtained simultaneously by building a fluorescence imaging system, which includes fluorescence excitation, image acquisition, mechanical, image transmission, and processing sections. The system is built around the two-charge-coupled-device (2CCD) camera AD-080CL from JAI. A fusion algorithm for visible light and near-infrared images was designed for the fluorescence imaging system using a wavelet-transform image fusion algorithm. In order to enhance the fluorescent moiety of the fusion image, the luminance value of the green component of the color image was changed. Using the Microsoft Foundation Classes (MFC) application architecture, the supporting software system was built in the VS2010 environment.
Keywords: fluorescence imaging system; image fusion; wavelet transform; Microsoft Foundation Classes (MFC)
4. Pseudo Color Fusion of Infrared and Visible Images Based on the Rattlesnake Vision Imaging System (Cited by 3)
Authors: Yong Wang, Hongqi Liu, Xiaoguang Wang. Journal of Bionic Engineering (SCIE, EI, CSCD), 2022, Issue 1, pp. 209-223 (15 pages)
Image fusion is a key technology in the field of digital image processing. In the present study, an effect-based pseudo color fusion model of infrared and visible images based on the rattlesnake vision imaging system (the rattlesnake bimodal cell fusion mechanism and the visual receptive field model) is proposed. The innovation of the proposed model lies in the following three features: first, the introduction of a simple mathematical model of the visual receptive field reduces computational complexity; second, the enhanced image is obtained by extracting the common information and unique information of the source images, which improves fusion image quality; and third, the typical Waxman fusion structure is improved for the pseudo color image fusion model. The performance of the image fusion model is verified through comparative experiments. In the subjective visual evaluation, we find that the color of the fusion image obtained through the proposed model is natural and can highlight the target and scene details. In the objective quantitative evaluation, we observe that the best values on the four indicators, namely standard deviation, average gradient, entropy, and spatial frequency, account for 90%, 100%, 90%, and 100%, respectively, indicating that the fusion image exhibits superior contrast, image clarity, information content, and overall activity. Experimental results reveal that the performance of the proposed model is superior to that of other models, verifying the validity and reliability of the model.
Keywords: bionic; rattlesnake; bimodal cell; infrared image; visible image; image fusion
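The common/unique decomposition mentioned in this abstract can be illustrated with a very small Waxman-style color mapping: shared information drives green, modality-unique information drives the red/blue channels. This is only a rough sketch of the general idea, not the paper's receptive-field model, and the channel assignment here is an assumption.

```python
import numpy as np

def pseudo_color_fuse(vis, ir):
    """Toy pseudo-color fusion: common information (pixel-wise minimum)
    goes to green; IR-unique content pushes red, visible-unique content
    pushes blue, so hot targets appear reddish and visible-only detail
    bluish. Inputs are floats in [0, 1]."""
    vis = vis.astype(float); ir = ir.astype(float)
    common = np.minimum(vis, ir)      # information present in both images
    uniq_ir = ir - common             # IR-only component
    uniq_vis = vis - common           # visible-only component
    rgb = np.stack([common + uniq_ir,   # R channel
                    common,             # G channel
                    common + uniq_vis], # B channel
                   axis=-1)
    return np.clip(rgb, 0.0, 1.0)
```

The actual model additionally applies receptive-field filtering and enhancement before the color mapping; this sketch only shows the opponent-channel idea.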
5. Spinal fusion-hardware construct: Basic concepts and imaging review (Cited by 2)
Authors: Mohamed Ragab Nouh. World Journal of Radiology (CAS), 2012, Issue 5, pp. 193-206 (14 pages)
The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially those used in their own institution. This is critical in evaluating the position of implants and potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods, reports on the best yield for each modality, and describes how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential, as they are the reference point for evaluation of future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies, as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging.
Keywords: hardware; imaging; instrumentation; spinal fusion; spine
6. Cognitive magnetic resonance imaging-ultrasound fusion transperineal targeted biopsy combined with randomized biopsy in detection of prostate cancer (Cited by 5)
Authors: Cheng Pang, Miao Wang, Hui-Min Hou, Jian-Yong Liu, Zhi-Peng Zhang, Xuan Wang, Ya-Qun Zhang, Chun-Mei Li, Wei Zhang, Jian-Ye Wang, Ming Liu. World Journal of Clinical Cases (SCIE), 2021, Issue 36, pp. 11183-11192 (10 pages)
BACKGROUND: Prostate cancer (PCa) is one of the most common cancers among men. Various strategies for targeted biopsy based on multiparametric magnetic resonance imaging (mp-MRI) have emerged in recent years, which may improve the accuracy of detecting clinically significant PCa. AIM: To investigate the diagnostic efficiency of a template for cognitive MRI-ultrasound fusion transperineal targeted plus randomized biopsy in detecting PCa. METHODS: Data from patients with an increasing prostate-specific antigen (PSA) level below 20 ng/mL and at least one lesion suspicious for PCa on MRI from December 2015 to June 2018 were retrospectively analyzed. All patients underwent cognitive fusion transperineal template-guided targeted biopsy followed by randomized biopsy outside the targeted area. A total of 127 patients with complete data were included in the final analysis. A multivariable logistic regression analysis was conducted, and a two-sided P < 0.05 was considered statistically significant. RESULTS: PCa was detected in 66 of 127 patients, and 56 cases presented clinically significant PCa. Cognitive fusion targeted biopsy alone detected 59/127 cases of PCa, specifically 52/59 cases with clinically significant PCa and 7/59 cases with clinically insignificant PCa. Randomized biopsy detected seven cases of PCa that were negative on targeted biopsy, of which four were clinically significant. PSA density (PSAD) (OR: 1.008, 95%CI: 1.003-1.012, P = 0.001; OR: 1.006, 95%CI: 1.002-1.010, P = 0.004) and Prostate Imaging-Reporting and Data System (PI-RADS) scores (both P < 0.001) were independently associated with the results of cognitive fusion targeted biopsy combined with randomized biopsy and of targeted biopsy alone. CONCLUSION: This single-centered study proposed a feasible template for cognitive MRI-ultrasound fusion transperineal targeted plus randomized biopsy. Patients with higher PSAD and PI-RADS scores were more likely to be diagnosed with PCa.
Keywords: prostate neoplasms; magnetic resonance imaging; cognitive fusion; prostate biopsy; prostate cancer
7. Diffusion-weighted magnetic resonance imaging reflects activation of signal transducer and activator of transcription 3 during focal cerebral ischemia/reperfusion (Cited by 2)
Authors: Wen-juan Wu, Chun-juan Jiang, Zhui-yang Zhang, Kai Xu, Wei Li. Neural Regeneration Research (SCIE, CAS, CSCD), 2017, Issue 7, pp. 1124-1130 (7 pages)
Signal transducer and activator of transcription (STAT) is a unique protein family that binds to DNA, coupled with tyrosine phosphorylation signaling pathways, acting as a transcriptional regulator to mediate a variety of biological effects. Cerebral ischemia and reperfusion can activate the STAT signaling pathway, but no studies have confirmed whether STAT activation can be verified by diffusion-weighted magnetic resonance imaging (DWI) in rats after cerebral ischemia/reperfusion. Here, we established a rat model of focal cerebral ischemia injury using the modified Longa method. DWI revealed hyperintensity in parts of the left hemisphere before reperfusion and a low apparent diffusion coefficient. STAT3 protein expression showed no significant change after reperfusion, but phosphorylated STAT3 expression began to increase after 30 minutes of reperfusion and peaked at 24 hours. Pearson correlation analysis showed that STAT3 activation was correlated positively with the relative apparent diffusion coefficient and negatively with the DWI abnormal signal area. These results indicate that DWI is a reliable representation of the infarct area and reflects STAT phosphorylation in rat brain following focal cerebral ischemia/reperfusion.
Keywords: nerve regeneration; cerebral ischemia/reperfusion; magnetic resonance imaging; diffusion-weighted imaging; signal transducer and activator of transcription 3; phosphorylated signal transducer and activator of transcription 3; apparent diffusion coefficient; relative apparent diffusion coefficient; immunohistochemistry; western blot assay; neural regeneration
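The apparent diffusion coefficient (ADC) referenced in this abstract is derived from the mono-exponential DWI signal model S_b = S_0 * exp(-b * ADC). A minimal sketch of the two-point calculation follows; the signal values are made-up illustrations, not data from the study.

```python
import math

def apparent_diffusion_coefficient(s0, sb, b=1000.0):
    """ADC (mm^2/s) from signals at b = 0 and b = b s/mm^2, using the
    mono-exponential model S_b = S_0 * exp(-b * ADC)."""
    return math.log(s0 / sb) / b

# Restricted diffusion in acute ischemia lowers ADC: the DWI signal at
# b = 1000 stays relatively high compared with healthy tissue.
adc_normal = apparent_diffusion_coefficient(1000.0, 480.0)    # ~7.3e-4
adc_ischemic = apparent_diffusion_coefficient(1000.0, 700.0)  # ~3.6e-4
```

The "relative ADC" used in the study is then the lesion ADC divided by the value from the homologous contralateral region.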
8. BDMFuse: Multi-scale network fusion for infrared and visible images based on base and detail features
Authors: SI Hai-Ping, ZHAO Wen-Rui, LI Ting-Ting, LI Fei-Tao, Fernando Bacao, SUN Chang-Xia, LI Yan-Ling. 红外与毫米波学报 (Journal of Infrared and Millimeter Waves, PKU Core), 2025, Issue 2, pp. 289-298 (10 pages)
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
Keywords: infrared image; visible image; image fusion; encoder-decoder; multi-scale features
9. Magnetic resonance imaging evaluation and nuclear receptor binding SET domain protein 1 mutation in the Sotos syndrome with attention-deficit/hyperactivity disorder
Authors: Wei Zhu. World Journal of Clinical Cases (SCIE), 2025, Issue 2, pp. 5-9 (5 pages)
Sotos syndrome is characterized by overgrowth features and is caused by alterations in the nuclear receptor binding SET domain protein 1 gene. Attention-deficit/hyperactivity disorder (ADHD) is considered a neurodevelopmental and psychiatric disorder of childhood. Genetic characteristics and clinical presentation could play an important role in the diagnosis of Sotos syndrome and ADHD. Magnetic resonance imaging (MRI) has been used to assess medical images in Sotos syndrome and ADHD. In this editorial, the display of image findings on MRI is considered, and wavelet fusion is used to integrate distinct images so that more complete information is achieved in a single image. In the future, genetic mechanisms and artificial intelligence applied to medical images could be used in the clinical diagnosis of Sotos syndrome and ADHD.
Keywords: Sotos syndrome; attention-deficit/hyperactivity disorder; genetic mutation; magnetic resonance imaging; wavelet fusion
10. Value of ^(99m)Tc-MDP SPECT/CT fusion imaging and CT in evaluating the extent of mandibular invasion by malignant tumor of oral cavity (Cited by 1)
Authors: Qingyun Duan, Muyun Jia, Rongtao Yuan, Lingxue Bu, Wei Shang, Xiaoming Jin, Ningyi Li, Jie Zhao, Guoming Wang. The Chinese-German Journal of Clinical Oncology (CAS), 2012, Issue 12, pp. 694-698 (5 pages)
Objective: The aim of our study was to compare the value of computed tomography (CT) and 99mTc-methylene-diphosphonate (MDP) SPECT (single photon emission computed tomography)/CT fusion imaging in determining the extent of mandibular invasion by malignant tumor of the oral cavity. Methods: This study had local ethical committee approval, and all patients gave written informed consent. Fifty-three patients with mandibular invasion by malignant tumor of the oral cavity underwent CT and SPECT/CT. The patients were divided into two groups: group A (invasion-periphery type) and group B (invasion-center type). Two radiologists assessed the CT images and two nuclear medicine physicians separately assessed the SPECT/CT images, in consensus and without knowledge of the results of the other imaging tests. The extent of bone involvement suggested by an imaging modality was compared with pathological findings in the surgical specimen. Results: With pathological findings as the standard of reference, in group A the extent of mandibular invasion measured on SPECT/CT was 1.02 ± 0.20 cm larger than that determined by pathological examination, while the extent measured on CT was 1.42 ± 0.35 cm smaller; the differences among the three methods were significant (P < 0.01). In group B, the extent measured on SPECT/CT was 1.3 ± 0.39 cm larger than that determined by pathological examination, and the extent measured on CT was 2.55 ± 1.44 cm smaller; the differences among the three methods were significant (P < 0.01). The extent of mandibular invasion on SPECT/CT was the extent the surgeon must excise to obtain clear margins. Conclusion: SPECT/CT fusion imaging has significant clinical value in determining the extent of mandibular invasion by malignant tumor of the oral cavity.
Keywords: SPECT/CT fusion imaging; mandibular invasion; malignant tumor
11. HaIVFusion: Haze-Free Infrared and Visible Image Fusion
Authors: Xiang Gao, Yongbiao Gao, Aimei Dong, Jinyong Cheng, Guohua Lv. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 10, pp. 2040-2055 (16 pages)
The purpose of infrared and visible image fusion is to create a single image containing the texture details and significant object information of the source images, particularly in challenging environments. However, existing image fusion algorithms are generally suited to normal scenes. In hazy scenes, much of the texture information in the visible image is hidden, so the results of existing methods are dominated by infrared information, leading to a lack of texture detail and poor visual effect. To address these difficulties, we propose a haze-free infrared and visible fusion method, termed HaIVFusion, which can eliminate the influence of haze and obtain richer texture information in the fused image. Specifically, we first design a scene information restoration network (SIRNet) to mine the masked texture information in visible images. Then, a denoising fusion network (DFNet) is designed to integrate the features extracted from infrared and visible images and remove the influence of residual noise as much as possible. In addition, we use a color consistency loss to reduce the color distortion resulting from haze. Furthermore, we publish a dataset of hazy scenes for infrared and visible image fusion to promote research in extreme scenes. Extensive experiments show that HaIVFusion produces fused images with increased texture details and higher contrast in hazy scenes, and achieves better quantitative results than state-of-the-art image fusion methods, even when the latter are combined with state-of-the-art dehazing methods.
Keywords: deep learning; dehazing; image fusion; infrared image; visible image
12. Diagnosis of osteosarcoma based on multimodal microscopic imaging and deep learning
Authors: Zihan Wang, Jinjin Wu, Chenbei Li, Bing Wang, Qingxia Wu, Lan Li, Huijie Wang, Chao Tu, Jianhua Yin. Journal of Innovative Optical Health Sciences, 2025, Issue 2, pp. 47-56 (10 pages)
Osteosarcoma is the most common primary bone tumor with high malignancy. Rapid and accurate diagnosis is particularly necessary in its intraoperative examination and early diagnosis. Accordingly, a multimodal microscopic imaging diagnosis system combining bright-field, spontaneous fluorescence, and polarized-light microscopic imaging was used to study the pathological mechanism of osteosarcoma at the tissue-microenvironment level and achieve rapid and accurate diagnosis. First, multimodal microscopic images of normal and osteosarcoma tissue slices were collected to characterize the overall morphology of the tissue microenvironment of the samples, the arrangement of collagen fibers, and the content and distribution of endogenous fluorescent substances. Second, based on the correlation and complementarity of the feature information contained in the three single-modal images, a multimodal intelligent diagnosis model was constructed by combining a convolutional neural network (CNN) with image fusion methods, effectively improving information utilization and diagnostic accuracy. The accuracy and true positivity of the multimodal diagnostic model were significantly improved to 0.8495 and 0.9412, respectively, compared with those of the single-modal models. Besides, the difference in tissue microenvironments before and after cancerization can be used as a basis for cancer diagnosis, and information extraction and intelligent diagnosis of osteosarcoma tissue can be achieved by using multimodal microscopic imaging technology combined with deep learning, which significantly promotes the application of tissue-microenvironment analysis in pathological examination. This diagnostic system, with its advantages of simple operation, high efficiency and accuracy, and high cost-effectiveness, has enormous clinical application potential and research significance.
Keywords: multimodal imaging; image fusion; deep learning; osteosarcoma; intelligent diagnosis
13. An Infrared-Visible Image Fusion Network with Channel-Switching for Low-Light Object Detection
Authors: Tianzhe Jiao, Yuming Chen, Xiaoyue Feng, Chaopeng Guo, Jie Song. Computers, Materials & Continua, 2025, Issue 11, pp. 2681-2700 (20 pages)
Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images. However, the inherent differences in the imaging mechanisms of the visible and infrared modalities make effective cross-modal fusion challenging. Furthermore, constrained by the physical characteristics of sensors and thermal diffusion effects, infrared images generally suffer from blurred object contours and missing details, making it difficult to extract object features effectively. To address these issues, we propose an infrared-visible image fusion network that realizes multimodal information fusion of infrared and visible images through a carefully designed multiscale fusion strategy. First, we design an adaptive gray-radiance enhancement (AGRE) module to strengthen the detail representation in infrared images, improving their usability in complex lighting scenarios. Next, we introduce a channel-spatial feature interaction (CSFI) module, which achieves efficient complementarity between the RGB and infrared (IR) modalities via dynamic channel switching and a spatial attention mechanism. Finally, we propose a multi-scale enhanced cross-attention fusion (MSECA) module, which optimizes the fusion of multi-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features on a global scale, thereby enhancing the expressiveness of the fused features. Experiments on the KAIST, M3FD, and FLIR datasets demonstrate that our method delivers outstanding performance in daytime and nighttime scenarios. On the KAIST dataset, the miss rate drops to 5.99%, and further to 4.26% in night scenes. On the FLIR and M3FD datasets, it achieves AP50 scores of 79.4% and 88.9%, respectively.
Keywords: infrared-visible image fusion; channel switching; low-light object detection; cross-attention fusion
14. LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement
Authors: Song Qian, Guzailinuer Yiming, Ping Li, Junfei Yang, Yan Xue, Shuping Zhang. Computers, Materials & Continua, 2025, Issue 3, pp. 4069-4091 (23 pages)
Infrared and visible light image fusion technology integrates feature information from two different modalities into a fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail information from the scene, and relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method improves on the MobileOne Block, using an Edge-MobileOne Block embedded with the Sobel operator to perform feature extraction and downsampling on the source images. The intermediate features obtained at different scales are then fused by a cross-modal attention fusion module. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is used for image enhancement of both infrared and visible light images, guiding the network model to learn low-light enhancement capabilities through an enhancement loss. Upon completion of network training, the Edge-MobileOne Block is optimized into a direct-connection structure similar to MobileNetV1 through structural reparameterization, effectively reducing computational resource consumption. Finally, in extensive experimental comparisons, our method achieved improvements of 4.6%, 40.5%, 156.9%, 9.2%, and 98.6% in the evaluation metrics Standard Deviation (SD), Visual Information Fidelity (VIF), Entropy (EN), and Spatial Frequency (SF), respectively, over the best results of the compared algorithms, while being only 1.5 ms/it slower than the fastest method.
Keywords: infrared images; image fusion; low-light enhancement; feature extraction; computational resource optimization
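The CLAHE enhancement this abstract uses as a training target builds on plain histogram equalization. The sketch below shows only the global equalization core in NumPy; full CLAHE additionally clips the histogram and applies the mapping per tile with bilinear blending, which is omitted here.

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image:
    map each intensity through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[cdf > 0].min()
    denom = max(cdf[-1] - cdf_min, 1.0)   # avoid /0 for constant images
    lut = np.round(255.0 * np.clip((cdf - cdf_min) / denom, 0.0, 1.0))
    return lut.astype(np.uint8)[img]
```

Stretching a low-light image's compressed intensity range in this way is what exposes the texture detail that the fusion network is then trained to reproduce.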
15. Image Mosaic Method of Capsule Endoscopy Intestinal Wall Based on Improved Weighted Fusion
Authors: MA Ting, WU Jianfang, HU Feng, NIE Wei, LIU Youxin. Journal of Shanghai Jiaotong University (Science), 2025, Issue 3, pp. 535-544 (10 pages)
There is still a dearth of systematic study on image stitching techniques for the natural tubular structure of the intestines, and traditional stitching techniques apply poorly to endoscopic images with deep scenes. A method is therefore developed to recreate the intestinal wall in two dimensions. Because intestinal features are not obvious and are usually arranged in a circle, the normalized Laplacian algorithm is used to enhance the image, which is then transformed into polar coordinates in order to extract the new image segments of the current image relative to the previous image. The improved weighted fusion algorithm is then used to sequentially splice the segment images. The experimental results demonstrate that the suggested approach can improve image clarity and minimize noise while maintaining the information content of intestinal images. In addition, the seamless transition between the final portions of a panoramic image demonstrates that the stitching trace has been removed.
Keywords: capsule endoscopy; image stitching; intestinal wall; image enhancement; improved weighted fusion
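The weighted-fusion splicing this abstract improves on starts from basic feathered blending: across the overlap between two strips, the weight of one image ramps down linearly while the other ramps up, hiding the seam. A minimal NumPy sketch of that baseline follows (the paper's improved weighting scheme is not reproduced here).

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Stitch two image strips that share `overlap` columns, blending
    the shared region with a linear weight ramp across the seam."""
    w = np.linspace(1.0, 0.0, overlap)    # weight for the left strip
    blended = w * left[:, -overlap:] + (1.0 - w) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])
```

After the polar-coordinate unwrapping described above, each newly extracted segment would be blended onto the growing panorama with such a ramp, one segment at a time.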
16. PromptFusion: Harmonized Semantic Prompt Learning for Infrared and Visible Image Fusion
Authors: Jinyuan Liu, Xingyuan Li, Zirui Wang, Zhiying Jiang, Wei Zhong, Wei Fan, Bin Xu. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 3, pp. 502-515 (14 pages)
The goal of infrared and visible image fusion (IVIF) is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene. However, existing methods struggle to effectively handle modal disparities, resulting in visual degradation of the details and prominent targets of the fused images. To address these challenges, we introduce PromptFusion, a prompt-based approach that harmoniously combines multi-modality images under the guidance of semantic prompts. Firstly, to better characterize the features of different modalities, a contourlet autoencoder is designed to separate and extract the high-/low-frequency components of different modalities, thereby improving the extraction of fine details and textures. We also introduce a prompt learning mechanism using positive and negative prompts, leveraging vision-language models to improve the fusion model's understanding and identification of targets in multi-modality images, leading to improved performance in downstream tasks. Furthermore, we employ bi-level asymptotic convergence optimization. This approach simplifies the intricate non-singleton non-convex bi-level problem into a series of convergent and differentiable single optimization problems that can be effectively resolved through gradient descent. Our approach advances the state of the art, delivering superior fusion quality and boosting the performance of related downstream tasks. Project page: https://github.com/hey-it-s-me/PromptFusion.
Keywords: bi-level optimization; image fusion; infrared and visible image; prompt learning
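PromptFusion's contourlet autoencoder separates high- and low-frequency content before fusing. As a much simpler illustration of that base/detail idea (not the paper's architecture — the Gaussian filter, weights, and all function names here are stand-ins of my own), a blur can play the role of the low-frequency extractor:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, size=5, sigma=1.0):
    """Separable Gaussian blur via 1D convolutions along rows, then columns."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def split_bands(img):
    """Split an image into a low-frequency base and a high-frequency detail layer."""
    base = blur(img)
    return base, img - base

def fuse(ir, vis):
    """Average the base layers; keep the stronger detail response per pixel."""
    ir_base, ir_detail = split_bands(ir)
    vis_base, vis_detail = split_bands(vis)
    detail = np.where(np.abs(ir_detail) >= np.abs(vis_detail), ir_detail, vis_detail)
    return 0.5 * (ir_base + vis_base) + detail
```

Fusing an image with itself reproduces the input, which is a quick sanity check that the base/detail split is lossless.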
Visible and near-infrared image fusion based on information complementarity
17
Authors: Zhuo Li, Shiliang Pu, Mengqi Ji, Feng Zeng, Bo Li. CAAI Transactions on Intelligence Technology, 2025, No. 1, pp. 193-206 (14 pages)
Images with complementary spectral information can be recorded using image sensors that sense both the visible and near-infrared spectra. The fusion of visible and near-infrared (NIR) images aims to enhance the quality of images acquired by video monitoring systems for the ease of user observation and data processing. Unfortunately, current fusion algorithms produce artefacts and colour distortion since they cannot make use of spectrum properties and are lacking in information complementarity. Therefore, an information complementarity fusion (ICF) model is designed based on physical signals. In order to separate high-frequency noise from important information in distinct frequency layers, the authors first extracted texture-scale and edge-scale layers using a two-scale filter. Second, the difference map between the visible and near-infrared images was filtered using the extended-DoG filter to produce the initial visible-NIR complementary weight map. Then, to generate a guide map, the near-infrared image with night adjustment was processed as well. The final complementarity weight map was subsequently derived via an arctan function mapping using the guide map and the initial weight maps. Finally, fusion images were generated with the complementarity weight maps. The experimental results demonstrate that the proposed approach outperforms the state of the art both in avoiding artificial colours and in effectively utilising information complementarity.
Keywords: color distortion; image fusion; information complementarity; low light; near-infrared
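The arctan mapping from a complementarity map to fusion weights can be sketched as follows. This is a minimal illustration, assuming a simple difference-plus-guide argument and a steepness constant `k` of my own choosing, not the paper's exact formulation:

```python
import numpy as np

def complementarity_weight(vis, nir, guide, k=4.0):
    """Map a visible-NIR difference to a fusion weight in (0, 1) via arctan.

    Where NIR carries extra detail (positive difference), the weight leans
    toward NIR; the guide map biases the decision. k controls steepness.
    """
    diff = nir - vis
    return 0.5 + np.arctan(k * (diff + guide)) / np.pi  # squash to (0, 1)

def fuse(vis, nir, guide):
    """Pixel-wise weighted blend of the two inputs using the weight map."""
    w = complementarity_weight(vis, nir, guide)
    return w * nir + (1.0 - w) * vis
```

When the two inputs agree and the guide is zero, the weight is exactly 0.5 and the fused result equals the input, so the mapping degrades gracefully where there is no complementary information.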
Multi-Scale Feature Fusion and Advanced Representation Learning for Multi Label Image Classification
18
Authors: Naikang Zhong, Xiao Lin, Wen Du, Jin Shi. Computers, Materials & Continua, 2025, No. 3, pp. 5285-5306 (22 pages)
Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on single-scale deep features, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of the proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Keywords: image classification; multi-label; multi-scale; attention mechanisms; feature fusion
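The first-stage idea of extracting features at several resolutions can be sketched with plain average pooling. This is a toy stand-in for the paper's Transformer branches; the scales and function names are illustrative choices, not the authors':

```python
import numpy as np

def avg_pool(x, k):
    """k x k average pooling with stride k (H and W must be divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_features(img, scales=(2, 4, 8)):
    """Concatenate flattened pooled maps from several scales into one vector.

    Finer scales preserve local detail for small objects; coarser scales
    summarize global context for large ones.
    """
    return np.concatenate([avg_pool(img, s).ravel() for s in scales])
```

For an 8x8 input with scales (2, 4, 8), the descriptor has 16 + 4 + 1 = 21 entries, and its last entry is the global mean of the image.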
A Mask-Guided Latent Low-Rank Representation Method for Infrared and Visible Image Fusion
19
Authors: Kezhen Xie, Syed Mohd Zahid Syed Zainal Ariffin, Muhammad Izzad Ramli. Computers, Materials & Continua, 2025, No. 7, pp. 997-1011 (15 pages)
Infrared and visible image fusion technology integrates the thermal radiation information of infrared images with the texture details of visible images to generate more informative fused images. However, existing methods often fail to distinguish salient objects from background regions, leading to detail suppression in salient regions due to global fusion strategies. This study presents a mask-guided latent low-rank representation fusion method to address this issue. First, the GrabCut algorithm is employed to extract a saliency mask, distinguishing salient regions from background regions. Then, latent low-rank representation (LatLRR) is applied to extract deep image features, enhancing key information extraction. In the fusion stage, a weighted fusion strategy strengthens infrared thermal information and visible texture details in salient regions, while an average fusion strategy improves background smoothness and stability. Experimental results on the TNO dataset demonstrate that the proposed method achieves superior performance on the SPI, MI, Qabf, PSNR, and EN metrics, effectively preserving salient target details while maintaining balanced background information. Compared to state-of-the-art fusion methods, our approach achieves more stable and visually consistent fusion results. The fusion code is available on GitHub at https://github.com/joyzhen1/Image (accessed on 15 January 2025).
Keywords: infrared and visible image fusion; latent low-rank representation; saliency mask extraction; weighted fusion strategy
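The two-branch fusion rule — weighted inside the saliency mask, averaged outside — can be sketched in a few lines. A minimal sketch under my own assumptions: the mask is a precomputed boolean map (the paper obtains it with GrabCut), and the 0.7 infrared emphasis is illustrative, not the authors' value:

```python
import numpy as np

def mask_guided_fuse(ir, vis, mask, ir_weight=0.7):
    """Weighted fusion inside the salient mask, plain averaging outside.

    ir, vis : float arrays of the same shape.
    mask    : boolean saliency map (True = salient region).
    """
    salient = ir_weight * ir + (1.0 - ir_weight) * vis   # emphasize thermal targets
    background = 0.5 * (ir + vis)                        # smooth, stable background
    return np.where(mask, salient, background)
```

Keeping the two strategies separate is what prevents the global averaging from washing out thermal targets, which is the failure mode the abstract describes.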
Structured-illumination reflectance imaging for the evaluation of microorganism contamination in pork: effects of spectral and imaging features on its prediction performance
20
Authors: Binjing Zhou, Xiaohua Liu, Yan Ge, Kang Tu, Jing Peng, Juan Francisco García-Martín, Jie Wu, Weijie Lan, Leiqing Pan. Food Science and Human Wellness, 2025, No. 2, pp. 683-691 (9 pages)
Structured-illumination reflectance imaging (SIRI) provides a new means for food quality detection. This work investigated the capability of the SIRI technique coupled with multivariate chemometrics to evaluate microbial contamination in pork inoculated with Pseudomonas fluorescens and Brochothrix thermosphacta during storage at different temperatures. The prediction performances based on different spectra and on the textural features of the direct component and amplitude component images demodulated from the SIRI pattern, as well as their data fusion, were comprehensively compared. Based on the full-wavelength spectrum (420-700 nm) of amplitude component images, orthogonal signal correction coupled with support vector machine regression provided the best predictions of the counts of P. fluorescens and B. thermosphacta in pork, with determination coefficients of prediction (R_p^2) of 0.870 and 0.906, respectively. Besides, the prediction models based on the amplitude component or direct component image textural features, and the data fusion models using spectral and textural features from both component images, could not significantly improve prediction accuracy. Consequently, SIRI can be further considered as a potential technique for the rapid evaluation of microbial contamination in pork meat.
Keywords: Pseudomonas fluorescens; Brochothrix thermosphacta; pork; structured-illumination reflectance imaging; data fusion
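The direct component and amplitude component images mentioned above are conventionally demodulated from three captures under sinusoidal illumination shifted by 2π/3. A minimal sketch, assuming the standard three-phase demodulation formulas rather than the authors' exact pipeline:

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Three-phase demodulation of structured-illumination images.

    i1, i2, i3 are captures under sinusoidal patterns phase-shifted by 2*pi/3.
    Returns the direct component (DC, conventional uniform-light reflectance)
    and the amplitude component (AC, which carries subsurface contrast).
    """
    dc = (i1 + i2 + i3) / 3.0
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    return dc, ac
```

On a synthetic scene with base reflectance a and modulation depth m, the formulas recover exactly dc = a and ac = m, which is easy to verify numerically.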