Journal Articles
11,678 articles found
1. Multi-modality hierarchical fusion network for lumbar spine segmentation with magnetic resonance images (Cited by 1)
Authors: Han Yan, Guangtao Zhang, Wei Cui, Zhuliang Yu. Control Theory and Technology, EI CSCD, 2024, Issue 4, pp. 612-622 (11 pages)
For the analysis of spinal and disc diseases, automated tissue segmentation of the lumbar spine is vital. Due to the continuous and concentrated location of the target, the abundance of edge features, and individual differences, conventional automatic segmentation methods perform poorly. Given the success of deep learning in medical image segmentation over the past few years, it has been applied to this task in a number of ways. The multi-scale and multi-modal features of lumbar tissues, however, are rarely explored by deep learning methods. Because of the limited availability of medical images, it is crucial to effectively fuse data from various acquisition modes during model training to alleviate the problem of insufficient samples. In this paper, we propose a novel multi-modality hierarchical fusion network (MHFN) that improves lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images. An adaptive group fusion module (AGFM) is introduced to fuse features from the various modes and extract potentially valuable cross-modality features. Furthermore, to combine cross-modality features from low to high levels, we design a hierarchical fusion structure based on the AGFM. Experimental results on multi-modality MR images of the lumbar spine show that the AGFM is more effective than other feature fusion methods. To further assess segmentation accuracy, we compare our network with baseline fusion structures. Compared to these baselines (input-level: 76.27%, layer-level: 78.10%, decision-level: 79.14%), our network segments fractured vertebrae more accurately (85.05%).
Keywords: Lumbar spine segmentation; Deep learning; Multi-modality fusion; Feature fusion
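The AGFM's internals are not given in the abstract, but the general idea of group-wise fusion of two modality feature maps can be sketched. Everything below (the channel interleaving, the group count, the 1x1 grouped convolution) is an illustrative assumption, not the paper's module:

```python
# Hedged sketch of group-wise fusion of two MR-modality feature maps.
# Channels are interleaved so every group of the 1x1 grouped convolution
# mixes matching channels from both modalities.
import torch
import torch.nn as nn

class GroupFusion(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        # 2*channels in, channels out; both must be divisible by groups
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1,
                             groups=groups)

    def forward(self, m1: torch.Tensor, m2: torch.Tensor) -> torch.Tensor:
        b, c, h, w = m1.shape
        # interleave channels: [m1_0, m2_0, m1_1, m2_1, ...]
        x = torch.stack([m1, m2], dim=2).reshape(b, 2 * c, h, w)
        return self.mix(x)

t1w = torch.randn(1, 64, 32, 32)   # e.g., T1-weighted features
t2w = torch.randn(1, 64, 32, 32)   # e.g., T2-weighted features
fused = GroupFusion(64)(t1w, t2w)  # -> (1, 64, 32, 32)
```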
2. BDMFuse: Multi-scale network fusion for infrared and visible images based on base and detail features
Authors: SI Hai-Ping, ZHAO Wen-Rui, LI Ting-Ting, LI Fei-Tao, Fernando Bacao, SUN Chang-Xia, LI Yan-Ling. 《红外与毫米波学报》 (Journal of Infrared and Millimeter Waves), Peking University Core Journal, 2025, Issue 2, pp. 289-298 (10 pages)
The fusion of infrared and visible images should emphasize the salient targets of the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder that extract low-frequency and high-frequency information from the image. Since this extraction may miss some information, a compensation encoder is proposed to supplement it. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
Keywords: infrared image; visible image; image fusion; encoder-decoder; multi-scale features
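The base/detail split that this family of encoders learns has a classical analogue: a low-pass filter yields the base (low-frequency) layer and the residual is the detail layer. A minimal sketch, assuming grayscale inputs ir.png and vis.png and an arbitrary 31x31 Gaussian kernel; the learned encoders in BDMFuse are not this filter:

```python
# Classical two-scale base/detail decomposition plus a naive recombination.
import cv2
import numpy as np

def two_scale_decompose(img: np.ndarray, ksize: int = 31):
    # base = low-pass (Gaussian) layer; detail = residual high frequencies
    base = cv2.GaussianBlur(img.astype(np.float32), (ksize, ksize), 0)
    detail = img.astype(np.float32) - base
    return base, detail

ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
ir_base, ir_detail = two_scale_decompose(ir)
vis_base, vis_detail = two_scale_decompose(vis)

# average the bases, keep the stronger detail at each pixel
fused = (ir_base + vis_base) / 2 + np.where(
    np.abs(ir_detail) > np.abs(vis_detail), ir_detail, vis_detail)
```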
3. Transformers for Multi-Modal Image Analysis in Healthcare
Authors: Sameera V Mohd Sagheer, Meghana K H, P M Ameer, Muneer Parayangat, Mohamed Abbas. Computers, Materials & Continua, 2025, Issue 9, pp. 4259-4297 (39 pages)
Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography, Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of a patient's health status. Each of these methods contributes unique diagnostic insights, enhancing the overall assessment of the patient's condition. Nevertheless, the amalgamation of data from multiple modalities presents difficulties due to disparities in resolution, data collection methods, and noise levels. While traditional models like Convolutional Neural Networks (CNNs) excel in single-modality tasks, they struggle with multi-modal complexities, lacking the capacity to model global relationships. This research presents a novel approach for examining multi-modal medical imagery using a transformer-based system. The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across modalities. Additionally, it shows resilience to variations in noise and image quality, making it adaptable for real-time clinical use. To address the computational hurdles linked to transformer models, particularly in real-time clinical applications in resource-constrained environments, several optimization techniques have been integrated to boost scalability and efficiency. Initially, a streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness. Model pruning, quantization, and knowledge distillation were applied to reduce the parameter count and enhance inference speed. Furthermore, efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of traditional self-attention operations. For further deployment optimization, hardware-aware acceleration strategies, including TensorRT and ONNX-based model compression, ensure efficient execution on edge devices. These optimizations allow the approach to function effectively in real-time clinical settings, even in environments with limited resources. Future research directions include integrating non-imaging data to facilitate personalized treatment and enhancing computational efficiency for resource-limited deployments. This study highlights the transformative potential of transformer models in multi-modal medical imaging, offering improvements in diagnostic accuracy and patient care outcomes.
Keywords: multi-modal image analysis; medical imaging; deep learning; image segmentation; disease detection; multi-modal fusion; Vision Transformers (ViTs); precision medicine; clinical decision support
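Of the optimizations listed, post-training dynamic quantization is the most mechanical to demonstrate. A hedged sketch with PyTorch's documented quantize_dynamic API; the toy encoder stands in for the paper's streamlined architecture and its dimensions are placeholders:

```python
# Post-training dynamic quantization: nn.Linear weights become int8,
# activations stay float, and the quantized model keeps the same interface.
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=4).eval()

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

tokens = torch.randn(1, 196, 256)   # (batch, patch tokens, embed dim)
with torch.no_grad():
    out = quantized(tokens)          # smaller, faster; same output shape
```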
4. MMGC-Net: Deep neural network for classification of mineral grains using multi-modal polarization images
Authors: Jun Shu, Xiaohai He, Qizhi Teng, Pengcheng Yan, Haibo He, Honggang Chen. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 6, pp. 3894-3909 (16 pages)
The multi-modal characteristics of mineral particles play a pivotal role in enhancing classification accuracy, which is critical for a profound understanding of the Earth's composition and for the effective exploitation and utilization of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, thereby limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and challenges in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. Initially, MMGC-Net employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. Subsequently, a cross-polarized intra-modal feature fusion module is designed to refine the spatiotemporal features extracted from the cross-polarized sequence images. Ultimately, an inter-modal feature fusion module integrates the two types of modal features to enhance classification precision. Quantitative and qualitative experimental results indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net demonstrates marked superiority in mineral particle multi-modal feature learning and in four classification evaluation metrics. It also demonstrates better stability than existing models.
Keywords: Mineral particles; multi-modal image classification; Shared parameters; Feature fusion; Spatiotemporal features
5. HaIVFusion: Haze-Free Infrared and Visible Image Fusion
Authors: Xiang Gao, Yongbiao Gao, Aimei Dong, Jinyong Cheng, Guohua Lv. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 10, pp. 2040-2055 (16 pages)
The purpose of infrared and visible image fusion is to create a single image containing the texture details and significant object information of the source images, particularly in challenging environments. However, existing image fusion algorithms are generally suited to normal scenes. In hazy scenes, much of the texture information in the visible image is hidden; the results of existing methods are dominated by infrared information, resulting in a lack of texture detail and poor visual quality. To address these difficulties, we propose a haze-free infrared and visible fusion method, termed HaIVFusion, which can eliminate the influence of haze and obtain richer texture information in the fused image. Specifically, we first design a scene information restoration network (SIRNet) to mine the masked texture information in visible images. Then, a denoising fusion network (DFNet) is designed to integrate the features extracted from infrared and visible images and remove the influence of residual noise as much as possible. In addition, we use a color consistency loss to reduce the color distortion caused by haze. Furthermore, we publish a dataset of hazy scenes for infrared and visible image fusion to promote research in extreme scenes. Extensive experiments show that HaIVFusion produces fused images with richer texture details and higher contrast in hazy scenes, and achieves better quantitative results than state-of-the-art image fusion methods, even when those are combined with state-of-the-art dehazing methods.
Keywords: Deep learning; dehazing; image fusion; infrared image; visible image
6. An Infrared-Visible Image Fusion Network with Channel-Switching for Low-Light Object Detection
Authors: Tianzhe Jiao, Yuming Chen, Xiaoyue Feng, Chaopeng Guo, Jie Song. Computers, Materials & Continua, 2025, Issue 11, pp. 2681-2700 (20 pages)
Visible-infrared object detection leverages the day-night stable object perception of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images. However, the inherent differences in the imaging mechanisms of the visible and infrared modalities make effective cross-modal fusion challenging. Furthermore, constrained by the physical characteristics of sensors and thermal diffusion effects, infrared images generally suffer from blurred object contours and missing details, making it difficult to extract object features effectively. To address these issues, we propose an infrared-visible image fusion network that realizes multimodal information fusion of infrared and visible images through a carefully designed multiscale fusion strategy. First, we design an adaptive gray-radiance enhancement (AGRE) module to strengthen the detail representation in infrared images, improving their usability in complex lighting scenarios. Next, we introduce a channel-spatial feature interaction (CSFI) module, which achieves efficient complementarity between the RGB and infrared (IR) modalities via dynamic channel switching and a spatial attention mechanism. Finally, we propose a multi-scale enhanced cross-attention fusion (MSECA) module, which optimizes the fusion of multi-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features on a global scale, thereby enhancing the expressiveness of the fused features. Experiments on the KAIST, M3FD, and FLIR datasets demonstrate that our method delivers outstanding performance in daytime and nighttime scenarios. On the KAIST dataset, the miss rate drops to 5.99%, and further to 4.26% in night scenes. On the FLIR and M3FD datasets, it achieves AP50 scores of 79.4% and 88.9%, respectively.
Keywords: Infrared-visible image fusion; channel switching; low-light object detection; cross-attention fusion
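The CSFI module is not specified in the abstract, but the channel-switching idea, exchanging selected channels between RGB and IR feature maps, can be illustrated. The per-channel scoring rule and the 25% switch ratio below are assumptions for the sketch, not the paper's module:

```python
# Swap the weakest-scoring channels between two modality feature maps.
import torch

def channel_switch(feat_rgb: torch.Tensor, feat_ir: torch.Tensor,
                   ratio: float = 0.25):
    # feat_*: (B, C, H, W); score channels by mean absolute activation
    b, c, _, _ = feat_rgb.shape
    k = max(1, int(c * ratio))
    score_rgb = feat_rgb.abs().mean(dim=(0, 2, 3))          # (C,)
    idx = torch.topk(score_rgb, k, largest=False).indices   # weakest k
    out_rgb, out_ir = feat_rgb.clone(), feat_ir.clone()
    out_rgb[:, idx], out_ir[:, idx] = feat_ir[:, idx], feat_rgb[:, idx]
    return out_rgb, out_ir

rgb = torch.randn(2, 64, 80, 80)
ir = torch.randn(2, 64, 80, 80)
rgb_sw, ir_sw = channel_switch(rgb, ir)
```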
7. LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement
Authors: Song Qian, Guzailinuer Yiming, Ping Li, Junfei Yang, Yan Xue, Shuping Zhang. Computers, Materials & Continua, 2025, Issue 3, pp. 4069-4091 (23 pages)
Infrared and visible light image fusion integrates feature information from two different modalities into a fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail from the scene, and relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method improves on the MobileOne Block, using an Edge-MobileOne Block embedded with the Sobel operator to perform feature extraction and downsampling on the source images. The intermediate features obtained at different scales are then fused by a cross-modal attention fusion module. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is used to enhance both the infrared and visible light images, guiding the network to learn low-light enhancement capabilities through an enhancement loss. Upon completion of training, the Edge-MobileOne Block is optimized into a direct-connection structure similar to MobileNetV1 through structural reparameterization, effectively reducing computational resource consumption. Finally, in extensive experimental comparisons, our method achieved improvements of 4.6%, 40.5%, 156.9%, 9.2%, and 98.6% in the evaluation metrics, including Standard Deviation (SD), Visual Information Fidelity (VIF), Entropy (EN), and Spatial Frequency (SF), compared to the best results of the compared algorithms, while being only 1.5 ms/it slower than the fastest method.
Keywords: Infrared images; image fusion; low-light enhancement; feature extraction; computational resource optimization
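CLAHE itself is a standard, documented OpenCV routine, so the enhancement step the abstract describes can be shown directly; clipLimit and tileGridSize below are common defaults, not values from the paper:

```python
# CLAHE pre-enhancement of both source images, as the abstract describes
# for guiding the enhancement loss.
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
ir_enh = clahe.apply(cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE))
vis_enh = clahe.apply(cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE))
```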
8. Image Mosaic Method of Capsule Endoscopy Intestinal Wall Based on Improved Weighted Fusion
Authors: MA Ting, WU Jianfang, HU Feng, NIE Wei, LIU Youxin. Journal of Shanghai Jiaotong University (Science), 2025, Issue 3, pp. 535-544 (10 pages)
There is still a dearth of systematic study on image stitching techniques for the natural tubular structure of the intestines, and traditional stitching techniques apply poorly to endoscopic images with deep scenes. A method is therefore developed to reconstruct the intestinal wall in two dimensions. Because intestinal features are faint and usually arranged in a circle, the normalized Laplacian algorithm is used to enhance each image, which is then transformed into polar coordinates in order to extract the new image segments of the current image relative to the previous one. An improved weighted fusion algorithm is then used to splice the segment images sequentially. The experimental results demonstrate that the proposed approach can improve image clarity and reduce noise while preserving the information content of intestinal images. In addition, the seamless transition between the final portions of the panoramic image demonstrates that stitching traces have been removed.
Keywords: capsule endoscopy; image stitching; intestinal wall; image enhancement; improved weighted fusion
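Two of the named steps, unwrapping the circular intestinal view into polar coordinates and blending overlapping strips with distance weights, can be sketched with documented OpenCV/NumPy calls. The center, radius, and linear ramp weights are placeholders, not the paper's improved weighting:

```python
# Polar unwrap of a ring-shaped frame, plus a simple ramp-weighted blend.
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape
center = (w / 2.0, h / 2.0)
radius = min(h, w) // 2
# rows = 360 angle samples, columns = radius samples
polar = cv2.warpPolar(img, (radius, 360), center, radius,
                      cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

def weighted_blend(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # linear ramp across the overlap: a dominates on the left, b on the right
    alpha = np.linspace(1.0, 0.0, a.shape[1])[None, :]
    return (alpha * a.astype(np.float32)
            + (1 - alpha) * b.astype(np.float32)).astype(np.uint8)
```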
9. PromptFusion: Harmonized Semantic Prompt Learning for Infrared and Visible Image Fusion
Authors: Jinyuan Liu, Xingyuan Li, Zirui Wang, Zhiying Jiang, Wei Zhong, Wei Fan, Bin Xu. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 3, pp. 502-515 (14 pages)
The goal of infrared and visible image fusion (IVIF) is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene. However, existing methods struggle to handle modal disparities effectively, resulting in visual degradation of the details and prominent targets of the fused images. To address these challenges, we introduce PromptFusion, a prompt-based approach that harmoniously combines multi-modality images under the guidance of semantic prompts. First, to better characterize the features of different modalities, a contourlet autoencoder is designed to separate and extract the high-/low-frequency components of each modality, thereby improving the extraction of fine details and textures. We also introduce a prompt learning mechanism using positive and negative prompts, leveraging Vision-Language Models to improve the fusion model's understanding and identification of targets in multi-modality images, leading to improved performance in downstream tasks. Furthermore, we employ bi-level asymptotic convergence optimization, which simplifies the intricate non-singleton, non-convex bi-level problem into a series of convergent, differentiable single optimization problems that can be effectively resolved through gradient descent. Our approach advances the state-of-the-art, delivering superior fusion quality and boosting the performance of related downstream tasks. Project page: https://github.com/hey-it-s-me/PromptFusion
Keywords: Bi-level optimization; image fusion; infrared and visible image; prompt learning
10. Visible and near-infrared image fusion based on information complementarity
Authors: Zhuo Li, Shiliang Pu, Mengqi Ji, Feng Zeng, Bo Li. CAAI Transactions on Intelligence Technology, 2025, Issue 1, pp. 193-206 (14 pages)
Images with complementary spectral information can be recorded using image sensors that capture both the visible and near-infrared spectrum. The fusion of visible and near-infrared (NIR) images aims to enhance the quality of images acquired by video monitoring systems for the ease of user observation and data processing. Unfortunately, current fusion algorithms produce artefacts and colour distortion since they cannot exploit spectrum properties and lack information complementarity. Therefore, an information complementarity fusion (ICF) model is designed based on physical signals. To separate high-frequency noise from important information in distinct frequency layers, the authors first extract texture-scale and edge-scale layers using a two-scale filter. Second, the difference map between the visible and near-infrared images is filtered using an extended-DoG filter to produce the initial visible-NIR complementary weight map. A guide map is then generated by processing the near-infrared image with night adjustment. The final complementarity weight map is subsequently derived via an arctan function mapping using the guide map and the initial weight map. Finally, fusion images are generated with the complementarity weight maps. The experimental results demonstrate that the proposed approach outperforms the state-of-the-art both in avoiding artificial colours and in effectively exploiting information complementarity.
Keywords: color distortion; image fusion; information complementarity; low light; near-infrared
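The weight-map construction described above (DoG filtering of the visible-NIR difference, then an arctan squashing) can be roughed out as follows; the two Gaussian sigmas and the gain k are illustrative assumptions, and the paper's guide-map step is omitted:

```python
# Difference-of-Gaussians on the NIR-visible difference map, squashed
# to (0, 1) with arctan and used as a per-pixel fusion weight.
import cv2
import numpy as np

vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

diff = nir - vis
dog = (cv2.GaussianBlur(diff, (0, 0), 2.0)     # band-pass via two sigmas
       - cv2.GaussianBlur(diff, (0, 0), 6.0))
k = 0.05                                       # assumed gain
weight = 0.5 + np.arctan(k * dog) / np.pi      # maps to (0, 1)
fused = weight * nir + (1 - weight) * vis
```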
11. Multi-Scale Feature Fusion and Advanced Representation Learning for Multi Label Image Classification
Authors: Naikang Zhong, Xiao Lin, Wen Du, Jin Shi. Computers, Materials & Continua, 2025, Issue 3, pp. 5285-5306 (22 pages)
Multi-label image classification is challenging due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific, precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on a single-scale deep feature, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific, precise representations at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across feature scales, improving generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between image regions, mitigating incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific, precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of the proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Keywords: image classification; multi-label; multi-scale; attention mechanisms; feature fusion
12. A Mask-Guided Latent Low-Rank Representation Method for Infrared and Visible Image Fusion
Authors: Kezhen Xie, Syed Mohd Zahid Syed Zainal Ariffin, Muhammad Izzad Ramli. Computers, Materials & Continua, 2025, Issue 7, pp. 997-1011 (15 pages)
Infrared and visible image fusion integrates the thermal radiation information of infrared images with the texture details of visible images to generate more informative fused images. However, existing methods often fail to distinguish salient objects from background regions, leading to detail suppression in salient regions due to global fusion strategies. This study presents a mask-guided latent low-rank representation fusion method to address this issue. First, the GrabCut algorithm is employed to extract a saliency mask, distinguishing salient regions from background regions. Then, latent low-rank representation (LatLRR) is applied to extract deep image features, enhancing key information extraction. In the fusion stage, a weighted fusion strategy strengthens infrared thermal information and visible texture details in salient regions, while an average fusion strategy improves background smoothness and stability. Experimental results on the TNO dataset demonstrate that the proposed method achieves superior performance on the SPI, MI, Qabf, PSNR, and EN metrics, effectively preserving salient target details while maintaining balanced background information. Compared to state-of-the-art fusion methods, our approach achieves more stable and visually consistent fusion results. The fusion code is available on GitHub at: https://github.com/joyzhen1/Image (accessed on 15 January 2025).
Keywords: Infrared and visible image fusion; latent low-rank representation; saliency mask extraction; weighted fusion strategy
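GrabCut is a documented OpenCV routine, so the saliency-mask step can be shown concretely. The initializing rectangle below is a placeholder that would come from a detector or user input in practice, and the LatLRR stage is not reproduced here:

```python
# GrabCut saliency mask: 1 for (probable) foreground, 0 for background.
import cv2
import numpy as np

img = cv2.imread("scene.png")                    # 3-channel input
mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)              # internal GMM buffers
fgd = np.zeros((1, 65), np.float64)
rect = (50, 50, 200, 200)                        # assumed ROI (x, y, w, h)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

saliency = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    1, 0).astype(np.uint8)
```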
13. An EnFCM remote sensing image forest land extraction method based on PCA multi-feature fusion
Authors: ZHU Shengyang, WANG Xiaopeng, WEI Tongyi, FAN Weiwei, SONG Yubo. Journal of Measurement Science and Instrumentation, 2025, Issue 2, pp. 216-223 (8 pages)
The traditional EnFCM (enhanced fuzzy C-means) algorithm considers only grey-scale features in image segmentation, giving unsatisfactory results when used for remote sensing woodland image segmentation and extraction. An EnFCM remote sensing forest land extraction method based on PCA multi-feature fusion is therefore proposed. First, histogram equalization is applied to improve image contrast. Second, the texture and edge features of the image are extracted, and a multi-feature fused pixel image is generated using the PCA technique; the fused feature is then used as a constraint to measure the difference between pixels instead of a single grey-scale feature. Finally, an improved feature distance metric calculates the similarity between pixel points and the cluster center to complete the cluster segmentation. Experimental results show an error between 1.5% and 4.0% relative to the forested area hand-drawn by experts, demonstrating high-accuracy segmentation and extraction.
Keywords: image segmentation; forest land extraction; PCA transform; multi-feature fusion; EnFCM algorithm
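The per-pixel PCA fusion step can be sketched with standard tools: stack a grey-level, edge, and texture response per pixel and project onto the first principal component. The specific feature operators below are illustrative choices, not necessarily the paper's:

```python
# Per-pixel multi-feature fusion via PCA: three feature maps collapse
# into one fused map that a clustering stage could use in place of grey level.
import cv2
import numpy as np
from sklearn.decomposition import PCA

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
gray = img.astype(np.float32)
edges = cv2.Laplacian(gray, cv2.CV_32F)                       # edge response
texture = cv2.blur(np.abs(gray - cv2.blur(gray, (5, 5))), (5, 5))  # local MAD

feats = np.stack([gray, edges, texture], axis=-1).reshape(-1, 3)
fused = PCA(n_components=1).fit_transform(feats).reshape(img.shape)
```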
14. VSMI²-PANet: Versatile Scale-Malleable Image Integration and Patch Wise Attention Network With Transformer for Lung Tumour Segmentation Using Multi-Modal Imaging Techniques
Authors: Nayef Alqahtani, Arfat Ahmad Khan, Rakesh Kumar Mahendran, Muhammad Faheem. CAAI Transactions on Intelligence Technology, 2025, Issue 5, pp. 1376-1393 (18 pages)
Lung cancer (LC) is a major cancer that accounts for high mortality rates worldwide. Doctors utilise many imaging modalities to identify lung tumours and their severity at early stages, and machine learning (ML) and deep learning (DL) methodologies are now utilised for robust detection and prediction of lung tumours. Recently, multi-modal imaging has emerged as a robust technique for lung tumour detection by combining various imaging features. To this end, we propose a novel multi-modal imaging technique named versatile scale-malleable image integration and patch wise attention network (VSMI²-PANet), which adopts three imaging modalities: computed tomography (CT), magnetic resonance imaging (MRI), and single photon emission computed tomography (SPECT). The designed model accepts CT and MRI inputs and passes them to the VSMI² module, which is composed of three sub-modules: an image cropping module, a scale-malleable convolution layer (SMCL), and a PANet module. CT and MRI images pass through the image cropping module in parallel to crop meaningful image patches, which are provided to the SMCL module. The SMCL module is composed of adaptive convolutional layers that investigate those patches in parallel while preserving spatial information. The output from the SMCL is then fused and provided to the PANet module, which examines the fused patches by analysing the height, width, and channels of each image patch. As a result, it outputs high-resolution spatial attention maps indicating the locations of suspicious tumours. These attention maps are provided as input to the backbone module, which uses a light wave transformer (LWT) to segment lung tumours into three classes: normal, benign, and malignant. The LWT also accepts SPECT images as input to capture variations precisely. The performance of the proposed model is validated using several metrics, including accuracy, precision, recall, F1-score, and the AUC curve, and the results show that the proposed work outperforms existing approaches.
Keywords: computational intelligence; computer vision; data fusion; deep learning; feature extraction; image segmentation
15. Multi-Label Image Classification Model Based on Multiscale Fusion and Adaptive Label Correlation
Authors: YE Jihua, JIANG Lu, XIAO Shunjie, ZONG Yi, JIANG Aiwen. Journal of Shanghai Jiaotong University (Science), 2025, Issue 5, pp. 889-898 (10 pages)
At present, research on multi-label image classification mainly focuses on exploring the correlation between labels to improve classification accuracy. In existing methods, however, label correlation is computed from the statistical information of the data; such correlation is global, depends on the dataset, and is not suitable for all samples. Moreover, in the process of extracting image features, the information of small objects in the image is easily lost, resulting in low classification accuracy for small objects. To this end, this paper proposes a multi-label image classification model based on multiscale fusion and adaptive label correlation. The main idea is as follows: first, feature maps at multiple scales are fused to enhance the feature information of small objects; semantic guidance then decomposes the fused feature map into feature vectors for each category; finally, the correlation between categories in the image is adaptively mined through the self-attention mechanism of a graph attention network, yielding feature vectors containing category-related information for the final classification. The mean average precision of the model on the two public datasets VOC 2007 and MS COCO 2014 reached 95.6% and 83.6%, respectively, and most indicators are better than those of the latest existing methods.
Keywords: image classification; label correlation; graph attention network; small object; multi-scale fusion
16. MMIF: Multimodal Medical Image Fusion Network Based on Multi-Scale Hybrid Attention
Authors: Jianjun Liu, Yang Li, Xiaoting Sun, Xiaohui Wang, Hanjiang Luo. Computers, Materials & Continua, 2025, Issue 11, pp. 3551-3568 (18 pages)
Multimodal image fusion plays an important role in image analysis and applications. Multimodal medical image fusion combines contrast features from two or more input imaging modalities to represent the fused information in a single image. One of the critical clinical applications of medical image fusion is fusing anatomical and functional modalities for rapid diagnosis of malignant tissues. This paper proposes a multimodal medical image fusion network (MMIF-Net) based on multiscale hybrid attention. The method first decomposes the original image into its low-rank and salient parts. Then, to utilize features at different scales, a multiscale mechanism with three filters of different sizes extracts features in the encoding network. A hybrid attention module is also introduced to capture more image detail. Finally, the fused images are reconstructed by the decoding network. We conducted experiments with clinical brain computed tomography/magnetic resonance images. The experimental results show that the proposed method outperforms other advanced fusion methods.
Keywords: Medical image fusion; multiscale mechanism; hybrid attention module; encoded network
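The "three filters of different sizes" idea maps naturally onto parallel convolution branches. An illustrative PyTorch take; kernel sizes and channel counts are assumptions, not MMIF-Net's actual configuration:

```python
# Three parallel convolutions with different receptive fields,
# concatenated channel-wise to form a multi-scale feature map.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, cin: int, cout: int):
        super().__init__()
        # padding=k//2 keeps spatial dimensions identical across branches
        self.branches = nn.ModuleList([
            nn.Conv2d(cin, cout, k, padding=k // 2) for k in (3, 5, 7)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([b(x) for b in self.branches], dim=1)

x = torch.randn(1, 16, 64, 64)
y = MultiScaleBlock(16, 8)(x)   # -> (1, 24, 64, 64)
```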
17. DeepFissureNets-Infrared-Visible: Infrared visible image fusion for boosting mining-induced ground fissure semantic segmentation
Authors: Jihong Guo, Yixin Zhao, Chunwei Ling, Kangning Zhang, Shirui Wang, Liangchen Zhao. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 11, pp. 6932-6950 (19 pages)
High-intensity underground mining has caused severe ground fissures, resulting in environmental degradation; prompt detection is therefore crucial to mitigate their environmental impact. However, accurate segmentation of fissures in the complex and variable scenes of visible imagery is challenging. Our method, DeepFissureNets-Infrared-Visible (DFN-IV), highlights the potential of incorporating visible images with infrared information for improved ground fissure segmentation. DFN-IV adopts a two-step process. First, a fusion network trained with a dual adversarial learning strategy fuses infrared and visible imagery, providing an integrated representation of fissure targets that combines structural information with textural detail. Second, the fused images are processed by a fine-tuned segmentation network, which leverages knowledge injection to learn the distinctive characteristics of fissure targets effectively. Furthermore, an infrared-visible ground fissure dataset (IVGF) is built from an aerial investigation of the Daliuta Coal Mine. Extensive experiments reveal that our approach provides superior accuracy over the single-modality image strategies employed in five segmentation models. Notably, DeeplabV3+ tested with DFN-IV improves pixel accuracy and Intersection over Union (IoU) by 9.7% and 11.13%, respectively, compared to using solely visible images. Moreover, our method surpasses six state-of-the-art image fusion methods, achieving a 5.28% improvement in pixel accuracy and a 1.57% increase in IoU compared to the second-best method. Ablation studies further validate the significance of the dual adversarial learning module and the integrated knowledge injection strategy. By leveraging DFN-IV, we aim to quantify the impacts of mining-induced ground fissures, facilitating the implementation of intelligent safety measures.
Keywords: Ground fissure segmentation; Mining-induced ground hazards; Deep learning; Generative adversarial network; image fusion
18. Low-Light Image Enhancement Based on Wavelet Local and Global Feature Fusion Network
Authors: Shun Song, Xiangqian Jiang, Dawei Zhao. Journal of Contemporary Educational Research, 2025, Issue 11, pp. 209-214 (6 pages)
A wavelet-based local and global feature fusion network (LAGN) is proposed for low-light image enhancement, aiming to enhance image details and restore colors in dark areas. This study addresses three key issues in low-light image enhancement: enhancing low-light images with LAGN to preserve image details and colors; extracting image edge information via the wavelet transform to enhance details; and extracting local and global image features through convolutional neural networks and a Transformer to improve contrast. Comparisons with state-of-the-art methods on two datasets verify that LAGN achieves the best performance in terms of detail, brightness, and contrast.
Keywords: image enhancement; Feature fusion; Wavelet transform; Convolutional Neural Network (CNN); Transformer
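The wavelet step can be illustrated with a single-level 2D DWT, whose horizontal/vertical/diagonal sub-bands carry the edge information the network fuses back in; the 'haar' wavelet and the input file name are illustrative assumptions:

```python
# Single-level 2D discrete wavelet transform with PyWavelets: the three
# high-frequency sub-bands together form a simple edge map.
import cv2
import numpy as np
import pywt

img = cv2.imread("low_light.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
ll, (lh, hl, hh) = pywt.dwt2(img, "haar")    # approximation + detail bands
edge_map = np.abs(lh) + np.abs(hl) + np.abs(hh)
```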
19. Multimodal medical image fusion based on mask optimization and parallel attention mechanism
Authors: DI Jing, LIANG Chan, GUO Wenqing, LIAN Jing. Journal of Measurement Science and Instrumentation, 2025, Issue 1, pp. 26-36 (11 pages)
Medical image fusion technology is crucial for improving the detection accuracy and treatment efficiency of diseases, but existing fusion methods suffer from blurred texture details, low contrast, and an inability to fully extract the fused image information. A multimodal medical image fusion method based on mask optimization and a parallel attention mechanism is therefore proposed. First, the method converts the image into a binary mask and constructs a contour feature map to maximize the contour information, together with a triple-path network for extracting and optimizing texture detail features. Second, a contrast enhancement module and a detail preservation module are proposed to enhance the overall brightness and texture details of the image. A parallel attention mechanism is then constructed using channel features and spatial feature changes to fuse the images and enhance their salient information. Finally, a decoupling network composed of residual networks optimizes the information between the fused image and the source images to reduce information loss. Compared with nine recent state-of-the-art methods, our method improves seven objective evaluation indicators by 6%-31%, showing that it obtains fusion results with clearer texture details, higher contrast, and smaller pixel differences between the fused and source images. It is superior to the comparison algorithms in both subjective and objective terms.
Keywords: multimodal medical image fusion; binary mask; contrast enhancement module; parallel attention mechanism; decoupling network
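The first step, a binary mask plus a contour feature map, has a straightforward classical sketch using Otsu thresholding and contour tracing; the paper's triple-path network and learned optimization are not reproduced here:

```python
# Binary mask via Otsu thresholding, then a contour feature map drawn
# from the traced external contours.
import cv2
import numpy as np

img = cv2.imread("mri.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
contour_map = np.zeros_like(img)
cv2.drawContours(contour_map, contours, -1, 255, 1)
```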
20. Fusion method for water depth data from multiple sources based on image recognition
Authors: Huiyu HAN, Feng ZHOU. Journal of Oceanology and Limnology, 2025, Issue 4, pp. 1093-1105 (13 pages)
Considering the difficulty of integrating the depth points of nautical charts of the East China Sea into a global high-precision Grid Digital Elevation Model (Grid-DEM), we proposed a "Fusion based on Image Recognition (FIR)" method for multi-source depth data fusion, and used it to merge an electronic nautical chart dataset (referred to as Chart2014 in this paper) with a global digital elevation dataset (referred to as Globalbath2002). Compared to the traditional fusion of two datasets by direct combination and interpolation, the new Grid-DEM formed by FIR better represents the data characteristics of Chart2014, reduces calculation difficulty, and is more intuitive. The choice of interpolation method in FIR and the influence of the "exclusion radius R" parameter are also discussed. FIR avoids complex calculations of spatial distances among points from different sources; instead, it uses a spatial exclusion map to perform one-step screening based on the exclusion radius R, which greatly improves the fusion of a reliable dataset. The fusion results of different experiments were analyzed statistically with root mean square error and mean relative error, showing that interpolation methods based on Delaunay triangulation are more suitable for fusing the nautical chart depths of China, and that factors such as the point density distribution of the multi-source data, accuracy, interpolation method, and terrain conditions should be fully considered when selecting the exclusion radius R.
Keywords: water depth; fusion method; Grid Digital Elevation Model (Grid-DEM); image recognition; Delaunay triangulation
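The screening-plus-interpolation idea can be sketched with SciPy: keep background grid points only if they fall outside the exclusion radius R of any chart sounding (here a KD-tree query rather than the paper's image-based exclusion map), then build a Delaunay-based linear interpolator. All arrays below are synthetic placeholders:

```python
# Exclusion-radius screening of background points around trusted soundings,
# then Delaunay-based linear interpolation of the merged point set.
import numpy as np
from scipy.spatial import cKDTree
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
chart_xy = rng.uniform(0, 100, (500, 2))        # high-trust chart soundings
chart_z = rng.uniform(-80.0, -5.0, 500)
grid_xy = np.mgrid[0:100:200j, 0:100:200j].reshape(2, -1).T
grid_z = rng.uniform(-80.0, -5.0, len(grid_xy))  # coarse background model

R = 2.0                                          # exclusion radius
dist, _ = cKDTree(chart_xy).query(grid_xy, k=1)  # nearest sounding distance
keep = dist > R                                  # drop shadowed grid points

pts = np.vstack([chart_xy, grid_xy[keep]])
vals = np.concatenate([chart_z, grid_z[keep]])
dem = LinearNDInterpolator(pts, vals)            # Delaunay under the hood
depth_at = dem(25.0, 60.0)                       # query the merged Grid-DEM
```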