Journal literature: 6 articles found
1. Multiscale parallel feature aggregation network with attention fusion (MPFAN-AF): A novel approach to cataract disease classification
Authors: Mohd Aquib Ansari, Shahnawaz Ahmad, Arvind Mewada. Medical Data Mining, 2025, Issue 4, pp. 17-28 (12 pages).
Background: Early and accurate diagnosis of cataracts, which rank among the leading preventable causes of blindness, is critical to securing positive outcomes for patients. Recently, eye image analyses have used deep learning (DL) approaches to automate cataract classification more precisely, leading to the development of the Multiscale Parallel Feature Aggregation Network with Attention Fusion (MPFAN-AF). Aimed at improving model performance, this approach combines multiscale feature extraction, parallel feature fusion, and attention-based fusion to sharpen the model's focus on the salient features that are crucial for detecting cataracts. Methods: Coarse-level features are captured by convolutional layers and refined through layered kernels of varying sizes. Parallel feature aggregation then captures the diverse representations of cataracts. The model was trained and tested on the Cataract Eye Dataset available on Kaggle, which contains 612 labelled eye images split proportionately between normal and cataractous (pathological) eyes. Results: The proposed model achieved a classification accuracy of 97.52%, exceeding that of traditional convolutional neural network (CNN) models, and performed strongly across classification tasks. Ablation studies confirmed that every component added value to the prediction process, with the attention fusion module contributing the most. Conclusion: The MPFAN-AF model is both efficient and interpretable, showing promise for integration into real-time mobile cataract screening systems. Standard performance indicators suggest a promising future for AI-based ophthalmology tools in remote settings that lack medical resources.
Keywords: cataract classification; deep learning; multiscale feature extraction; attention mechanism; medical image analysis
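For readers who want a concrete picture of the multiscale-plus-attention design described in the abstract, the following minimal PyTorch snippet illustrates one plausible reading of it: parallel convolution branches with different kernel sizes whose concatenated output is re-weighted by a learned channel-attention gate before classification. The branch widths, kernel sizes, and gating scheme are illustrative assumptions, not the authors' MPFAN-AF implementation.

```python
# Hypothetical sketch of multiscale parallel feature aggregation with attention fusion.
import torch
import torch.nn as nn

class MultiscaleAttentionFusion(nn.Module):
    """Parallel 3x3/5x5/7x7 branches whose concatenated output is re-weighted by a
    learned channel-attention vector before the classification head."""
    def __init__(self, in_ch: int = 3, branch_ch: int = 16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, k, padding=k // 2), nn.ReLU())
            for k in (3, 5, 7)                       # multiscale kernels
        ])
        fused_ch = branch_ch * 3
        self.attention = nn.Sequential(              # squeeze-and-excitation style gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused_ch, fused_ch, 1),
            nn.Sigmoid(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(fused_ch, 2))   # cataract vs. normal

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # parallel aggregation
        return self.head(feats * self.attention(feats))          # attention-weighted fusion

logits = MultiscaleAttentionFusion()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```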
2. Nonlinear frequency prediction and uncertainty analysis for fully clamped laminates by using a self-developed multi-scale neural networks system
Authors: Yuan LIU, Xuan ZHANG, Xibin CAO, Jinsheng GUO, Zhongxi SHAO, Qingyang DENG, Pengbo FU, Yaodong HOU, Haipeng CHEN. Chinese Journal of Aeronautics, 2025, Issue 9, pp. 225-250 (26 pages).
To improve the design accuracy and reliability of structures, this study solves for uncertain natural frequencies while accounting for geometric nonlinearity and structural uncertainty. Frequencies of a laminated plate with all four edges clamped (CCCC) are derived using Navier's method and Galerkin's method. The novelty of the current work is that the displacement field model of a CCCC plate with free midsurface (CCCC-2 plate) requires only three unknowns, compared with four or five in other published methods. The present analytical method is shown to be accurate and reliable by comparing its linear and nonlinear natural frequencies with other models available in the open literature. Furthermore, a novel method for analyzing the effects of the mean values and tolerance zones of uncertain structural parameters on random frequencies is proposed, based on a self-developed Multiscale Feature Extraction and Fusion Network (MFEFN) system. Compared with a direct Monte Carlo Simulation (MCS), the MFEFN-based procedure significantly reduces the computational burden while preserving accuracy. This research provides a method for calculating nonlinear natural frequencies under two boundary conditions and presents a surrogate model for predicting frequencies in accuracy analysis and design optimization.
Keywords: geometric nonlinearity; laminates; multiscale feature extraction and fusion networks (MFEFN); natural frequency; uncertainty analysis
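As an illustration of how a neural surrogate can replace direct Monte Carlo Simulation for frequency uncertainty propagation, the sketch below samples uncertain parameters inside their tolerance zones and pushes them through a small untrained network. The parameter set, network size, and sampling scheme are invented for the example; the actual MFEFN system is not reproduced here.

```python
# Illustrative sketch only: a tiny neural surrogate standing in for the MFEFN system,
# used to propagate parameter uncertainty to a predicted natural frequency.
import torch
import torch.nn as nn

# Hypothetical mapping (E1, E2, G12, ply angle) -> first natural frequency.
surrogate = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))

def sample_parameters(n, mean, tol):
    """Uniform samples inside the tolerance zone around the mean values."""
    mean, tol = torch.tensor(mean), torch.tensor(tol)
    return mean + (2 * torch.rand(n, len(mean)) - 1) * tol

# Monte-Carlo-style propagation through the (already trained) surrogate instead of
# repeatedly solving the nonlinear plate equations.
params = sample_parameters(10_000, mean=[140.0, 9.0, 5.0, 45.0], tol=[7.0, 0.5, 0.3, 1.0])
with torch.no_grad():
    freqs = surrogate(params)
print(f"mean = {freqs.mean().item():.3f}, std = {freqs.std().item():.3f}")
# Note: with an untrained surrogate these statistics are meaningless; the point is the workflow.
```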
3. Efficient 3D Biomedical Image Segmentation by Parallelly Multiscale Transformer−CNN Aggregation Network
Authors: Wei Liu, Yuxiao He, Tiantian Man, Fulin Zhu, Qiaoliang Chen, Yaqi Huang, Xuyu Feng, Bin Li, Ying Wan, Jian He, Shengyuan Deng. Chemical & Biomedical Imaging, 2025, Issue 8, pp. 522-533 (12 pages).
Accurate and automated segmentation of 3D biomedical images is a sophisticated imperative in clinical diagnosis, imaging-guided surgery, and prognosis judgment. Although the burgeoning of deep learning technologies has fostered smart segmentation models, capturing global and local features both successively and simultaneously remains challenging, and this capability is essential for an exact and efficient imaging assay. To this end, a segmentation solution dubbed the mixed parallel shunted transformer (MPSTrans) is developed here, built around 3DMPST blocks in a U-form framework. It enables not only comprehensive feature capture and multiscale slice synchronization but also deep supervision in the decoder, which facilitates the extraction of hierarchical representations. On an unpublished colon cancer data set, this model achieved a marked increase in Dice similarity coefficient (DSC) and a 1.718 mm decrease in Hausdorff distance at 95% (HD95), alongside a 56.7% reduction in computational load measured in giga floating-point operations per second (GFLOPs). Meanwhile, MPSTrans outperforms other mainstream methods (Swin UNETR, UNETR, nnU-Net, PHTrans, and 3D U-Net) on three public multiorgan (aorta, gallbladder, kidney, liver, pancreas, spleen, stomach, etc.) and multimodal (CT, PET-CT, and MRI) data sets: medical segmentation decathlon (MSD) brain tumor, multi-atlas labeling beyond the cranial vault (BCV), and the automated cardiac diagnosis challenge (ACDC), underscoring its adaptability. These results reflect the potential of MPSTrans to advance the state of the art in biomedical image analysis and to offer a robust tool for enhanced diagnostic capacity.
Keywords: 3D biomedical image segmentation; shunted transformer; convolutional neural networks; parallel architecture; multiscale feature extraction
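The parallel "transformer branch plus CNN branch" idea described in the abstract can be illustrated with the toy 3D block below; the token length, channel counts, and 1x1x1 fusion are assumptions chosen for brevity and do not reproduce the MPSTrans/3DMPST design.

```python
# Minimal, hypothetical sketch of parallel transformer + CNN aggregation on a 3D volume.
import torch
import torch.nn as nn

class ParallelTransformerCNNBlock(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU())   # local features
        self.attn = nn.MultiheadAttention(ch, num_heads=4, batch_first=True)   # global features
        self.fuse = nn.Conv3d(2 * ch, ch, 1)                                   # 1x1x1 fusion

    def forward(self, x):                        # x: (B, C, D, H, W)
        local = self.cnn(x)
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, D*H*W, C) voxel tokens
        global_, _ = self.attn(tokens, tokens, tokens)
        global_ = global_.transpose(1, 2).reshape(b, c, d, h, w)
        return self.fuse(torch.cat([local, global_], dim=1))

out = ParallelTransformerCNNBlock()(torch.randn(1, 16, 8, 16, 16))
print(out.shape)   # torch.Size([1, 16, 8, 16, 16])
```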
4. RF-Net: Unsupervised Low-Light Image Enhancement Based on Retinex and Exposure Fusion (Cited by 2)
Authors: Tian Ma, Chenhui Fu, Jiayi Yang, Jiehui Zhang, Chuyang Shang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 1103-1122 (20 pages).
Low-light image enhancement methods have limitations in addressing issues such as color distortion, lack of vibrancy, and uneven light distribution, and they often require paired training data. To address these issues, we propose a two-stage unsupervised low-light image enhancement algorithm called the Retinex and Exposure Fusion Network (RFNet), which overcomes the over-enhancement of high dynamic ranges and the under-enhancement of low dynamic ranges seen in existing enhancement algorithms. By training with unpaired low-light and regular-light images, the algorithm better handles the challenges posed by complex real-world environments. In the first stage, we design a multi-scale feature extraction module based on Retinex theory that extracts details and structural information at different scales to generate high-quality illumination and reflection images. In the second stage, an exposure image generator built on a camera response function acquires exposure images containing more dark-region features, and the generated images are fused with the original input to complete the low-light enhancement. Experiments demonstrate the effectiveness and rationality of each module designed in this paper. The method reconstructs the details of contrast and color distribution, outperforms current state-of-the-art methods on both qualitative and quantitative metrics, and shows excellent performance in the real world.
Keywords: low-light image enhancement; multiscale feature extraction module; exposure generator; exposure fusion
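To make the two stages in the abstract concrete, here is a hedged NumPy illustration of the underlying ideas: a crude Retinex split of a low-light image into illumination and reflectance, a simple camera-response-style exposure generator, and a naive fusion of the two. The box-blur illumination estimate, the brightening curve, and the 50/50 fusion weights are textbook stand-ins, not the RFNet modules.

```python
# Hedged illustration of Retinex decomposition + exposure generation + fusion.
import numpy as np

def estimate_illumination(img, k=15):
    """Very rough illumination map: per-pixel channel max followed by a box blur."""
    lum = img.max(axis=2)
    pad = np.pad(lum, k // 2, mode="edge")
    out = np.zeros_like(lum)
    for i in range(lum.shape[0]):
        for j in range(lum.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return np.clip(out, 1e-3, 1.0)

def crf_exposure(img, ratio=3.0, beta=0.95, gamma=0.8):
    """Simplified camera-response-style mapping that brightens by an exposure ratio."""
    return np.clip((beta * ratio ** gamma) * img ** gamma, 0.0, 1.0)

low = np.random.rand(64, 64, 3) * 0.2                       # stand-in low-light image in [0, 0.2]
illum = estimate_illumination(low)
reflectance = low / illum[..., None]                         # Retinex model: I = R * L
exposed = crf_exposure(low)                                  # synthetic well-exposed image
enhanced = 0.5 * np.clip(reflectance, 0, 1) + 0.5 * exposed  # naive exposure fusion
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```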
5. Convolutional Neural Network Based on Spatial Pyramid for Image Classification (Cited by 2)
Authors: Gaihua Wang, Meng Lu, Tao Li, Guoliang Yuan, Wenzhou Liu. Journal of Beijing Institute of Technology (EI, CAS), 2018, Issue 4, pp. 630-636 (7 pages).
A novel convolutional neural network based on a spatial pyramid is proposed for image classification. The network exploits image features through a spatial pyramid representation. First, it extracts global features from the original image; then grids at different levels are used to extract feature maps from different convolutional layers. Inspired by the spatial pyramid, the new network contains two parts. One part is a standard convolutional neural network composed of alternating convolution and subsampling layers. In the other, those convolution layers are average-pooled over the grids to obtain feature maps, each of which is flattened into a feature vector; these vectors are then concatenated sequentially into a single feature vector that is passed to the fully connected layer. The resulting feature vector benefits from both the final and the earlier convolution layers, while the grid size adjusts the weight of the feature maps and improves the recognition efficiency of the network. Experimental results demonstrate that this model improves accuracy and applicability compared with the traditional model.
Keywords: convolutional neural network; multiscale feature extraction; image classification
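A compact sketch of the spatial-pyramid idea follows: feature maps taken from successive convolution stages are average-pooled over grids of different sizes, flattened, and concatenated into one vector for the fully connected classifier. The channel counts and grid sizes (4x4 and 2x2) are assumptions chosen for brevity rather than the paper's configuration.

```python
# Hypothetical spatial-pyramid CNN: pooled features from each stage feed one classifier.
import torch
import torch.nn as nn

class SpatialPyramidCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # one pooling grid per stage: a coarser grid for the earlier, larger feature map
        self.grids = nn.ModuleList([nn.AdaptiveAvgPool2d(4), nn.AdaptiveAvgPool2d(2)])
        self.fc = nn.Linear(16 * 4 * 4 + 32 * 2 * 2, num_classes)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        vec = torch.cat([self.grids[0](f1).flatten(1), self.grids[1](f2).flatten(1)], dim=1)
        return self.fc(vec)

print(SpatialPyramidCNN()(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 10])
```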
6. VMMAO-YOLO: an ultra-lightweight and scale-aware detector for real-time defect detection of avionics thermistor wire solder joints
Authors: Xiaoqi YANG, Xingyue LIU, Qian WU, Guojun WEN, Shuang MEI, Guanglan LIAO, Tielin SHI. Frontiers of Mechanical Engineering (SCIE, CSCD), 2024, Issue 3, pp. 77-92 (16 pages).
The quality of exposed avionics solder joints has a significant impact on the stable operation of in-orbit spacecraft. Nevertheless, previously reported inspection methods for multi-scale solder joint defects generally suffer from low accuracy and slow detection speed. Herein, a novel real-time detector, VMMAO-YOLO, is demonstrated based on a variable multi-scale concurrency and multi-depth aggregation network (VMMANet) backbone and a "one-stop" global information gather-distribute (OS-GD) module. Combined with infrared thermography, it achieves fast and high-precision detection of both internal and external solder joint defects. Specifically, VMMANet is designed for efficient multi-scale feature extraction and mainly comprises variable multi-scale feature concurrency (VMC) and multi-depth feature aggregation-alignment (MAA) modules. VMC extracts multi-scale features via multiple fixed-size and deformable convolutions, while MAA aggregates and aligns multi-depth features of the same order for feature inference. This allows low-level features with more spatial detail to be transmitted depth-wise, enabling the deeper network to selectively utilize the preceding inference information. VMMANet replaces inefficient high-density deep convolution by increasing the width of the intermediate feature levels, leading to a salient decline in parameters. OS-GD is developed for effective feature extraction, aggregation, and distribution, further enhancing the network's ability to gather and deploy global information. On a self-made solder joint image data set, VMMAO-YOLO achieves a mean average precision (mAP@0.5) of 91.6%, surpassing all mainstream YOLO-series models. Moreover, VMMAO-YOLO has a body size of merely 19.3 MB and a detection speed of up to 119 frames per second, far superior to prevalent YOLO-series detectors.
Keywords: defect detection of solder joints; VMMAO-YOLO; ultra-lightweight and high-performance; multiscale feature extraction; VMC and MAA modules; OS-GD
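The "variable multi-scale concurrency" (VMC) notion from the abstract can be approximated as several fixed-size convolutions running in parallel and fused by a 1x1 convolution. In the sketch below a dilated convolution stands in for the deformable convolution the paper mentions, so this is an approximation of the idea rather than the VMMAO-YOLO implementation.

```python
# Hypothetical VMC-style block: parallel multi-scale branches fused by a 1x1 convolution.
import torch
import torch.nn as nn

class VMCBlock(nn.Module):
    def __init__(self, in_ch: int = 32, branch_ch: int = 16):
        super().__init__()
        self.fixed = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in (1, 3, 5)
        ])
        self.dilated = nn.Conv2d(in_ch, branch_ch, 3, padding=2, dilation=2)  # deformable stand-in
        self.fuse = nn.Conv2d(4 * branch_ch, in_ch, 1)    # 1x1 fusion keeps the channel count

    def forward(self, x):
        branches = [conv(x) for conv in self.fixed] + [self.dilated(x)]
        return self.fuse(torch.cat(branches, dim=1)) + x  # residual connection

print(VMCBlock()(torch.randn(1, 32, 64, 64)).shape)       # torch.Size([1, 32, 64, 64])
```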