Lightweight convolutional neural networks (CNNs) have simple structures but struggle to comprehensively and accurately extract important semantic information from images. While attention mechanisms can enhance CNNs by learning distinctive representations, most existing spatial and hybrid attention methods focus on local regions with extensive parameters, making them unsuitable for lightweight CNNs. In this paper, we propose a self-attention mechanism tailored for lightweight networks, namely the brief self-attention module (BSAM). BSAM consists of the brief spatial attention (BSA) and advanced channel attention blocks. Unlike conventional self-attention methods with many parameters, our BSA block improves the performance of lightweight networks by effectively learning global semantic representations. Moreover, BSAM can be seamlessly integrated into lightweight CNNs for end-to-end training, maintaining the network's lightweight and mobile characteristics. We validate the effectiveness of the proposed method on image classification tasks using the Food-101, Caltech-256, and Mini-ImageNet datasets.
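The abstract does not spell out the internals of the advanced channel attention block; as a rough illustration of the squeeze-and-excitation style of channel attention that such blocks typically build on, here is a minimal pure-Python sketch. The weight matrices `w1`/`w2` and the bottleneck size are illustrative assumptions, not the authors' design:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps, w1, w2):
    # squeeze: global average pool each channel map to one scalar
    squeezed = [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
                for fmap in feature_maps]
    # excitation: C -> C//r -> C bottleneck, ReLU then sigmoid gating
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # reweight: scale every pixel of a channel by that channel's gate
    return [[[v * g for v in row] for row in fmap]
            for fmap, g in zip(feature_maps, gates)]
```

Because the gate is a single scalar per channel, the block adds only `2 * C * C // r` parameters, which is why channel attention (unlike dense spatial self-attention) stays affordable in lightweight networks.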
Detecting small forest fire targets in unmanned aerial vehicle (UAV) images is difficult, as flames typically cover only a very limited portion of the visual scene. This study proposes the Context-guided Compact Lightweight Network (CCLNet), an end-to-end lightweight model designed to detect small forest fire targets while ensuring efficient inference on devices with constrained computational resources. CCLNet employs a three-stage network architecture built around three key modules. The C3F-Convolutional Gated Linear Unit (C3F-CGLU) performs selective local feature extraction while preserving fine-grained high-frequency flame details. The Context-Guided Feature Fusion Module (CGFM) replaces plain concatenation with triplet-attention interactions to emphasize subtle flame patterns. The Lightweight Shared Convolution with Separated Batch Normalization Detection (LSCSBD) head reduces parameters through separated batch normalization while maintaining scale-specific statistics. We build TF-11K, an 11,139-image dataset combining 9,139 self-collected UAV images from subtropical forests and 2,000 re-annotated frames from the FLAME dataset. On TF-11K, CCLNet attains 85.8% mAP@0.5, 45.5% mean Average Precision (mAP)@[0.5:0.95], 87.4% precision, and 79.1% recall with 2.21 M parameters and 5.7 Giga Floating-point Operations (GFLOPs). The ablation study confirms that each module contributes to both accuracy and efficiency. Cross-dataset evaluation on DFS yields 77.5% mAP@0.5 and 42.3% mAP@[0.5:0.95], indicating good generalization to unseen scenes. These results suggest that CCLNet offers a practical balance between accuracy and speed for small-target forest fire monitoring with UAVs.
In this paper, we present a fast mode decomposition method for few-mode fibers, utilizing a lightweight neural network called MobileNetV3-Light. This method can quickly and accurately predict the amplitude and phase information of different modes, enabling us to fully characterize the optical field without the need for expensive experimental equipment. We train MobileNetV3-Light using simulated near-field optical field maps, and evaluate its performance on both simulated and reconstructed near-field optical field maps. To validate the effectiveness of this method, we conduct mode decomposition experiments on a few-mode fiber supporting six linear polarization (LP) modes (LP01, LP11e, LP11o, LP21e, LP21o, LP02). The results demonstrate a remarkable average correlation of 0.9995 between our simulated and reconstructed near-field light-field maps, and the mode decomposition speed is about 6 ms per frame, indicating powerful real-time processing capability. In addition, the proposed network model is compact, with a size of only 6.5 MB, making it well suited for deployment on portable mobile devices.
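Mode decomposition amounts to recovering the complex coefficients (amplitude and phase per mode) whose superposition reproduces the measured field, and the correlation figure quoted above is conventionally a normalized inner product between two fields. A toy sketch under that assumption, with 1-D complex vectors standing in for true LP mode profiles:

```python
import cmath

def superpose(modes, amplitudes, phases):
    # field = sum_k a_k * exp(i * phi_k) * mode_k  (toy 1-D complex field)
    coeffs = [a * cmath.exp(1j * p) for a, p in zip(amplitudes, phases)]
    n = len(modes[0])
    return [sum(c * m[i] for c, m in zip(coeffs, modes)) for i in range(n)]

def correlation(f, g):
    # normalized |<f, g>|: 1.0 means identical up to a global phase/scale
    inner = sum(a * b.conjugate() for a, b in zip(f, g))
    norm = (sum(abs(a) ** 2 for a in f) * sum(abs(b) ** 2 for b in g)) ** 0.5
    return abs(inner) / norm
```

For example, reconstructing a field with the correct amplitudes but a wrong relative phase between two modes drives this correlation below 1.0, which is why the metric is sensitive to the phase predictions, not just the amplitudes.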
Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, or task-specific deep learning approaches lacking generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1 M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement consistency losses, the model adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil / 31.50 for multi-coil) and the Lung Image Database Consortium and Image Database Resource Initiative dataset (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. The model's lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
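A measurement consistency loss, in its usual formulation, is the squared residual between the forward model applied to the reconstruction and the raw measurements, which is what lets fine-tuning proceed without ground-truth images. A minimal sketch with a toy linear acquisition operator; the matrix `A` here is an illustrative stand-in for the Fourier/Radon operators mentioned above, not the paper's implementation:

```python
def forward_model(A, x):
    # y = A x : apply a toy linear acquisition operator (rows of A) to image x
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def measurement_consistency_loss(A, x, y):
    # squared residual ||A x - y||^2 between predicted and actual measurements
    residual = [p - m for p, m in zip(forward_model(A, x), y)]
    return sum(r * r for r in residual)
```

The loss is zero exactly when the reconstruction explains the measurements, so minimizing it on unlabeled scans from a new modality pulls the model toward that modality's physics.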
As the use of facial attributes continues to expand, research into facial age estimation is also developing. Because face images are easily affected by factors such as illumination and occlusion, age estimation from faces is a challenging task. In view of the complexity of real-world environments and the limited computing ability of devices, this paper proposes a face age estimation algorithm based on a lightweight convolutional neural network. Building on the Soft Stagewise Regression Network (SSR-Net), this paper employs the Center-Symmetric Local Binary Pattern (CSLBP) method to obtain a feature image and then combines the face image and the feature image as network input data. Adding feature images to the convolutional neural network improves accuracy and increases the robustness of the network model. Experimental results on the IMDB-WIKI and MORPH 2 datasets show that the proposed lightweight convolutional neural network method reduces model complexity while increasing the accuracy of face age estimation.
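CSLBP compares center-symmetric pairs of neighbors rather than each neighbor against the center pixel, producing a compact 4-bit code per pixel. A minimal sketch for a single 3×3 patch; the clockwise neighbor ordering and threshold T = 0 follow the common formulation and are assumptions, not necessarily this paper's exact settings:

```python
def cslbp_code(patch, t=0.0):
    # patch: 3x3 grayscale neighborhood; neighbors listed clockwise from top-left
    n = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i in range(4):
        # compare each neighbor with its diametrically opposite one
        if n[i] - n[i + 4] > t:
            code |= 1 << i
    return code
```

With only 16 possible codes (versus 256 for classic LBP), the resulting feature image is cheap to compute and robust to monotonic illumination changes, which fits the lightweight-network setting.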
In the field of agricultural information, the identification and prediction of rice leaf disease have long been a focus of research, and deep learning (DL) is currently a hot topic in pattern recognition. Developing high-efficiency, high-quality, and low-cost automatic identification methods for rice diseases that can replace human experts is an important technical means of addressing this need. This paper focuses on the problem of the huge parameter count of conventional Convolutional Neural Network (CNN) models and proposes a recognition model that combines a multi-scale convolution module with a neural network based on the Visual Geometry Group (VGG) architecture. The accuracy and loss on the training and test sets are used to evaluate the performance of the model. The test accuracy of this model is 97.1%, an increase of 5.87% over VGG, while the memory requirement is 26.1 MB, only 1.6% of VGG's. Experimental results show that this model performs better in terms of accuracy, recognition speed, and memory footprint.
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance relative to the MobileNetV1 baseline.
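The parameter savings behind MobileNetV1-style layers are easy to verify by counting weights: a standard k×k convolution couples every input channel to every output channel, while a depthwise separable version splits this into one k×k filter per input channel plus a 1×1 pointwise mix. A quick sketch, with channel sizes chosen arbitrarily for illustration (biases and batch-norm parameters ignored):

```python
def standard_conv_params(c_in, c_out, k):
    # every output channel filters all input channels with a k x k kernel
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # one k x k filter per input channel, then a 1 x 1 pointwise convolution
    return c_in * k * k + c_in * c_out

std = standard_conv_params(128, 256, 3)        # 294912
dsc = depthwise_separable_params(128, 256, 3)  # 33920
print(f"reduction: {1 - dsc / std:.1%}")       # reduction: 88.5%
```

The ratio is roughly `1/c_out + 1/k²`, so the savings grow with both channel width and kernel size; dilating the depthwise filter (as in the DDSC layer above) enlarges the receptive field without adding any weights at all.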
In pursuit of cost-effective manufacturing, enterprises are increasingly adopting the practice of utilizing recycled semiconductor chips. To ensure consistent chip orientation during packaging, a circular marker on the front side is employed for pin alignment following successful functional testing. However, recycled chips often exhibit substantial surface wear, and identifying the relatively small marker proves challenging. Moreover, the complexity of generic target detection algorithms hampers seamless deployment. Addressing these issues, this paper introduces a lightweight YOLOv8s-based network tailored for detecting markings on recycled chips, termed Van-YOLOv8. Initially, to alleviate the influence of diminutive, low-resolution markings on the precision of deep learning models, we utilize an upscaling approach for enhanced resolution. This technique relies on the Super-Resolution Generative Adversarial Network with Extended Training (SRGANext), facilitating the reconstruction of high-fidelity images that align with input specifications. Subsequently, we replace the original YOLOv8s model's backbone feature extraction network with the lightweight VanillaNet, simplifying the branch structure to reduce network parameters. Finally, a Hybrid Attention Mechanism (HAM) is implemented to capture essential details from input images, improving feature representation while expediting model inference. Experimental results demonstrate that Van-YOLOv8 outperforms the original YOLOv8s on a recycled chip dataset in parameter count, computational complexity, detection precision, and speed, and compares favorably with several prevalent algorithms. The proposed approach proves promising for real-time detection of recycled chips in practical factory settings.
Accurate cloud classification plays a crucial role in aviation safety, climate monitoring, and localized weather forecasting. Current research has focused on machine learning techniques, particularly deep learning-based models, for cloud type identification. However, traditional approaches such as convolutional neural networks (CNNs) encounter difficulties in capturing global contextual information. In addition, they are computationally expensive, which restricts their usability in resource-limited environments. To tackle these issues, we present the Cloud Vision Transformer (CloudViT), a lightweight model that integrates CNNs with Transformers, enabling an effective balance between local and global feature extraction. Specifically, CloudViT comprises two innovative modules: Feature Extraction (E_Module) and Downsampling (D_Module). These modules significantly reduce the number of model parameters and the computational complexity while maintaining translation invariance and enhancing contextual comprehension. Overall, CloudViT contains 0.93×10^6 parameters, more than a tenfold reduction compared to the state-of-the-art (SOTA) model CloudNet. Comprehensive evaluations on the HBMCD and SWIMCAT datasets showcase the outstanding performance of CloudViT, with classification accuracies of 98.45% and 100%, respectively. Moreover, the efficiency and scalability of CloudViT make it an ideal candidate for deployment in mobile cloud observation systems, enabling real-time cloud image classification. The proposed hybrid architecture offers a promising approach for advancing ground-based cloud image classification, with significant potential for both optimizing performance and facilitating practical deployment.
The intersection of the Industrial Internet of Things (IIoT) and artificial intelligence (AI) has garnered ever-increasing attention and research interest. Nevertheless, the dilemma between the strictly resource-constrained nature of IIoT devices and the extensive resource demands of AI has not yet been fully addressed by a comprehensive solution. Building on the lightweight constructive neural network (LightGCNet) for developing fast learner models for IIoT, this article proposes ConGCNet, a convex geometric constructive neural network with a low-complexity control strategy, via convex optimization and matrix theory; it enhances the convergence rate and reduces computational consumption in comparison with LightGCNet. Firstly, a low-complexity control strategy is proposed to reduce computational consumption during the training of the hidden parameters. Secondly, a novel output-weight evaluation method based on convex optimization is proposed to guarantee the convergence rate. Finally, the universal approximation property of ConGCNet is proved under the low-complexity control strategy and the convex output-weight evaluation method. Simulation results on four benchmark datasets and a real-world ore grinding process demonstrate that ConGCNet effectively reduces computational consumption in the modelling process and improves the model's convergence rate.
Aiming at the potential information noise introduced during the generation of ghost feature maps in GhostNet, this paper proposes a novel lightweight neural network model called ResghostNet. The model constructs a Resghost Module by combining residual connections and Adaptive-SE blocks, which enhances the quality of the generated feature maps through direct propagation of the original input information and selection of important channels before the cheap operations. Specifically, ResghostNet introduces residual connections on top of the Ghost Module to optimize the information flow, and designs a weight self-attention mechanism combined with SE blocks to enhance the feature expression capability of the cheap operations. Experimental results on the ImageNet dataset show that, compared to GhostNet, ResghostNet achieves higher accuracy while reducing the number of parameters by 52%. Although the computational complexity increases, optimizing the GPU cache memory usage strategy makes the model's inference faster. ResghostNet is thus optimized in terms of both classification accuracy and parameter count, and shows great potential for edge computing devices.
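GhostNet's cheap operations generate s−1 "ghost" maps from each intrinsic map with a small depthwise filter; that is both where the parameter savings come from and, per the abstract above, where the noise that ResghostNet's residual path counteracts can enter. A sketch of the weight count under the standard Ghost Module formulation, with primary kernel `k`, cheap depthwise kernel `d`, and ratio `s` (the concrete channel sizes are illustrative):

```python
def ghost_module_params(c_in, c_out, k=1, d=3, s=2):
    # primary conv produces c_out // s intrinsic maps; depthwise d x d
    # cheap operations produce the remaining (s - 1) * c_out // s ghost maps
    m = c_out // s
    return c_in * m * k * k + (s - 1) * m * d * d

dense = 64 * 128 * 3 * 3                   # ordinary 3x3 conv: 73728 weights
ghost = ghost_module_params(64, 128, k=3)  # 37440, roughly dense / s
```

The depthwise term `(s - 1) * m * d * d` is tiny next to the primary conv, so the total shrinks by nearly a factor of `s`; a residual connection around the cheap operations adds no parameters at all, which is consistent with ResghostNet cutting parameters while raising accuracy.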
Lightweight neural networks are networks optimized to reduce resource consumption so that they can run efficiently in resource-constrained environments. Their training usually targets the best overall accuracy; in practice, however, certain classes of interest may suffer from low classification accuracy, even though their accuracy matters more to the user or application than that of other classes. To address this problem, a structural fine-tuning method for lightweight neural networks is proposed: the synaptic join method based on the sub-minimum value threshold (SMVT-SJ). The method uses a sub-minimum value selection strategy to set the weight threshold for new synapses, and adds new synapses across layers from the hidden layer to the target neurons in the output layer, thereby specifically improving the classification accuracy of the classes the user cares about. To screen for more effective new synapses, SMVT-SJ introduces a synapse evaluation procedure that assesses the performance of each candidate synapse according to the distribution of all possible appropriate weights. Experimental results on several datasets show that the method effectively improves the classification accuracy of specific target classes while keeping the overall accuracy from dropping noticeably, demonstrating good generalization and robustness.
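The sub-minimum threshold idea can be sketched as keeping every candidate synapse whose evaluated weight clears the second-smallest value in the candidate set. The candidate names and scores below are illustrative, and the paper's actual synapse evaluation (over the distribution of all appropriate weights) is more involved than this one-line selection:

```python
def sub_minimum_threshold(weights):
    # threshold = second-smallest candidate weight in the set
    return sorted(weights)[1]

def select_synapses(candidates):
    # keep cross-layer candidate synapses whose weight exceeds the threshold
    t = sub_minimum_threshold([w for _, w in candidates])
    return [name for name, w in candidates if w > t]

cands = [("h3->out1", 0.12), ("h7->out1", 0.55),
         ("h1->out1", 0.08), ("h5->out1", 0.31)]
```

Using the second-smallest value rather than the minimum guarantees at least one candidate is pruned while the strongest hidden-to-output connections for the target class survive.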
Funding (CCLNet study): funded by the Natural Science Foundation of Hunan Province (Grant No. 2025JJ80352) and the National Natural Science Foundation of China (Grant No. 32271879).
Funding (mode decomposition study): supported by the Scientific Research Fund of the Hunan Provincial Education Department of China (No. 22B0324) and the Natural Science Foundation of Hunan Province of China (No. 2020JJ5606).
Funding (face age estimation study): this work was funded by the foundation of the Liaoning Educational Committee under Grant No. 2019LNJC03.
Funding (rice leaf disease study): supported by National Key Research and Development Program sub-topics [2018YFF0213606-03 (Mu Y., Hu T.L., Gong H., Li S.J. and Sun Y.H.), http://www.most.gov.cn]; the Jilin Province Science and Technology Development Plan key research and development projects [20200402006NC (Mu Y., Hu T.L., Gong H. and Li S.J.), http://kjt.jl.gov.cn]; the science and technology support project for key industries in southern Xinjiang [2018DB001 (Gong H. and Li S.J.), http://kjj.xjbt.gov.cn]; and the key technology R&D project of the Changchun Science and Technology Bureau of Jilin Province [21ZGN29 (Mu Y., Bao H.P., Wang X.B.), http://kjj.changchun.gov.cn].
Funding (Van-YOLOv8 study): supported by the Liaoning Provincial Department of Education 2021 Annual Scientific Research Funding Program (Grant Nos. LJKZ0535, LJKZ0526); the 2021 Annual Comprehensive Reform of Undergraduate Education Teaching (Grant Nos. JGLX2021020, JCLX2021008); and the Graduate Innovation Fund of Dalian Polytechnic University (Grant No. 2023CXYJ13).
Funding: the Innovation and Development Special Project of China Meteorological Administration (CXFZ2022J038, CXFZ2024J035); the Sichuan Science and Technology Program (No. 2023YFQ0072); the Key Laboratory of Smart Earth (No. KF2023YB03-07); and the Automatic Software Generation and Intelligent Service Key Laboratory of Sichuan Province (CUIT-SAG202210).
Abstract: Accurate cloud classification plays a crucial role in aviation safety, climate monitoring, and localized weather forecasting. Current research has focused on machine learning techniques, particularly deep learning-based models, for cloud type identification. However, traditional approaches such as convolutional neural networks (CNNs) have difficulty capturing global contextual information. In addition, they are computationally expensive, which restricts their usability in resource-limited environments. To tackle these issues, we present the Cloud Vision Transformer (CloudViT), a lightweight model that integrates CNNs with Transformers, enabling an effective balance between local and global feature extraction. Specifically, CloudViT comprises two innovative modules: a Feature Extraction module (E_Module) and a Downsampling module (D_Module). These modules significantly reduce the number of model parameters and the computational complexity while maintaining translation invariance and enhancing contextual comprehension. Overall, CloudViT has 0.93×10^6 parameters, more than a tenfold reduction compared with the state-of-the-art (SOTA) model CloudNet. Comprehensive evaluations on the HBMCD and SWIMCAT datasets showcase the outstanding performance of CloudViT, which achieves classification accuracies of 98.45% and 100%, respectively. Moreover, the efficiency and scalability of CloudViT make it an ideal candidate for deployment in mobile cloud observation systems, enabling real-time cloud image classification. The proposed hybrid architecture offers a promising approach for advancing ground-based cloud image classification, with significant potential for both optimizing performance and facilitating practical deployment.
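Tenfold parameter reductions of this kind usually come from replacing dense convolutions with factorized ones; a back-of-the-envelope comparison illustrates the effect. The layer sizes here are illustrative, not CloudViT's actual configuration.

```python
def conv_params(c_in, c_out, k):
    # standard k x k convolution: one k*k*c_in kernel per output channel, plus bias
    return c_in * c_out * k * k + c_out

def separable_params(c_in, c_out, k):
    # depthwise k x k (one kernel per input channel) + pointwise 1x1 channel mixing
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

std = conv_params(64, 128, 3)       # 73856 parameters
sep = separable_params(64, 128, 3)  # 8960 parameters
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a 64-to-128-channel 3×3 layer, the factorized form is roughly 8× smaller; stacking such layers compounds the savings across the network.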
Abstract: The intersection of the Industrial Internet of Things (IIoT) and artificial intelligence (AI) has garnered ever-increasing attention and research interest. Nevertheless, the dilemma between the strictly resource-constrained nature of IIoT devices and the extensive resource demands of AI has not yet been fully resolved. Building on the lightweight constructive neural network (LightGCNet) for developing fast learner models for IIoT, this article proposes ConGCNet, a convex geometric constructive neural network with a low-complexity control strategy, derived via convex optimization and matrix theory; it enhances the convergence rate and reduces computational consumption in comparison with LightGCNet. Firstly, a low-complexity control strategy is proposed to reduce computational consumption during training of the hidden parameters. Secondly, a novel convex-optimization-based method for evaluating the output weights is proposed to guarantee the convergence rate. Finally, the universal approximation property of ConGCNet is proved using the low-complexity control strategy and the convex output-weight evaluation method. Simulation results on four benchmark datasets and a real-world ore grinding process demonstrate that ConGCNet effectively reduces computational consumption in the modelling process and improves the model's convergence rate.
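A minimal NumPy sketch of the constructive-network idea the abstract describes: hidden nodes are added one at a time, and after each addition the output weights are re-solved as a convex least-squares problem. The task, node count, and random parameter ranges are illustrative assumptions, not ConGCNet's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression task standing in for a benchmark dataset
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2

H = np.empty((200, 0))
errors = []
for _ in range(25):
    # constructive step: add one randomly parameterized hidden node
    w = rng.uniform(-2.0, 2.0, size=2)
    b = rng.uniform(-1.0, 1.0)
    H = np.column_stack([H, np.tanh(X @ w + b)])
    # evaluate all output weights at once: a convex least-squares problem
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    errors.append(np.linalg.norm(H @ beta - y))

# the residual is non-increasing: each new node can only enlarge the span of H
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(errors, errors[1:]))
```

Because the output-weight step is a convex problem with a closed-form solution, each constructive iteration is cheap, which is the efficiency property the abstract emphasizes.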
Funding: the Science and Technology Innovation Project (Grant No. ZZKY20222304).
Abstract: Aiming at the potential information noise introduced when GhostNet generates ghost feature maps, this paper proposes a novel lightweight neural network model called ResghostNet. The model constructs the Resghost Module by combining residual connections with Adaptive-SE Blocks, which improves the quality of the generated feature maps through direct propagation of the original input information and selection of important channels before the cheap operations. Specifically, ResghostNet introduces residual connections on top of the Ghost Module to optimize the information flow, and designs a weight self-attention mechanism combined with SE blocks to enhance the feature expression capability of the cheap operations. Experimental results on the ImageNet dataset show that, compared with GhostNet, ResghostNet achieves higher accuracy while reducing the number of parameters by 52%. Although the computational complexity increases, the model's inference becomes faster through an optimized usage strategy for GPU cache memory. ResghostNet is thus optimized in both classification accuracy and parameter count, and shows great potential for edge computing devices.
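A NumPy sketch of the Ghost idea that ResghostNet builds on: a few "primary" features are computed normally, the remaining "ghost" features are derived by a cheap transform, and the abstract's additions (a residual connection and SE-style channel selection before the cheap operation) are layered on top. All operations here are simplified scalar stand-ins for the real convolutions.

```python
import numpy as np

def se_gate(x):
    """Squeeze-and-Excitation-style gating: reweight channels by pooled response."""
    w = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2))))     # (C,)
    return x * w[:, None, None]

def resghost_block(x, ratio=2):
    """x: (C, H, W) with C even. Simplified stand-in for the Resghost Module."""
    c = x.shape[0]
    primary = x[: c // ratio] * 0.9            # stand-in for the primary convolution
    primary = se_gate(primary)                 # select important channels before cheap ops
    ghost = 0.5 * primary + 0.1                # stand-in for the cheap linear operation
    out = np.concatenate([primary, ghost], axis=0)   # back to (C, H, W)
    return out + x                             # residual: propagate the original input

feat = np.random.default_rng(1).random((8, 4, 4))
out = resghost_block(feat)
assert out.shape == feat.shape
```

The residual term is what lets the original input bypass the possibly noisy ghost-generation path, which is the abstract's stated motivation.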
Abstract: Lightweight neural networks are networks optimized to reduce resource consumption so that they can run efficiently in resource-constrained environments. Their training typically targets overall optimality; in practice, however, certain classes of interest may suffer from low classification accuracy, and for some users or applications the accuracy of these classes matters more than that of others. To address this problem, we propose a structural fine-tuning method for lightweight neural networks: the synaptic join method based on the sub-minimum value threshold (SMVT-SJ). The method uses a sub-minimum value selection strategy to set the weight threshold for new synapses, and adds new synapses across layers, from the hidden layer to the target output neurons, thereby specifically improving the classification accuracy of the classes the user cares about. To select more effective new synapses, SMVT-SJ introduces a synapse evaluation procedure that assesses each candidate synapse's performance according to the distribution of all feasible weights. Experimental results on several datasets show that the method effectively improves the classification accuracy of specific target classes while keeping overall accuracy from degrading noticeably, exhibiting good generalization and robustness.
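A minimal sketch of the sub-minimum thresholding step as described above: the threshold is taken as the second-smallest candidate magnitude, and candidate hidden-to-output synapses that clear it are kept. The candidate values here are illustrative; the paper's evaluation procedure over the feasible weight distribution is not reproduced.

```python
import numpy as np

def sub_minimum_threshold(values):
    """Return the second-smallest value (the 'sub-minimum') as the weight threshold."""
    return np.sort(np.asarray(values))[1]

# candidate weight magnitudes for new hidden-to-output synapses (illustrative numbers)
candidates = np.array([0.42, 0.05, 0.31, 0.12, 0.27])
tau = sub_minimum_threshold(np.abs(candidates))
keep = np.abs(candidates) >= tau      # synapses whose weights clear the threshold

print(tau)          # 0.12
print(keep.sum())   # 4: only the single weakest candidate is discarded
```

Choosing the second-smallest rather than the smallest value makes the threshold non-trivial (at least one candidate is always pruned) while remaining permissive, which fits the goal of boosting target classes without disturbing overall accuracy.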