Traditional data-driven fault diagnosis methods depend on expert experience to manually extract effective fault features from signals, which has certain limitations. Conversely, deep learning techniques have gained prominence as a central focus of research in the field of fault diagnosis, owing to their strong fault feature extraction ability and end-to-end fault diagnosis efficiency. Recently, by exploiting the respective advantages of the convolutional neural network (CNN) and the Transformer in local and global feature extraction, research on combining the two has demonstrated promise in the field of fault diagnosis. However, the cross-channel convolution mechanism in CNNs and the self-attention calculations in Transformers contribute to excessive complexity in the cooperative model. This complexity results in high computational costs and limited industrial applicability. To tackle these challenges, this paper proposes a lightweight CNN-Transformer named SEFormer for rotating machinery fault diagnosis. First, a separable multiscale depthwise convolution block is designed to extract and integrate multiscale feature information from different channel dimensions of vibration signals. Then, an efficient self-attention block is developed to capture critical fine-grained features of the signal from a global perspective. Finally, experimental results on a planetary gearbox dataset and a motor roller bearing dataset show that the proposed framework balances robustness, generalization, and lightweight design compared with recent state-of-the-art fault diagnosis models based on CNNs and Transformers. This study presents a feasible strategy for developing a lightweight rotating machinery fault diagnosis framework aimed at economical deployment.
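The abstract does not give implementation details, but as a rough sketch of the kind of separable multiscale depthwise convolution block it describes, the following PyTorch snippet runs parallel depthwise convolutions with different kernel sizes over a 1-D vibration signal and merges them with a pointwise convolution. Kernel sizes, channel counts, and class names are illustrative assumptions, not taken from SEFormer.

```python
import torch
import torch.nn as nn

class SeparableMultiscaleDWConv(nn.Module):
    """Illustrative sketch: parallel depthwise convolutions with different
    kernel sizes on 1-D vibration signals, followed by a pointwise (1x1)
    convolution that mixes channels. Kernel sizes and channel counts are
    assumptions, not taken from the SEFormer paper."""
    def __init__(self, channels, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        ])
        # Pointwise convolution integrates the concatenated multiscale features.
        self.pointwise = nn.Conv1d(channels * len(kernel_sizes), channels, 1)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.GELU()

    def forward(self, x):            # x: (batch, channels, length)
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.norm(self.pointwise(feats)))

# Example: a batch of 8 signals with 32 channels and 1024 samples each.
block = SeparableMultiscaleDWConv(channels=32)
out = block(torch.randn(8, 32, 1024))
print(out.shape)  # torch.Size([8, 32, 1024])
```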
Micro-expression (ME) recognition is a complex task that requires advanced techniques to extract informative features from facial expressions. Numerous deep neural networks (DNNs) with convolutional structures have been proposed. However, shallow convolutional neural networks often outperform deeper models in mitigating overfitting, particularly on small datasets. Still, many of these methods rely on a single feature for recognition, resulting in an insufficient ability to extract highly effective features. To address this limitation, this paper introduces an Improved Dual-stream Shallow Convolutional Neural Network based on an Extreme Gradient Boosting Algorithm (IDSSCNN-XgBoost) for ME recognition. The proposed method uses a dual-stream architecture in which motion vectors (temporal features) are extracted with TV-L1 optical flow and subtle changes (spatial features) are amplified via Eulerian Video Magnification (EVM). These features are processed by the IDSSCNN, with an attention mechanism applied to refine the extracted features. The outputs are then fused, concatenated, and classified using the XgBoost algorithm. This approach significantly improves recognition accuracy by leveraging the strengths of both temporal and spatial information, supported by the robust classification power of XgBoost. The proposed method is evaluated on three publicly available ME databases: the Chinese Academy of Sciences Micro-expression Database (CASME II), the Spontaneous Micro-Expression Database (SMIC-HS), and the Spontaneous Actions and Micro-Movements (SAMM) database. Experimental results indicate that the proposed model achieves outstanding results compared with recent models. The accuracies are 79.01%, 69.22%, and 68.99% on CASME II, SMIC-HS, and SAMM, and the F1-scores are 75.47%, 68.91%, and 63.84%, respectively. The proposed method also offers operational efficiency and low computational time.
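To make the fusion step concrete, here is a minimal hedged sketch of how two shallow CNN streams (an optical-flow stream and a motion-magnified frame stream) could be fused and handed to an XGBoost classifier. The layer sizes, image resolution, and class count are assumptions, not the IDSSCNN-XgBoost configuration.

```python
import numpy as np
import torch
import torch.nn as nn
import xgboost as xgb

class ShallowStream(nn.Module):
    """Illustrative shallow CNN stream (layer sizes are assumptions)."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)   # (batch, 32)

# Two streams: optical-flow input (2 channels) and magnified frames (3 channels).
flow_stream, evm_stream = ShallowStream(2), ShallowStream(3)

flow, frames = torch.randn(64, 2, 64, 64), torch.randn(64, 3, 64, 64)
labels = np.random.randint(0, 3, size=64)    # toy 3-class labels

# Concatenate the two feature vectors and hand them to XGBoost for classification.
with torch.no_grad():
    fused = torch.cat([flow_stream(flow), evm_stream(frames)], dim=1).numpy()

clf = xgb.XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(fused, labels)
print(clf.predict(fused[:5]))
```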
Hyperspectral image (HSI) classification is crucial for numerous remote sensing applications. Traditional deep learning methods may miss pixel relationships and context, leading to inefficiencies. This paper introduces the spectral band graph convolutional and attention-enhanced CNN joint network (SGCCN), a novel approach that harnesses spectral band graph convolutions to capture long-range relationships, uses the local perception of attention-enhanced multi-level convolutions for local spatial features, and employs a dynamic attention mechanism to enhance feature extraction. The SGCCN integrates spectral and spatial features through a self-attention fusion network, significantly improving classification accuracy and efficiency. The proposed method outperforms existing techniques, demonstrating its effectiveness in handling the challenges associated with HSI data.
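As a hedged illustration of a graph convolution over spectral bands, the sketch below treats each band as a node and builds an adjacency matrix from band correlations so that information can propagate between distant bands. The adjacency construction, threshold, and dimensions are assumptions, not taken from the SGCCN paper.

```python
import torch
import torch.nn as nn

class SpectralBandGraphConv(nn.Module):
    """Illustrative sketch of a graph convolution over spectral bands:
    a row-normalized adjacency built from band similarity propagates
    information between bands before a shared linear projection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, band_feats, adj):        # band_feats: (bands, in_dim)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        norm_adj = adj / deg                   # simple row normalization
        return torch.relu(self.linear(norm_adj @ band_feats))

# Toy example: 100 spectral bands, each described by 64 pixel features.
bands = torch.randn(100, 64)
corr = torch.corrcoef(bands)                   # band-to-band similarity
adj = (corr.abs() > 0.5).float()               # threshold into an adjacency matrix
layer = SpectralBandGraphConv(64, 32)
print(layer(bands, adj).shape)                 # torch.Size([100, 32])
```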
Forecasting energy demand is essential for optimizing energy generation and effectively predicting power system needs. Recently, many researchers have developed various models on tabular datasets to enhance the effectiveness of demand prediction, including neural networks, machine learning, deep learning, and advanced architectures such as CNNs and LSTMs. However, research on CNN models has struggled to provide reliable outcomes due to insufficient dataset sizes, repeated investigations, and inappropriate baseline selection. To address these challenges, we propose a Tabular data-based Lightweight Convolutional Neural Network (TLCNN) model for predicting energy demand. It frames the problem as a regression task that effectively captures complex data trends for accurate forecasting. The BanE-16 dataset is preprocessed using normalization techniques for categorical and numerical data before training the model. The proposed approach dynamically selects relevant features through a two-dimensional convolutional structure that improves adaptability. The model's performance is evaluated using MSE, MAE, and accuracy metrics. Experimental results show that TLCNN achieves a 10.89% lower MSE than traditional ML algorithms, demonstrating superior predictive capability. Additionally, TLCNN's lightweight structure enhances generalization while reducing computational costs, making it suitable for real-world energy forecasting tasks. This study contributes to energy informatics by introducing an optimized deep-learning framework that improves demand prediction by ensuring robustness and adaptability for tabular data.
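A minimal sketch of the general idea follows, assuming a 16-feature tabular input reshaped into a 4x4 grid and passed through a small two-dimensional convolutional stack trained with an MSE objective; none of the sizes or layer choices are taken from the TLCNN paper.

```python
import torch
import torch.nn as nn

class TabularCNNRegressor(nn.Module):
    """Illustrative sketch: normalized tabular features are reshaped into a
    small 2-D grid and passed through a lightweight convolutional stack that
    regresses the demand value. The 4x4 grid and layer sizes are assumptions."""
    def __init__(self, n_features=16, grid=(4, 4)):
        super().__init__()
        assert grid[0] * grid[1] == n_features
        self.grid = grid
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * grid[0] * grid[1], 1),
        )

    def forward(self, x):                      # x: (batch, n_features)
        x = x.view(-1, 1, *self.grid)          # treat features as a 1-channel image
        return self.net(x).squeeze(1)

model = TabularCNNRegressor()
x, y = torch.randn(32, 16), torch.randn(32)
loss = nn.MSELoss()(model(x), y)               # MSE objective, as in the abstract
loss.backward()
```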
Lightweight deep learning models are increasingly required in resource-constrained environments such as mobile devices and the Internet of Medical Things (IoMT). Multi-head convolution with channel attention can facilitate learning activations relevant to different kernel sizes within a multi-head convolutional layer. Therefore, this study investigates the capability of novel lightweight models incorporating residual multi-head convolution with channel attention (ResMHCNN) blocks to classify medical images. We introduced three novel lightweight deep learning models (BT-Net, LCC-Net, and BC-Net) that use the ResMHCNN block as their backbone. These models were cross-validated and tested on three publicly available medical image datasets: a brain tumor dataset from Figshare consisting of T1-weighted magnetic resonance imaging slices of meningioma, glioma, and pituitary tumors; the LC25000 dataset, which includes microscopic images of lung and colon cancers; and the BreaKHis dataset, containing benign and malignant breast microscopy images. The lightweight models achieved accuracies of 96.9% for 3-class brain tumor classification using BT-Net and 99.7% for 5-class lung and colon cancer classification using LCC-Net. For 2-class breast cancer classification, BC-Net achieved an accuracy of 96.7%. The parameter counts for the proposed lightweight models (LCC-Net, BC-Net, and BT-Net) are 0.528, 0.226, and 1.154 million, respectively. The presented lightweight models, featuring ResMHCNN blocks, may be effectively employed for accurate medical image classification. In the future, these models might be tested for viability in resource-constrained systems such as mobile devices and IoMT platforms.
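The following hedged sketch shows one plausible form of a residual multi-head convolution block with channel attention: parallel convolutional heads with different kernel sizes, a squeeze-and-excitation-style channel gate, and a residual connection. Kernel sizes, head widths, and the reduction ratio are assumptions, not the ResMHCNN design.

```python
import torch
import torch.nn as nn

class ResMHCNNBlock(nn.Module):
    """Illustrative sketch: multi-head convolution (different kernel sizes),
    an SE-style channel attention gate, and a residual connection.
    All widths and the reduction ratio are assumptions."""
    def __init__(self, channels, kernel_sizes=(1, 3, 5), reduction=4):
        super().__init__()
        head_out = channels // len(kernel_sizes)
        self.heads = nn.ModuleList([
            nn.Conv2d(channels, head_out, k, padding=k // 2)
            for k in kernel_sizes
        ])
        self.attn = nn.Sequential(                     # channel attention gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(head_out * len(kernel_sizes), channels // reduction, 1),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, head_out * len(kernel_sizes), 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(head_out * len(kernel_sizes), channels, 1)

    def forward(self, x):
        feats = torch.cat([head(x) for head in self.heads], dim=1)
        feats = feats * self.attn(feats)               # reweight channels
        return torch.relu(x + self.project(feats))     # residual connection

block = ResMHCNNBlock(channels=36)
print(block(torch.randn(2, 36, 64, 64)).shape)         # torch.Size([2, 36, 64, 64])
```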
Wi-Fi indoor localization based on RSSI (Received Signal Strength Indication) location fingerprints is now widely used in various location-based services. However, fingerprint localization accuracy suffers from the severe fluctuation of RSSI signals and struggles to meet the requirements of high-precision location services. To overcome this difficulty, a localization method is proposed that combines virtual access point (AP) technology with a high-accuracy CNN (Convolutional Neural Network) discriminative model. The method obtains the positions of virtual APs through distance-ratio localization and fuses this information with RSSI as the input of a data-augmented CNN model to determine the sample's location. An experimental scheme was designed to collect real user-terminal RSSI data, build a fingerprint localization dataset, and verify the effectiveness of the proposed fingerprint localization scheme. Experimental results show that, on this dataset, the proposed method achieves 91% accuracy in determining the region and keeps 95% of localization errors within 2 m. Compared with existing localization schemes, the proposed scheme significantly improves localization accuracy.
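As a hedged illustration of this kind of discriminative model, the sketch below concatenates an RSSI fingerprint with estimated virtual-AP coordinates and feeds the result to a small 1-D CNN that predicts the region; all dimensions and layer sizes are assumptions, not those of the cited work.

```python
import torch
import torch.nn as nn

class FingerprintCNN(nn.Module):
    """Illustrative sketch of an RSSI-fingerprint region classifier: the RSSI
    vector from n_aps access points is concatenated with estimated virtual-AP
    coordinates and passed through a small 1-D CNN. Sizes are assumptions."""
    def __init__(self, n_aps=20, n_virtual_coords=6, n_regions=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_regions),
        )

    def forward(self, rssi, virtual_ap_xy):
        # rssi: (batch, n_aps); virtual_ap_xy: (batch, n_virtual_coords)
        x = torch.cat([rssi, virtual_ap_xy], dim=1).unsqueeze(1)
        return self.net(x)                     # region logits

model = FingerprintCNN()
logits = model(torch.randn(4, 20), torch.randn(4, 6))
print(logits.argmax(dim=1))                    # predicted region index per sample
```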
Funding (SEFormer rotating machinery fault diagnosis study): supported by the National Natural Science Foundation of China (No. 52277055).
Funding (IDSSCNN-XgBoost micro-expression recognition study): supported by the Key Research and Development Program of Jiangsu Province under Grant BE2022059-3, by CTBC Bank through an Industry-Academia Cooperation Project, and by the Ministry of Science and Technology of Taiwan through Grants MOST-108-2218-E-002-055, MOST-109-2223-E-009-002-MY3, MOST-109-2218-E-009-025, and MOST-109-2218-E-002-015.
Funding (SGCCN hyperspectral image classification study): supported in part by the National Natural Science Foundation of China (No. 61801214) and the Postgraduate Research Practice Innovation Program of NUAA (No. xcxjh20231504).
Funding (ResMHCNN medical image classification study): supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) Innovative Human Resource Development for Local Intellectualization program grant funded by the Korea government (MSIT) (IITP-2025-RS-2023-00259678), and by an INHA UNIVERSITY Research Grant.