Video emotion recognition is widely used due to its alignment with the temporal characteristics of human emotional expression, but existing models have significant shortcomings. On the one hand, Transformer multi-head self-attention modeling of global temporal dependency suffers from high computational overhead and feature similarity. On the other hand, fixed-size convolution kernels are often used, which have weak perception ability for emotional regions of different scales. Therefore, this paper proposes a video emotion recognition model that combines multi-scale region-aware convolution with temporal interactive sampling. In terms of space, multi-branch large-kernel stripe convolution is used to perceive emotional region features at different scales, and attention weights are generated for each scale feature. In terms of time, multi-layer odd-even down-sampling is performed on the time series, and odd-even sub-sequence interaction is performed to solve the problem of feature similarity, while reducing computational costs due to the linear relationship between sampling and convolution overhead. The model was tested on CMU-MOSI, CMU-MOSEI, and Hume Reaction. Acc-2 reached 83.4%, 85.2%, and 81.2%, respectively. The experimental results show that the model can significantly improve the accuracy of emotion recognition.
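To make the spatial branch concrete, the following PyTorch sketch shows one way a multi-branch large-kernel stripe convolution could produce per-scale attention weights. The kernel sizes, the depthwise 1×k/k×1 factorization, and the sigmoid gating are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StripeConvBranch(nn.Module):
    """One branch: a 1xk followed by a kx1 depthwise stripe convolution."""
    def __init__(self, channels, k):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, kernel_size=(1, k),
                                    padding=(0, k // 2), groups=channels)
        self.vertical = nn.Conv2d(channels, channels, kernel_size=(k, 1),
                                  padding=(k // 2, 0), groups=channels)

    def forward(self, x):
        return self.vertical(self.horizontal(x))

class MultiScaleStripeAttention(nn.Module):
    """Aggregates stripe branches of different kernel sizes and turns the
    fused result into attention weights applied back to the input features."""
    def __init__(self, channels, kernel_sizes=(7, 11, 21)):
        super().__init__()
        self.branches = nn.ModuleList([StripeConvBranch(channels, k) for k in kernel_sizes])
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, kernel_size=1)

    def forward(self, x):
        scale_feats = [b(x) for b in self.branches]           # one feature map per scale
        attn = torch.sigmoid(self.fuse(torch.cat(scale_feats, dim=1)))
        return x * attn                                       # re-weight input features

if __name__ == "__main__":
    feats = torch.randn(2, 64, 56, 56)                        # (batch, channels, H, W)
    print(MultiScaleStripeAttention(64)(feats).shape)         # torch.Size([2, 64, 56, 56])
```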
This study proposes a multi-scale simplified residual convolutional neural network (MS-SRCNN) for the precise prediction of Mg-Nd binary alloy compositions from scanning electron microscope (SEM) images. A multi-scale data structure is established by spatially aligning and stacking SEM images at different magnifications. The MS-SRCNN reduces computational runtime by over 90% compared to traditional architectures such as ResNet50, VGG16, and VGG19, without compromising prediction accuracy. The model also demonstrates stronger predictive performance, achieving a >5% increase in R^(2) compared to single-scale models. Furthermore, the MS-SRCNN exhibits robust composition prediction capability across other Mg-based binary alloys, including Mg-La, Mg-Sn, Mg-Ce, Mg-Sm, Mg-Ag, and Mg-Y, underscoring its generalization and extrapolation potential. This research establishes a non-destructive, microstructure-informed composition analysis framework, reduces characterization time compared to traditional experimental methods, and provides insights into the composition-microstructure relationship in diverse material systems.
Transportation systems are experiencing a significant transformation due to the integration of advanced technologies, including artificial intelligence and machine learning. In the context of intelligent transportation systems (ITS) and Advanced Driver Assistance Systems (ADAS), the development of efficient and reliable traffic light detection mechanisms is crucial for enhancing road safety and traffic management. This paper presents an optimized convolutional neural network (CNN) framework designed to detect traffic lights in real-time within complex urban environments. Leveraging multi-scale pyramid feature maps, the proposed model addresses key challenges such as the detection of small, occluded, and low-resolution traffic lights amidst complex backgrounds. The integration of dilated convolutions, Region of Interest (ROI) alignment, and Soft Non-Maximum Suppression (Soft-NMS) further improves detection accuracy and reduces false positives. By optimizing computational efficiency and parameter complexity, the framework is designed to operate seamlessly on embedded systems, ensuring robust performance in real-world applications. Extensive experiments using real-world datasets demonstrate that our model significantly outperforms existing methods, providing a scalable solution for ITS and ADAS applications. This research contributes to the advancement of Artificial Intelligence-driven (AI-driven) pattern recognition in transportation systems and offers a mathematical approach to improving efficiency and safety in logistics and transportation networks.
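As one concrete piece of the pipeline, the following NumPy sketch shows Gaussian Soft-NMS, the score-decay alternative to hard NMS referenced above; the sigma and score-threshold values are illustrative defaults, not the paper's settings.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: instead of discarding overlapping boxes outright,
    decay their scores by exp(-IoU^2 / sigma) and keep those above threshold."""
    boxes, scores = boxes.astype(float), scores.astype(float).copy()
    keep, idxs = [], np.arange(len(scores))
    while len(idxs) > 0:
        best = idxs[np.argmax(scores[idxs])]
        keep.append(best)
        idxs = idxs[idxs != best]
        if len(idxs) == 0:
            break
        overlaps = iou(boxes[best], boxes[idxs])
        scores[idxs] *= np.exp(-(overlaps ** 2) / sigma)   # soft suppression
        idxs = idxs[scores[idxs] > score_thresh]           # drop near-zero scores
    return keep

if __name__ == "__main__":
    boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]])
    scores = np.array([0.9, 0.8, 0.7])
    print(soft_nms(boxes, scores))   # the heavily overlapping second box gets down-weighted
```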
Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns—such as texture, intensity, and color—with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves F_(β) by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in F_(β); on CAMO and CHAMELEON, it obtains 4.5-point increases in F_(β), respectively. These consistent gains substantiate the superiority and robustness of EGMFNet.
Driven by rapid advances in deep learning, object detection has been widely adopted across diverse application scenarios. However, in low-light conditions, critical visual cues of target objects are severely degraded, posing a significant challenge for accurate low-light object detection. Existing methods struggle to preserve discriminative features while maintaining semantic consistency between low-light and normal-light images. For this purpose, this study proposes a DL-YOLO model specially tailored for low-light detection. To mitigate target feature attenuation introduced by repeated downsampling, we design a Multi-Scale Feature Convolution (MSF-Conv) module that captures rich, multi-level details via multi-scale feature learning, thereby reducing model complexity and computational cost. For feature fusion, we integrate the C3k2-DWR module by embedding the Dilation-wise Residual (DWR) mechanism into the 2-core optimized Cross Stage Partial (C3) framework, achieving efficient feature integration. In addition, we replace conventional localization losses with WIoU (Weighted Intersection over Union), which dynamically adjusts gradient gain according to sample quality, thereby improving localization robustness and precision. Experiments on the ExDark dataset demonstrate that DL-YOLO delivers strong low-light detection performance. The relevant code is published at https://github.com/cym0997/DL-YOLO.
Semantic segmentation for mixed scenes of aerial remote sensing and road traffic is one of the key technologies for visual perception of flying cars. State-of-the-Art (SOTA) semantic segmentation methods have made remarkable achievements in both fine-grained segmentation and real-time performance. However, when faced with the huge differences in scale and semantic categories brought about by mixed scenes of aerial remote sensing and road traffic, they still face great challenges, and there is little related research. Addressing the above issue, this paper proposes a semantic segmentation model specifically for mixed datasets of aerial remote sensing and road traffic scenes. First, a novel decoding-recoding multi-scale feature iterative refinement structure is proposed, which utilizes the re-integration and continuous enhancement of multi-scale information to effectively deal with the huge scale differences between cross-domain scenes, while using a fully convolutional structure to meet the lightweight and real-time requirements. Second, a well-designed cross-window attention mechanism combined with a global information integration decoding block forms enhanced global context perception, which can effectively capture the long-range dependencies and multi-scale global context information of different scenes, thereby achieving fine-grained semantic segmentation. The proposed method is tested on a large-scale mixed dataset of aerial remote sensing and road traffic scenes. The results confirm that it can effectively deal with the problem of large scale differences in cross-domain scenes. Its segmentation accuracy surpasses that of the SOTA methods while meeting real-time requirements.
The wheat above-ground biomass (AGB) is an important index of vegetation life activity and is of great significance for wheat growth monitoring and yield prediction. Traditional biomass estimation methods include sample surveys and harvesting statistics. Although these methods have high estimation accuracy, they are time-consuming, destructive, and difficult to implement for monitoring biomass at a large scale. The main objective of this study is to optimize traditional remote sensing methods to estimate the wheat AGB based on improved convolutional features (CFs). Low-cost unmanned aerial vehicles (UAV) were used as the main data acquisition equipment. This study acquired RGB camera (RGB) image data and multi-spectral (MS) image data of the wheat population canopy for two wheat varieties and five key growth stages. Field measurements were then conducted to obtain actual wheat biomass data for validation. Based on remote sensing indices (RSIs), structural features (SFs), and CFs, this study proposed a new feature set named AUR-50 (multi-source combination based on convolutional feature optimization) to estimate the wheat AGB. The results show that AUR-50 could estimate the wheat AGB more accurately than RSIs and SFs, with an average R^(2) exceeding 0.77. In the overwintering period, AUR-50_(MS) (multi-source combination with convolutional feature optimization using multispectral imagery) had the highest estimation accuracy (R^(2) of 0.88). In addition, AUR-50 reduced the effect of vegetation index saturation on biomass estimation accuracy by adding CFs, where the highest R^(2) was 0.69 at the flowering stage. The results of this study provide an effective method to evaluate the AGB in wheat with high throughput and a research reference for the phenotypic parameters of other crops.
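A minimal sketch of the multi-source idea, assuming the three feature groups are simply concatenated and fed to an off-the-shelf regressor; the feature dimensions (chosen so the combination has 50 columns, echoing the AUR-50 name), the random placeholder data, and the random forest are assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for the three feature groups described above (per sampling plot):
# remote sensing indices (RSIs), canopy structural features (SFs), and pooled
# convolutional features (CFs). Real values would come from the UAV imagery;
# here they are random placeholders for illustration only.
n_plots = 200
rsi = rng.normal(size=(n_plots, 8))       # e.g., NDVI-like indices
sf = rng.normal(size=(n_plots, 4))        # e.g., canopy height / coverage
cf = rng.normal(size=(n_plots, 38))       # pooled convolutional features
agb = rng.normal(loc=5.0, scale=1.0, size=n_plots)   # measured biomass, placeholder

# Multi-source combination: concatenate the groups into one feature matrix.
X = np.hstack([rsi, sf, cf])              # 8 + 4 + 38 = 50 features
X_tr, X_te, y_tr, y_te = train_test_split(X, agb, test_size=0.3, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out plots:", r2_score(y_te, model.predict(X_te)))
```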
In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy significantly declines when the data is occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: a Joint Interpolation Module (JI Module), a Multi-scale Temporal Convolution Network (MS-TCN), and a Suppression Graph Convolutional Network (SGCN). The JI Module completes the spatially occluded skeletal joints using the K-Nearest Neighbors (KNN) interpolation method. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for the temporal occlusion of gait information. The SGCN extracts more non-prominent human gait features by suppressing the extraction of key body part features, thereby reducing the negative impact of occlusion on emotion recognition results. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources like BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated using STEP-Gen technology, and ELMB, consisting of 3924 gaits, of which 1835 are labeled with emotions such as “Happy,” “Sad,” “Angry,” and “Neutral.” On the standard datasets Emotion-Gait and ELMB, the proposed method achieved accuracies of 0.900 and 0.896, respectively, attaining performance comparable to other state-of-the-art methods. Furthermore, on occlusion datasets, the proposed method significantly mitigates the performance degradation caused by occlusion, achieving accuracy markedly higher than that of other methods.
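One plausible reading of the JI Module's KNN interpolation is sketched below: an occluded joint is filled with the average of its k temporally nearest visible observations. The temporal-neighbor choice, k value, and array layout are assumptions, not the paper's exact procedure.

```python
import numpy as np

def knn_interpolate_joints(seq, mask, k=3):
    """Fill occluded joints in a skeleton sequence.

    seq  : (T, J, C) joint coordinates over T frames, J joints, C dims
    mask : (T, J) boolean, True where the joint was observed
    For every missing (frame, joint), average the joint's coordinates over the
    k temporally nearest frames in which that joint is visible.
    """
    seq = seq.copy()
    for j in range(seq.shape[1]):
        visible = np.flatnonzero(mask[:, j])
        if len(visible) == 0:
            continue                                    # joint never seen, leave as-is
        for t in np.flatnonzero(~mask[:, j]):
            nearest = visible[np.argsort(np.abs(visible - t))[:k]]
            seq[t, j] = seq[nearest, j].mean(axis=0)    # average of k nearest observations
    return seq

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq = rng.normal(size=(60, 16, 3))                  # 60 frames, 16 joints, 3-D
    mask = rng.random((60, 16)) > 0.2                   # ~20% of joints occluded
    seq[~mask] = 0.0                                    # zero out the occluded joints
    filled = knn_interpolate_joints(seq, mask)
    print(np.abs(filled[~mask]).mean() > 0)             # True: occluded joints were filled
```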
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines low-frequency, high-frequency and supplementary information to obtain multi-scale features. Subsequently, the attention strategy and fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
Faulty-feeder detection in neutral-point non-effectively grounded distribution networks consistently attracts research attention since it directly affects the quality and safety of energy supply. Most modern research on faulty-feeder detection tends to apply more complex digital signal processing techniques and deeper neural networks in order to better extract and learn as many detailed characteristics as possible. However, these approaches may easily result in overfitting and high computational cost, which cannot meet requirements for detection accuracy and efficiency in practical applications. This paper proposes an innovative waveform encoding method and details a simple convolutional neural network (CNN) with one layer of convolution used for identification, which seeks to improve detection accuracy and efficiency simultaneously. First, sparse characteristics of waveforms are utilized to encode them into compact vectors, and a waveform-vector matrix is generated. Second, to analyze the waveform-vector matrix, a simple CNN with multi-scale filters and one layer of convolution is established. Finally, a methodology for faulty-feeder detection is proposed, and both detection accuracy and efficiency are considerably enhanced. Comparative studies have confirmed the clear superiority of the developed method, which outperforms existing approaches in both detection accuracy and efficiency, highlighting its significant potential for application.
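A rough PyTorch sketch of what a one-convolution-layer CNN with parallel multi-scale filters over a waveform-vector matrix could look like; the matrix dimensions, kernel sizes, and classification head are assumptions, since the paper's encoding details are not reproduced here.

```python
import torch
import torch.nn as nn

class OneLayerMultiScaleCNN(nn.Module):
    """A single convolution stage with parallel multi-scale filters, followed by
    global pooling and a linear classifier over the candidate feeders."""
    def __init__(self, n_feeders, code_len, kernel_sizes=(3, 5, 7), n_filters=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(1, n_filters, kernel_size=(1, k), padding=(0, k // 2))
            for k in kernel_sizes
        ])
        self.classify = nn.Linear(n_filters * len(kernel_sizes), n_feeders)

    def forward(self, wv_matrix):
        # wv_matrix: (batch, n_feeders, code_len) waveform-vector matrix
        x = wv_matrix.unsqueeze(1)                        # add a channel dimension
        feats = [torch.relu(b(x)) for b in self.branches]
        pooled = [f.amax(dim=(2, 3)) for f in feats]      # global max pool per branch
        return self.classify(torch.cat(pooled, dim=1))    # logits over feeders

if __name__ == "__main__":
    model = OneLayerMultiScaleCNN(n_feeders=10, code_len=64)
    logits = model(torch.randn(4, 10, 64))
    print(logits.shape)   # torch.Size([4, 10])
```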
Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow has complex spatial-temporal correlations and periodicity, which poses difficulties for precise prediction. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multiscale traffic flow prediction is proposed. Firstly, to capture the hidden traffic periodicity of traffic flow, traffic flow is divided into three kinds of periods, including hourly, daily, and weekly data. Secondly, a graph attention residual layer is constructed to learn the global spatial features across regions. Local spatial-temporal dependence is captured by using a T-GCN module. Thirdly, a transformer layer is introduced to learn the long-term dependence in time. A position embedding mechanism is introduced to label position information for all traffic sequences. Thus, this multi-head self-attention mechanism can recognize the sequence order and allocate weights for different time nodes. Experimental results on four real-world datasets show that the MSSTGCN performs better than the baseline methods and can be successfully adapted to traffic prediction tasks.
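The hourly/daily/weekly decomposition can be illustrated with a short NumPy helper that slices a flow series into a recent segment plus matching segments from previous days and weeks; the 5-minute resolution and window lengths are illustrative, not MSSTGCN's exact settings.

```python
import numpy as np

def periodic_segments(flow, t, steps_per_hour=12, recent=12, n_days=3, n_weeks=2):
    """Build three inputs for a prediction made at time step t: the most recent
    window, the same window on previous days, and on previous weeks.
    flow: (T, n_nodes) traffic flow at 5-minute resolution (12 steps per hour)."""
    day = 24 * steps_per_hour
    week = 7 * day
    recent_seg = flow[t - recent:t]                                     # last hour
    daily_seg = np.stack([flow[t - d * day - recent:t - d * day]
                          for d in range(1, n_days + 1)])              # same hour, past days
    weekly_seg = np.stack([flow[t - w * week - recent:t - w * week]
                           for w in range(1, n_weeks + 1)])            # same hour, past weeks
    return recent_seg, daily_seg, weekly_seg

if __name__ == "__main__":
    flow = np.random.rand(5000, 207)           # ~17 days of 5-minute data for 207 sensors
    r, d, w = periodic_segments(flow, t=4500)
    print(r.shape, d.shape, w.shape)            # (12, 207) (3, 12, 207) (2, 12, 207)
```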
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
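For reference, a minimal PyTorch sketch of a depthwise dilated separable convolution block in the spirit of the DDSC layer: a dilated depthwise convolution (enlarged receptive field, no extra parameters) followed by a 1×1 pointwise convolution; the specific dilation, normalization, and activation choices are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Depthwise dilated convolution followed by a 1x1 pointwise convolution.
    Dilation enlarges the receptive field without adding parameters."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2, stride=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2          # keep spatial size at stride 1
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=pad, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

if __name__ == "__main__":
    block = DepthwiseDilatedSeparableConv(32, 64, dilation=2)
    print(block(torch.randn(1, 32, 56, 56)).shape)   # torch.Size([1, 64, 56, 56])
```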
There is instability in the distributed energy storage cloud-group-end region on the power grid side. In order to avoid large-scale fluctuating charging and discharging in the power grid environment and to keep the capacitor components in a continuous and stable charging and discharging state, a hierarchical time-sharing configuration algorithm for the grid-side distributed energy storage cloud-group-end region based on a multi-scale and multi-feature convolution neural network is proposed. Firstly, a voltage stability analysis model based on a multi-scale and multi-feature convolution neural network is constructed, and the network is optimized with the Self-Organizing Maps (SOM) algorithm to analyze the voltage stability of the grid-side distributed energy storage cloud-group-end region under the framework of credibility. According to the optimal scheduling objectives and network size, the distributed robust optimal configuration control model is solved under the framework of coordinated optimal scheduling at multiple time scales. Finally, the time series characteristics of regional power grid load and distributed generation are analyzed. According to the regional hierarchical time-sharing configuration model of the “cloud”, “group” and “end” layers, the grid-side distributed energy storage cloud-group-end regional hierarchical time-sharing configuration algorithm is realized. The experimental results show that after applying this algorithm, the best grid-side distributed energy storage configuration scheme can be determined, and the stability of the hierarchical time-sharing configuration of the grid-side distributed energy storage cloud-group-end region can be improved.
With the rapid expansion of drone applications, accurate detection of objects in aerial imagery has become crucial for intelligent transportation, urban management, and emergency rescue missions. However, existing methods face numerous challenges in practical deployment, including scale variation handling, feature degradation, and complex backgrounds. To address these issues, we propose Edge-enhanced and Detail-Capturing You Only Look Once (EHDC-YOLO), a novel framework for object detection in Unmanned Aerial Vehicle (UAV) imagery. Based on the You Only Look Once version 11 nano (YOLOv11n) baseline, EHDC-YOLO systematically introduces several architectural enhancements: (1) a Multi-Scale Edge Enhancement (MSEE) module that leverages multi-scale pooling and edge information to enhance boundary feature extraction; (2) an Enhanced Feature Pyramid Network (EFPN) that integrates P2-level features with Cross Stage Partial (CSP) structures and OmniKernel convolutions for better fine-grained representation; and (3) a Dynamic Head (DyHead) with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability. Comprehensive experiments on the Vision meets Drones for Detection (VisDrone-DET) 2019 dataset demonstrate that EHDC-YOLO achieves significant improvements, increasing mean Average Precision (mAP)@0.5 from 33.2% to 46.1% (an absolute improvement of 12.9 percentage points) and mAP@0.5:0.95 from 19.5% to 28.0% (an absolute improvement of 8.5 percentage points) compared with the YOLOv11n baseline, while maintaining a reasonable parameter count (2.81 M vs the baseline's 2.58 M). Further ablation studies confirm the effectiveness of each proposed component, while visualization results highlight EHDC-YOLO's superior performance in detecting objects and handling occlusions in complex drone scenarios.
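A hedged sketch of the edge-plus-multi-scale-pooling idea behind an MSEE-style module: a fixed Sobel operator supplies per-channel edge magnitudes, several average-pooling scales supply context, and a 1×1 convolution fuses them into an additive enhancement of the input features. The Sobel choice, pooling sizes, and fusion scheme are assumptions, not the published module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEdgeEnhance(nn.Module):
    """Fixed depthwise Sobel edges plus multi-scale average pooling, fused by a
    1x1 convolution and added back onto the input feature map."""
    def __init__(self, channels, pool_sizes=(3, 5, 7)):
        super().__init__()
        self.channels = channels
        self.pool_sizes = pool_sizes
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel = torch.stack([gx, gx.t()]).unsqueeze(1)                   # (2, 1, 3, 3)
        self.register_buffer("sobel", sobel.repeat(channels, 1, 1, 1))   # depthwise kernels
        self.fuse = nn.Conv2d(channels * (len(pool_sizes) + 1), channels, kernel_size=1)

    def forward(self, x):
        # Depthwise Sobel: dx and dy responses per channel, combined into a magnitude map.
        g = F.conv2d(x, self.sobel, padding=1, groups=self.channels)
        g = g.view(x.size(0), self.channels, 2, x.size(2), x.size(3))
        edge = g.pow(2).sum(dim=2).sqrt()
        ctx = [F.avg_pool2d(x, k, stride=1, padding=k // 2) for k in self.pool_sizes]
        return x + self.fuse(torch.cat([edge, *ctx], dim=1))

if __name__ == "__main__":
    m = MultiScaleEdgeEnhance(16)
    print(m(torch.randn(1, 16, 40, 40)).shape)   # torch.Size([1, 16, 40, 40])
```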
Dear Editor, This letter presents a node feature similarity preserving graph convolutional framework, P²G. Graph neural networks (GNNs) have garnered significant attention for their efficacy in learning graph representations across diverse real-world applications.
Tomato is a major economic crop worldwide, and diseases on tomato leaves can significantly reduce both yield and quality. Traditional manual inspection is inefficient and highly subjective, making it difficult to meet the requirements of early disease identification in complex natural environments. To address this issue, this study proposes an improved YOLO11-based model, YOLO-SPDNet (Scale Sequence Fusion, Position-Channel Attention, and Dual Enhancement Network). The model integrates the SEAM (Self-Ensembling Attention Mechanism) semantic enhancement module, the MLCA (Mixed Local Channel Attention) lightweight attention mechanism, and the SPA (Scale-Position-Detail Awareness) module composed of SSFF (Scale Sequence Feature Fusion), TFE (Triple Feature Encoding), and CPAM (Channel and Position Attention Mechanism). These enhancements strengthen fine-grained lesion detection while keeping the model lightweight. Experimental results show that YOLO-SPDNet achieves an accuracy of 91.8%, a recall of 86.5%, and an mAP@0.5 of 90.6% on the test set, with a computational complexity of 12.5 GFLOPs. Furthermore, the model reaches a real-time inference speed of 987 FPS, making it suitable for deployment on mobile agricultural terminals and online monitoring systems. Comparative analysis and ablation studies further validate the reliability and practical applicability of the proposed model in complex natural scenes.
This paper introduces a fuzzy C-means-based pooling layer for convolutional neural networks that explicitly models local uncertainty and ambiguity. Conventional pooling operations, such as max and average, apply rigid aggregation and often discard fine-grained boundary information. In contrast, our method computes soft memberships within each receptive field and aggregates cluster-wise responses through membership-weighted pooling, thereby preserving informative structure while reducing dimensionality. Being differentiable, the proposed layer operates as standard two-dimensional pooling. We evaluate our approach across various CNN backbones and open datasets, including CIFAR-10/100, STL-10, LFW, and ImageNette, and further probe small training set restrictions on MNIST and Fashion-MNIST. In these settings, the proposed pooling consistently improves accuracy and weighted F1 over conventional baselines, with particularly strong gains when training data are scarce. Even with less than 1% of the training set, our method maintains reliable performance, indicating improved sample efficiency and robustness to noisy or ambiguous local patterns. Overall, integrating soft memberships into the pooling operator provides a practical and generalizable inductive bias that enhances robustness and generalization in modern CNN pipelines.
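A simplified, differentiable sketch of membership-weighted pooling, assuming two prototypes per window (its min and max) stand in for the FCM cluster centres and memberships use fuzzifier m=2; the actual layer presumably runs proper FCM updates, so this is only an illustration of the weighting idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuzzyPool2d(nn.Module):
    """Simplified fuzzy pooling: within each window, the min and max values act
    as low/high prototypes, soft FCM-style memberships (m=2) are computed, and
    the output is the membership-weighted mean toward the high prototype."""
    def __init__(self, kernel_size=2, stride=2, eps=1e-6):
        super().__init__()
        self.k, self.s, self.eps = kernel_size, stride, eps

    def forward(self, x):
        b, c, h, w = x.shape
        patches = F.unfold(x, self.k, stride=self.s)              # (B, C*k*k, L)
        patches = patches.view(b, c, self.k * self.k, -1)          # (B, C, k*k, L)
        lo = patches.min(dim=2, keepdim=True).values                # low prototype
        hi = patches.max(dim=2, keepdim=True).values                # high prototype
        d_hi = (patches - hi).abs() + self.eps
        d_lo = (patches - lo).abs() + self.eps
        u_hi = 1.0 / (1.0 + (d_hi / d_lo) ** 2)                     # FCM membership, m=2
        pooled = (u_hi * patches).sum(dim=2) / u_hi.sum(dim=2)      # weighted aggregation
        out_h = (h - self.k) // self.s + 1
        out_w = (w - self.k) // self.s + 1
        return pooled.view(b, c, out_h, out_w)

if __name__ == "__main__":
    x = torch.randn(2, 8, 32, 32)
    print(FuzzyPool2d()(x).shape)   # torch.Size([2, 8, 16, 16])
```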
Phishing email detection represents a critical research challenge in cybersecurity. To address this, this paper proposes a novel Double-S (statistical-semantic) feature model based on three core entities involved in email communication: the sender, recipient, and email content. We employ strategic game theory to analyze the offensive strategies of phishing attackers and the defensive strategies of protectors, extracting statistical features from these entities. We also leverage the Qwen large language model to excavate implicit semantic features (e.g., emotional manipulation and social engineering tactics) from email content. By integrating statistical and semantic features, our model achieves a robust representation of phishing emails. We introduce a hybrid detection model that integrates a convolutional neural network (CNN) module with the XGBoost (Extreme Gradient Boosting) classifier, effectively capturing local correlations in high-dimensional features. Experimental results on real-world phishing email datasets demonstrate the superiority of our approach, achieving an F1-score of 0.9587, precision of 0.9591, and recall of 0.9583, representing improvements of 1.3%–10.6% compared to state-of-the-art methods.
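A small sketch of the CNN-plus-XGBoost hybrid idea: a 1-D CNN over the ordered feature vector serves as a local-correlation feature extractor whose outputs are passed to an XGBoost classifier. The feature dimensionality, network depth, and the fact that the extractor is left untrained here are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class LocalFeatureCNN(nn.Module):
    """1-D CNN over the ordered feature vector, used only as a feature extractor
    capturing local correlations between adjacent statistical/semantic features."""
    def __init__(self, n_features, n_filters=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, n_filters, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )

    def forward(self, x):                       # x: (batch, n_features)
        z = self.conv(x.unsqueeze(1))           # (batch, n_filters, 8)
        return z.flatten(1)                     # (batch, n_filters * 8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64)).astype(np.float32)   # Double-S feature vectors (placeholder)
    y = rng.integers(0, 2, size=500)                     # 1 = phishing, 0 = legitimate

    cnn = LocalFeatureCNN(n_features=64).eval()
    with torch.no_grad():                                # untrained extractor, for illustration
        feats = cnn(torch.from_numpy(X)).numpy()

    clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
    clf.fit(feats, y)
    print("train accuracy:", (clf.predict(feats) == y).mean())
```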
Parkinson’s disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, it is challenging due to the nature of small-scale and imbalanced PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using a CNN and extend the conventional SVM to a DSVM for better classification performance in small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our consideration). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model’s performance. For performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves the sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves the sensitivity by 1.24%–57.4% and the specificity by 1.04%–163% and reduces biased detection towards the majority class. The ablation experiments confirm the effectiveness of individual components. Two future research directions have also been suggested.
Deep learning has been widely used to model soft sensors in modern industrial processes with nonlinear variables and uncertainty. Due to its outstanding ability for high-level feature extraction, the stacked autoencoder (SAE) has been widely used to improve the model accuracy of soft sensors. However, with the increase of network layers, SAE may encounter serious information loss issues, which affect the modeling performance of soft sensors. Besides, there are typically very few labeled samples in the data set, which brings challenges that traditional neural networks struggle to solve. In this paper, a multi-scale feature fused stacked autoencoder (MFF-SAE) is suggested for feature representation related to hierarchical output, where stacked autoencoder, mutual information (MI) and multi-scale feature fusion (MFF) strategies are integrated. Based on correlation analysis between output and input variables, critical hidden variables are extracted from the original variables in each autoencoder's input layer and are correspondingly given varying weights. Besides, an integration strategy based on multi-scale feature fusion is adopted to mitigate the impact of information loss as the network layers deepen. The MFF-SAE method is then designed and stacked to form deep networks. Two practical industrial processes are utilized to evaluate the performance of MFF-SAE. Simulation results indicate that, in comparison to other cutting-edge techniques, the proposed method can considerably enhance the accuracy of soft sensor modeling, reducing the root mean square error (RMSE) by 71.8% and 17.1%, and 64.7% and 15.1%, respectively.
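The output-related weighting idea can be illustrated with a single autoencoder layer whose inputs are scaled by mutual information with the quality variable; stacking such layers (feeding each hidden representation to the next autoencoder) and the multi-scale fusion step are omitted, and all data, sizes, and training settings below are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype(np.float32)        # process variables (placeholder)
y = X[:, 0] * 2.0 + X[:, 3] - 0.5 * X[:, 7] + rng.normal(scale=0.1, size=1000)  # quality variable

# Output-related weighting: mutual information between each input and the output,
# normalised to [0, 1] and used to scale the inputs of the (first) autoencoder.
mi = mutual_info_regression(X, y, random_state=0)
weights = torch.tensor(mi / mi.max(), dtype=torch.float32)

class WeightedAutoencoder(nn.Module):
    """One autoencoder layer whose inputs are scaled by output-related MI weights."""
    def __init__(self, n_in, n_hidden, weights):
        super().__init__()
        self.register_buffer("w", weights)
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        xw = x * self.w                       # emphasise variables informative about the output
        h = self.encoder(xw)
        return self.decoder(h), h             # reconstruction and hidden features

model = WeightedAutoencoder(20, 8, weights)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.from_numpy(X)
for _ in range(200):                          # unsupervised pre-training of this layer
    recon, _ = model(xb)
    loss = nn.functional.mse_loss(recon, xb * model.w)
    opt.zero_grad(); loss.backward(); opt.step()
print("reconstruction MSE:", float(loss))
```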
基金supported,in part,by the National Nature Science Foundation of China under Grant 62272236,62376128in part,by the Natural Science Foundation of Jiangsu Province under Grant BK20201136,BK20191401.
文摘Video emotion recognition is widely used due to its alignment with the temporal characteristics of human emotional expression,but existingmodels have significant shortcomings.On the one hand,Transformermultihead self-attention modeling of global temporal dependency has problems of high computational overhead and feature similarity.On the other hand,fixed-size convolution kernels are often used,which have weak perception ability for emotional regions of different scales.Therefore,this paper proposes a video emotion recognition model that combines multi-scale region-aware convolution with temporal interactive sampling.In terms of space,multi-branch large-kernel stripe convolution is used to perceive emotional region features at different scales,and attention weights are generated for each scale feature.In terms of time,multi-layer odd-even down-sampling is performed on the time series,and oddeven sub-sequence interaction is performed to solve the problem of feature similarity,while reducing computational costs due to the linear relationship between sampling and convolution overhead.This paper was tested on CMU-MOSI,CMU-MOSEI,and Hume Reaction.The Acc-2 reached 83.4%,85.2%,and 81.2%,respectively.The experimental results show that the model can significantly improve the accuracy of emotion recognition.
基金funded by the National Natural Science Foundation of China(No.52204407)the Natural Science Foundation of Jiangsu Province(No.BK20220595)the China Postdoctoral Science Foundation(No.2022M723689).
文摘This study proposes a multi-scale simplified residual convolutional neural network(MS-SRCNN)for the precise prediction of Mg-Nd binary alloy compositions from scanning electron microscope(SEM)images.A multi-scale data structure is established by spatially aligning and stacking SEM images at different magnifications.The MS-SRCNN significantly reduces computational runtime by over 90%compared to traditional architectures like ResNet50,VGG16,and VGG19,without compromising prediction accuracy.The model demonstrates more excellent predictive performance,achieving a>5%increase in R^(2) compared to single-scale models.Furthermore,the MS-SRCNN exhibits robust composition prediction capability across other Mg-based binary alloys,including Mg-La,Mg-Sn,Mg-Ce,Mg-Sm,Mg-Ag,and Mg-Y,thereby emphasizing its generalization and extrapolation potential.This research establishes a non-destructive,microstructure-informed composition analysis framework,reduces characterization time compared to traditional experiment methods and provides insights into the composition-microstructure relationship in diverse material systems.
基金funded by the Deanship of Scientific Research at Northern Border University,Arar,Saudi Arabia through research group No.(RG-NBU-2022-1234).
文摘Transportation systems are experiencing a significant transformation due to the integration of advanced technologies, including artificial intelligence and machine learning. In the context of intelligent transportation systems (ITS) and Advanced Driver Assistance Systems (ADAS), the development of efficient and reliable traffic light detection mechanisms is crucial for enhancing road safety and traffic management. This paper presents an optimized convolutional neural network (CNN) framework designed to detect traffic lights in real-time within complex urban environments. Leveraging multi-scale pyramid feature maps, the proposed model addresses key challenges such as the detection of small, occluded, and low-resolution traffic lights amidst complex backgrounds. The integration of dilated convolutions, Region of Interest (ROI) alignment, and Soft Non-Maximum Suppression (Soft-NMS) further improves detection accuracy and reduces false positives. By optimizing computational efficiency and parameter complexity, the framework is designed to operate seamlessly on embedded systems, ensuring robust performance in real-world applications. Extensive experiments using real-world datasets demonstrate that our model significantly outperforms existing methods, providing a scalable solution for ITS and ADAS applications. This research contributes to the advancement of Artificial Intelligence-driven (AI-driven) pattern recognition in transportation systems and offers a mathematical approach to improving efficiency and safety in logistics and transportation networks.
基金financially supported byChongqingUniversity of Technology Graduate Innovation Foundation(Grant No.gzlcx20253267).
文摘Camouflaged Object Detection(COD)aims to identify objects that share highly similar patterns—such as texture,intensity,and color—with their surrounding environment.Due to their intrinsic resemblance to the background,camouflaged objects often exhibit vague boundaries and varying scales,making it challenging to accurately locate targets and delineate their indistinct edges.To address this,we propose a novel camouflaged object detection network called Edge-Guided and Multi-scale Fusion Network(EGMFNet),which leverages edge-guided multi-scale integration for enhanced performance.The model incorporates two innovative components:a Multi-scale Fusion Module(MSFM)and an Edge-Guided Attention Module(EGA).These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries.Moreover,recognizing the rich contextual information in fused features,we introduce a Dual-Branch Global Context Module(DGCM)to refine features using extensive global context,thereby generatingmore informative representations.Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics.Specifically,on COD10K,our EGMFNet-P improves F_(β)by 4.8 points and reduces mean absolute error(MAE)by 0.006 compared with ZoomNeXt;on NC4K,it achieves a 3.6-point increase in F_(β).OnCAMO and CHAMELEON,it obtains 4.5-point increases in F_(β),respectively.These consistent gains substantiate the superiority and robustness of EGMFNet.
文摘Driven by rapid advances in deep learning,object detection has been widely adopted across diverse application scenarios.However,in low-light conditions,critical visual cues of target objects are severely degraded,posing a significant challenge for accurate low-light object detection.Existing methods struggle to preserve discriminative features while maintaining semantic consistency between low-light and normal-light images.For this purpose,this study proposes a DL-YOLO model specially tailored for low-light detection.To mitigate target feature attenuation introduced by repeated downsampling,we design aMulti-Scale FeatureConvolution(MSF-Conv)module that captures rich,multi-level details via multi-scale feature learning,thereby reducing model complexity and computational cost.For feature fusion,we integrated the C3k2-DWRmodule by embedding the Dilation-wise Residual(DWR)mechanism into the 2-core optimized Cross Stage Partial(C3)framework,achieving efficient feature integration.In addition,we replace conventional localization losses with WIoU(Weighted Intersection over Union),which dynamically adjusts gradient gain according to sample quality,thereby improving localization robustness and precision.Experiments on the ExDark dataset demonstrate that DL-YOLO delivers strong low-light detection performance.The relevant code is published at https://github.com/cym0997/DL-YOLO.
基金supported by the National Key Research and Development of China(No.2022YFB2503400).
文摘Semantic segmentation for mixed scenes of aerial remote sensing and road traffic is one of the key technologies for visual perception of flying cars.The State-of-the-Art(SOTA)semantic segmentation methods have made remarkable achievements in both fine-grained segmentation and real-time performance.However,when faced with the huge differences in scale and semantic categories brought about by the mixed scenes of aerial remote sensing and road traffic,they still face great challenges and there is little related research.Addressing the above issue,this paper proposes a semantic segmentation model specifically for mixed datasets of aerial remote sensing and road traffic scenes.First,a novel decoding-recoding multi-scale feature iterative refinement structure is proposed,which utilizes the re-integration and continuous enhancement of multi-scale information to effectively deal with the huge scale differences between cross-domain scenes,while using a fully convolutional structure to ensure the lightweight and real-time requirements.Second,a welldesigned cross-window attention mechanism combined with a global information integration decoding block forms an enhanced global context perception,which can effectively capture the long-range dependencies and multi-scale global context information of different scenes,thereby achieving fine-grained semantic segmentation.The proposed method is tested on a large-scale mixed dataset of aerial remote sensing and road traffic scenes.The results confirm that it can effectively deal with the problem of large-scale differences in cross-domain scenes.Its segmentation accuracy surpasses that of the SOTA methods,which meets the real-time requirements.
基金supported by the Postgraduate Research&Practice Innovation Program of Jiangsu Province,China(SJCX23_1973)the National Natural Science Foundation of China(32172110,32071945)+7 种基金the Key Research and Development Program(Modern Agriculture)of Jiangsu Province,China(BE2022342-2,BE2020319)the Anhui Province Crop Intelligent Planting and Processing Technology Engineering Research Center Open Project,China(ZHKF04)the National Key Research and Development Program of China(2023YFD2300201,2023YFD1202200)the Special Funds for Scientific and Technological Innovation of Jiangsu Province,China(BE2022425)the Priority Academic Program Development of Jiangsu Higher Education Institutions,China(PAPD)the Central Publicinterest Scientific Institution Basal Research Fund,China(JBYW-AII-2023-08)the Science and Technology Innovation Project of the Chinese Academy of Agricultural Sciences(CAAS-CS-202201)the Special Fund for Independent Innovation of Agriculture Science and Technology in Jiangsu Province,China(CX(22)3112)。
文摘The wheat above-ground biomass(AGB)is an important index that shows the life activity of vegetation,which is of great significance for wheat growth monitoring and yield prediction.Traditional biomass estimation methods specifically include sample surveys and harvesting statistics.Although these methods have high estimation accuracy,they are time-consuming,destructive,and difficult to implement to monitor the biomass at a large scale.The main objective of this study is to optimize the traditional remote sensing methods to estimate the wheat AGBbased on improved convolutional features(CFs).Low-cost unmanned aerial vehicles(UAV)were used as the main data acquisition equipment.This study acquired image data acquired by RGB camera(RGB)and multi-spectral(MS)image data of the wheat population canopy for two wheat varieties and five key growth stages.Then,field measurements were conducted to obtain the actual wheat biomass data for validation.Based on the remote sensing indices(RSIs),structural features(SFs),and CFs,this study proposed a new feature named AUR-50(multi-source combination based on convolutional feature optimization)to estimate the wheat AGB.The results show that AUR-50 could estimate the wheat AGB more accurately than RSIs and SFs,and the average R^(2) exceeded 0.77.In the overwintering period,AUR-50_(MS)(multi-source combination with convolutional feature optimization using multispectral imagery)had the highest estimation accuracy(R^(2) of 0.88).In addition,AUR-50 reduced the effect of the vegetation index saturation on the biomass estimation accuracy by adding CFs,where the highest R^(2) was 0.69 at the flowering stage.The results of this study provide an effective method to evaluate the AGB in wheat with high throughput and a research reference for the phenotypic parameters of other crops.
基金supported by the National Natural Science Foundation of China(62272049,62236006,62172045)the Key Projects of Beijing Union University(ZKZD202301).
文摘In recent years,gait-based emotion recognition has been widely applied in the field of computer vision.However,existing gait emotion recognition methods typically rely on complete human skeleton data,and their accuracy significantly declines when the data is occluded.To enhance the accuracy of gait emotion recognition under occlusion,this paper proposes a Multi-scale Suppression Graph ConvolutionalNetwork(MS-GCN).TheMS-GCN consists of three main components:Joint Interpolation Module(JI Moudle),Multi-scale Temporal Convolution Network(MS-TCN),and Suppression Graph Convolutional Network(SGCN).The JI Module completes the spatially occluded skeletal joints using the(K-Nearest Neighbors)KNN interpolation method.The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait,compensating for the temporal occlusion of gait information.The SGCN extracts more non-prominent human gait features by suppressing the extraction of key body part features,thereby reducing the negative impact of occlusion on emotion recognition results.The proposed method is evaluated on two comprehensive datasets:Emotion-Gait,containing 4227 real gaits from sources like BML,ICT-Pollick,and ELMD,and 1000 synthetic gaits generated using STEP-Gen technology,and ELMB,consisting of 3924 gaits,with 1835 labeled with emotions such as“Happy,”“Sad,”“Angry,”and“Neutral.”On the standard datasets Emotion-Gait and ELMB,the proposed method achieved accuracies of 0.900 and 0.896,respectively,attaining performance comparable to other state-ofthe-artmethods.Furthermore,on occlusion datasets,the proposedmethod significantly mitigates the performance degradation caused by occlusion compared to other methods,the accuracy is significantly higher than that of other methods.
基金Supported by the Henan Province Key Research and Development Project(231111211300)the Central Government of Henan Province Guides Local Science and Technology Development Funds(Z20231811005)+2 种基金Henan Province Key Research and Development Project(231111110100)Henan Provincial Outstanding Foreign Scientist Studio(GZS2024006)Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan(Application and Overcoming Technical Barriers)(242103810028)。
文摘The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images.To meet these requirements,an autoencoder-based method for infrared and visible image fusion is proposed.The encoder designed according to the optimization objective consists of a base encoder and a detail encoder,which is used to extract low-frequency and high-frequency information from the image.This extraction may lead to some information not being captured,so a compensation encoder is proposed to supplement the missing information.Multi-scale decomposition is also employed to extract image features more comprehensively.The decoder combines low-frequency,high-frequency and supplementary information to obtain multi-scale features.Subsequently,the attention strategy and fusion module are introduced to perform multi-scale fusion for image reconstruction.Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
文摘Faulty-feeder detection in neutral point noneffectively grounded distribution networks consistently attracts research attention since it directly affects quality and safety of energy supply.Most modern research on faulty-feeder detection tends to apply more complex digital signal processing techniques and deeper neural networks in order to better extract and learn as many detailed characteristics as possible.However,these approaches may easily result in overfitting and high computational cost,which cannot meet requirements for detection accuracy and efficiency in practical applications.This paper proposes an innovative waveform encoding method and details a simple convolutional neural network(CNN)with one layer of convolution used for identification,which seeks to improve detection accuracy and efficiency simultaneously.First,sparse characteristics of waveforms are utilized to encode into compact vectors,and a waveform-vector matrix is generated.Second,to deduce waveform-vector matrix,a simple CNN with multi-scale filters and one layer of convolution is established.Finally,a methodology for faulty-feeder detection is proposed,and both detection accuracy and efficiency are considerably enhanced.Comparative studies have confirmed clear superiority of the developed method,which outperforms existing approaches in both detection accuracy and efficiency,thus highlighting its significant potential for application.
基金supported by the National Natural Science Foundation of China(Grant Nos.62472149,62376089,62202147)Hubei Provincial Science and Technology Plan Project(2023BCB04100).
文摘Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow has complex spatial-temporal correlations and periodicity, which poses difficulties for precise prediction. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multiscale traffic flow prediction is proposed. Firstly, to capture the hidden traffic periodicity of traffic flow, traffic flow is divided into three kinds of periods, including hourly, daily, and weekly data. Secondly, a graph attention residual layer is constructed to learn the global spatial features across regions. Local spatial-temporal dependence is captured by using a T-GCN module. Thirdly, a transformer layer is introduced to learn the long-term dependence in time. A position embedding mechanism is introduced to label position information for all traffic sequences. Thus, this multi-head self-attention mechanism can recognize the sequence order and allocate weights for different time nodes. Experimental results on four real-world datasets show that the MSSTGCN performs better than the baseline methods and can be successfully adapted to traffic prediction tasks.
文摘Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, depthwise dilated convolution in DDSC layer effectively expands the field of view of filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses parallel multi-resolution branches architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
基金supported by State Grid Corporation Limited Science and Technology Project Funding(Contract No.SGCQSQ00YJJS2200380).
文摘There is instability in the distributed energy storage cloud group end region on the power grid side.In order to avoid large-scale fluctuating charging and discharging in the power grid environment and make the capacitor components showa continuous and stable charging and discharging state,a hierarchical time-sharing configuration algorithm of distributed energy storage cloud group end region on the power grid side based on multi-scale and multi feature convolution neural network is proposed.Firstly,a voltage stability analysis model based onmulti-scale and multi feature convolution neural network is constructed,and the multi-scale and multi feature convolution neural network is optimized based on Self-OrganizingMaps(SOM)algorithm to analyze the voltage stability of the cloud group end region of distributed energy storage on the grid side under the framework of credibility.According to the optimal scheduling objectives and network size,the distributed robust optimal configuration control model is solved under the framework of coordinated optimal scheduling at multiple time scales;Finally,the time series characteristics of regional power grid load and distributed generation are analyzed.According to the regional hierarchical time-sharing configuration model of“cloud”,“group”and“end”layer,the grid side distributed energy storage cloud group end regional hierarchical time-sharing configuration algorithm is realized.The experimental results show that after applying this algorithm,the best grid side distributed energy storage configuration scheme can be determined,and the stability of grid side distributed energy storage cloud group end region layered timesharing configuration can be improved.
文摘With the rapid expansion of drone applications,accurate detection of objects in aerial imagery has become crucial for intelligent transportation,urban management,and emergency rescue missions.However,existing methods face numerous challenges in practical deployment,including scale variation handling,feature degradation,and complex backgrounds.To address these issues,we propose Edge-enhanced and Detail-Capturing You Only Look Once(EHDC-YOLO),a novel framework for object detection in Unmanned Aerial Vehicle(UAV)imagery.Based on the You Only Look Once version 11 nano(YOLOv11n)baseline,EHDC-YOLO systematically introduces several architectural enhancements:(1)a Multi-Scale Edge Enhancement(MSEE)module that leverages multi-scale pooling and edge information to enhance boundary feature extraction;(2)an Enhanced Feature Pyramid Network(EFPN)that integrates P2-level features with Cross Stage Partial(CSP)structures and OmniKernel convolutions for better fine-grained representation;and(3)Dynamic Head(DyHead)with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability.Comprehensive experiments on the Vision meets Drones for Detection(VisDrone-DET)2019 dataset demonstrate that EHDC-YOLO achieves significant improvements,increasing mean Average Precision(mAP)@0.5 from 33.2%to 46.1%(an absolute improvement of 12.9 percentage points)and mAP@0.5:0.95 from 19.5%to 28.0%(an absolute improvement of 8.5 percentage points)compared with the YOLOv11n baseline,while maintaining a reasonable parameter count(2.81 M vs the baseline’s 2.58 M).Further ablation studies confirm the effectiveness of each proposed component,while visualization results highlight EHDC-YOLO’s superior performance in detecting objects and handling occlusions in complex drone scenarios.
Funding: Supported by the National Natural Science Foundation of China (62402399) and the New Chongqing Youth Innovation Talent Project (CSTB2024NSCQ-QCXMX0035).
Abstract: Dear Editor, this letter presents a node feature similarity preserving graph convolutional framework, P G. Graph neural networks (GNNs) have garnered significant attention for their efficacy in learning graph representations across diverse real-world applications.
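The letter's exact formulation is not reproduced here; one common way to preserve node feature similarity, sketched below under that assumption, is to pair a plain graph convolution with a regularizer that keeps embedding similarities close to the raw feature similarities.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """Plain graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return F.relu(a_hat @ self.lin(h))

def similarity_preserving_loss(x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Assumed formulation: keep the cosine-similarity matrix of the learned
    embeddings z close to that of the raw node features x."""
    sim_x = F.normalize(x, dim=1) @ F.normalize(x, dim=1).T
    sim_z = F.normalize(z, dim=1) @ F.normalize(z, dim=1).T
    return F.mse_loss(sim_z, sim_x)

# Usage sketch: total_loss = task_loss + lambda_sim * similarity_preserving_loss(x, z)
```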
Funding: Tianmin Tianyuan Boutique Vegetable Industry Technology Service Station (Grant No. 2024120011003081) and the Development of Environmental Monitoring and Traceability System for Wuqing Agricultural Production Areas (Grant No. 2024120011001866).
Abstract: Tomato is a major economic crop worldwide, and diseases on tomato leaves can significantly reduce both yield and quality. Traditional manual inspection is inefficient and highly subjective, making it difficult to meet the requirements of early disease identification in complex natural environments. To address this issue, this study proposes an improved YOLO11-based model, YOLO-SPDNet (Scale Sequence Fusion, Position-Channel Attention, and Dual Enhancement Network). The model integrates the SEAM (Self-Ensembling Attention Mechanism) semantic enhancement module, the MLCA (Mixed Local Channel Attention) lightweight attention mechanism, and the SPA (Scale-Position-Detail Awareness) module composed of SSFF (Scale Sequence Feature Fusion), TFE (Triple Feature Encoding), and CPAM (Channel and Position Attention Mechanism). These enhancements strengthen fine-grained lesion detection while keeping the model lightweight. Experimental results show that YOLO-SPDNet achieves an accuracy of 91.8%, a recall of 86.5%, and an mAP@0.5 of 90.6% on the test set, with a computational complexity of 12.5 GFLOPs. Furthermore, the model reaches a real-time inference speed of 987 FPS, making it suitable for deployment on mobile agricultural terminals and online monitoring systems. Comparative analysis and ablation studies further validate the reliability and practical applicability of the proposed model in complex natural scenes.
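The CPAM/MLCA designs are not detailed in the abstract; a rough channel-plus-position attention sketch in PyTorch, intended only to illustrate the general mechanism, could look like this (reduction ratio and kernel size are assumptions).

```python
import torch
import torch.nn as nn

class ChannelPositionAttention(nn.Module):
    """Rough sketch of a channel-plus-position attention idea (CPAM-like);
    the paper's exact MLCA/CPAM modules are not reproduced here."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        # Position gate: a single-channel spatial map from the channel-wise mean.
        self.position_gate = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                      # reweight channels
        spatial = self.position_gate(x.mean(dim=1, keepdim=True))
        return x * spatial                                # reweight positions

out = ChannelPositionAttention(32)(torch.randn(2, 32, 40, 40))
```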
Abstract: This paper introduces a fuzzy C-means-based pooling layer for convolutional neural networks that explicitly models local uncertainty and ambiguity. Conventional pooling operations, such as max and average, apply rigid aggregation and often discard fine-grained boundary information. In contrast, our method computes soft memberships within each receptive field and aggregates cluster-wise responses through membership-weighted pooling, thereby preserving informative structure while reducing dimensionality. Being differentiable, the proposed layer operates as a standard two-dimensional pooling layer. We evaluate our approach across various CNN backbones and open datasets, including CIFAR-10/100, STL-10, LFW, and ImageNette, and further probe small-training-set restrictions on MNIST and Fashion-MNIST. In these settings, the proposed pooling consistently improves accuracy and weighted F1 over conventional baselines, with particularly strong gains when training data are scarce. Even with less than 1% of the training set, our method maintains reliable performance, indicating improved sample efficiency and robustness to noisy or ambiguous local patterns. Overall, integrating soft memberships into the pooling operator provides a practical and generalizable inductive bias that enhances robustness and generalization in modern CNN pipelines.
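A minimal PyTorch sketch of membership-weighted pooling is shown below, assuming a small set of learnable prototypes and a softmax over negative squared distances in place of the exact fuzzy C-means membership update; it is an illustration of the idea, not the paper's operator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuzzyPool2d(nn.Module):
    """Sketch of a soft, membership-weighted 2x2 pooling layer. Within each
    window, values get soft memberships to a few prototypes, cluster-wise
    membership-weighted means are formed, and the clusters are combined by
    their membership mass. Prototype count and temperature are illustrative."""
    def __init__(self, kernel_size: int = 2, n_clusters: int = 2, temperature: float = 1.0):
        super().__init__()
        self.k = kernel_size
        self.prototypes = nn.Parameter(torch.linspace(-1.0, 1.0, n_clusters))
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        windows = F.unfold(x, kernel_size=self.k, stride=self.k)     # (B, C*k*k, L)
        windows = windows.view(b, c, self.k * self.k, -1)            # (B, C, k*k, L)
        dist = (windows.unsqueeze(-1) - self.prototypes) ** 2        # (B, C, k*k, L, n)
        u = F.softmax(-dist / self.temperature, dim=-1)              # soft memberships
        mass = u.sum(dim=2) + 1e-6                                   # per-cluster mass
        cluster_mean = (u * windows.unsqueeze(-1)).sum(dim=2) / mass # weighted means
        pooled = (cluster_mean * mass).sum(dim=-1) / mass.sum(dim=-1)
        return pooled.view(b, c, h // self.k, w // self.k)

out = FuzzyPool2d()(torch.randn(2, 3, 8, 8))  # -> (2, 3, 4, 4)
```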
Funding: Supported by the National Key Research and Development Program of China (No. 2023YFB3105700).
Abstract: Phishing email detection represents a critical research challenge in cybersecurity. To address this, this paper proposes a novel Double-S (statistical-semantic) feature model based on the three core entities involved in email communication: the sender, the recipient, and the email content. We employ strategic game theory to analyze the offensive strategies of phishing attackers and the defensive strategies of protectors, extracting statistical features from these entities. We also leverage the Qwen large language model to excavate implicit semantic features (e.g., emotional manipulation and social engineering tactics) from email content. By integrating statistical and semantic features, our model achieves a robust representation of phishing emails. We introduce a hybrid detection model that integrates a convolutional neural network (CNN) module with the XGBoost (Extreme Gradient Boosting) classifier, effectively capturing local correlations in high-dimensional features. Experimental results on real-world phishing email datasets demonstrate the superiority of our approach, achieving an F1-score of 0.9587, precision of 0.9591, and recall of 0.9583, representing improvements of 1.3%–10.6% compared to state-of-the-art methods.
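A simplified sketch of a CNN-plus-XGBoost pipeline is given below; the feature dimensions, CNN width, and XGBoost settings are assumptions, and in the paper the CNN would be trained on the phishing features rather than used with random weights as in this toy example.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class FeatureCNN(nn.Module):
    """Small 1-D CNN over the concatenated statistical + semantic feature
    vector, capturing local correlations before the boosted classifier."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.unsqueeze(1))   # (batch, n_features) -> (batch, channels*8)

def hybrid_predict(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    cnn = FeatureCNN().eval()
    with torch.no_grad():
        deep = cnn(torch.tensor(features, dtype=torch.float32)).numpy()
    clf = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
    clf.fit(deep, labels)                 # in practice the CNN is trained first
    return clf.predict(deep)
```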
Funding: The work described in this paper was fully supported by a grant from Hong Kong Metropolitan University (RIF/2021/05).
Abstract: Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models that use voice signals as input are common in the literature. Deep learning algorithms are believed to further enhance performance; nevertheless, this is challenging given the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction with the CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
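As a rough stand-in for the CNN-DSVM idea, the sketch below extracts deep features with a small CNN and trains a class-weighted scikit-learn SVM to curb majority-class bias; the paper's customized kernel and DSVM extension are not reproduced, and all layer sizes are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class VoiceFeatureCNN(nn.Module):
    """Tiny 1-D CNN turning a voice-feature vector into a deep embedding."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),
            nn.Linear(16 * 4, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.unsqueeze(1))

def train_cnn_svm(x: np.ndarray, y: np.ndarray) -> SVC:
    cnn = VoiceFeatureCNN().eval()
    with torch.no_grad():
        feats = cnn(torch.tensor(x, dtype=torch.float32)).numpy()
    # class_weight="balanced" down-weights the majority (healthy) class,
    # a simple substitute for the paper's bias-reducing customized kernel.
    svm = SVC(kernel="rbf", class_weight="balanced")
    return svm.fit(feats, y)
```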
Funding: Supported by the National Key Research and Development Program of China (2023YFB3307800), the National Natural Science Foundation of China (62394343, 62373155), the Major Science and Technology Project of Xinjiang (No. 2022A01006-4), the State Key Laboratory of Industrial Control Technology, China (Grant No. ICT2024A26), and the Fundamental Research Funds for the Central Universities.
Abstract: Deep learning has been widely used to model soft sensors in modern industrial processes with nonlinear variables and uncertainty. Owing to its outstanding ability for high-level feature extraction, the stacked autoencoder (SAE) has been widely used to improve the model accuracy of soft sensors. However, as the number of network layers increases, an SAE may encounter serious information loss, which affects the modeling performance of soft sensors. Besides, there are typically very few labeled samples in the data set, which is challenging for traditional neural networks. In this paper, a multi-scale feature fused stacked autoencoder (MFF-SAE) is suggested for feature representation related to hierarchical output, where stacked autoencoder, mutual information (MI), and multi-scale feature fusion (MFF) strategies are integrated. Based on correlation analysis between the output and input variables, critical hidden variables are extracted from the original variables in each autoencoder's input layer and are given corresponding weights. In addition, an integration strategy based on multi-scale feature fusion is adopted to mitigate the impact of information loss as the network deepens. The MFF-SAE method is then designed and stacked to form deep networks. Two practical industrial processes are used to evaluate the performance of MFF-SAE. Simulation results indicate that, compared with other state-of-the-art techniques, the proposed method considerably enhances the accuracy of soft sensor modeling, reducing the root mean square error (RMSE) by 71.8% and 17.1%, and by 64.7% and 15.1%, respectively.
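A small sketch of the mutual-information weighting idea for one autoencoder stage is shown below; the MI computation uses scikit-learn, and the layer sizes, synthetic data, and weighting scheme are illustrative assumptions rather than the full MFF-SAE.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_regression

class WeightedAutoencoderLayer(nn.Module):
    """One autoencoder stage whose inputs are pre-weighted by their mutual
    information with the quality variable; a rough sketch of the MI-weighting
    idea, not the full MFF-SAE stack."""
    def __init__(self, in_dim: int, hidden_dim: int, mi_weights: np.ndarray):
        super().__init__()
        self.register_buffer("w", torch.tensor(mi_weights, dtype=torch.float32))
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x: torch.Tensor):
        x = x * self.w                       # emphasise output-relevant variables
        h = self.encoder(x)
        return h, self.decoder(h)            # hidden code and reconstruction

x_np = np.random.rand(500, 10)                                   # process variables
y_np = x_np @ np.random.rand(10) + 0.1 * np.random.rand(500)     # quality variable
mi = mutual_info_regression(x_np, y_np)
layer = WeightedAutoencoderLayer(10, 4, mi / mi.max())           # normalised MI weights
code, recon = layer(torch.tensor(x_np, dtype=torch.float32))
```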