Abstract: Recent studies describe a number of difficulties associated with attention deficit in children with reading disabilities. Information about visual-spatial attention mainly arises from studies using event-related potentials (ERPs) during Posner's spatial cueing paradigm. This study aims to use neurofeedback with a special protocol for treating children with reading disabilities and, moreover, to evaluate visual-spatial attention ability by means of a Posner paradigm task and ERPs. The study was conducted in a single-subject design over 20 sessions. Participants were two male children, aged 10 to 12 years, who completed twelve 30-min neurofeedback sessions. Repeated measurements were performed during the baseline, treatment, and post-treatment phases. Results showed some improvement in Posner paradigm parameters (correct responses and valid and invalid reaction times). Furthermore, grand-average ERPs for both participants in each of the four conditions (valid-right, invalid-right, valid-left, and invalid-left) were analyzed. Analysis of the P3 component showed a reduction in latency, indicating an improvement in the timing of cognitive processes. In addition, the graphs showed a decrease in amplitude, suggesting less effortful processing than before.
Abstract: As the share of photovoltaic (PV) power in the global energy mix continues to grow, ultra-short-term PV power forecasting is critical for power system dispatch and secure operation. However, PV output is affected by many factors and exhibits pronounced randomness and volatility. To address this, an ultra-short-term PV power forecasting method based on a TCN-BiLSTM-Attention model is proposed. First, Pearson correlation analysis screens key features, the isolation forest algorithm detects outliers, and data preprocessing is completed with linear interpolation and standardization. Then, a Temporal Convolutional Network (TCN) extracts temporal features, a Bidirectional Long Short-Term Memory (BiLSTM) network captures forward and backward temporal dependencies, and an attention mechanism at the output focuses on key time-step features. Finally, comparative experiments on the Desert Knowledge Australia Solar Centre (DKASC) dataset show that, compared with conventional LSTM and BiLSTM models, the proposed TCN-BiLSTM-Attention model offers clear advantages in prediction accuracy and stability.
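The preprocessing pipeline this abstract describes (correlation screening, outlier repair, standardization) can be sketched as below. This is an illustrative NumPy sketch on synthetic data, not the paper's code: the feature names and threshold are hypothetical, and the isolation-forest step is replaced here by a simple 3-sigma rule for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's inputs (hypothetical data, not DKASC).
n = 200
irradiance = rng.uniform(0, 1000, n)             # candidate feature
temperature = rng.uniform(10, 35, n)             # candidate feature
power = 0.9 * irradiance + rng.normal(0, 30, n)  # forecasting target

# 1) Pearson correlation screening: keep features with |r| above a threshold.
features = {"irradiance": irradiance, "temperature": temperature}
selected = {name: x for name, x in features.items()
            if abs(np.corrcoef(x, power)[0, 1]) > 0.5}

# 2) Outlier detection: the paper uses an isolation forest; as a simple
#    stand-in, flag points more than 3 standard deviations from the mean.
x = power.copy()
bad = np.abs(x - x.mean()) > 3 * x.std()

# 3) Repair flagged points by linear interpolation over their neighbors.
idx = np.arange(n)
x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])

# 4) Standardization (zero mean, unit variance) before model training.
x_std = (x - x.mean()) / x.std()
print(sorted(selected), round(float(x_std.mean()), 6))
```

With this toy data only the strongly correlated feature survives the screen, and the standardized series has zero mean and unit variance.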
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Metaverse Support Program to Nurture the Best Talents (IITP-2024-RS-2023-00254529) grant funded by the Korea government (MSIT).
Abstract: Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize segmentation performance, this research introduces a novel SwinUNETR-based model by integrating a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD decoder block uses hierarchical features and channel-specific attention mechanisms to fuse information at the different scales transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieves superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared to baseline models. In particular, ablation studies clarify the rationale and contribution of the model design and verify the effectiveness of the proposed HCAD decoder block. These results are expected to contribute to more efficient clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
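Channel-wise attention of the kind the HCAD block builds on can be illustrated with a squeeze-and-excitation-style gate: pool each channel to a scalar, pass it through a small bottleneck, and rescale the channels. This is a minimal NumPy sketch with random (hypothetical) weights, not the paper's decoder.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)   # global average pool -> (C,)
    hidden = np.maximum(0, w1 @ squeeze)         # channel reduction + ReLU
    scale = 1 / (1 + np.exp(-(w2 @ hidden)))     # sigmoid gate in (0, 1) per channel
    return feat * scale[:, None, None]           # reweight the channels

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 4, 4))     # toy feature map, C=8
w1 = rng.normal(size=(2, 8)) * 0.1    # hypothetical bottleneck weights (C -> C/4)
w2 = rng.normal(size=(8, 2)) * 0.1    # hypothetical expansion weights (C/4 -> C)
out = channel_attention(feat, w1, w2)
print(out.shape)
```

Because the gate is a sigmoid, every channel is attenuated by a factor strictly between 0 and 1, which is what lets the decoder emphasize informative channels during reconstruction.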
Funding: Supported by the National Natural Science Foundation of China (62466045), the Inner Mongolia Natural Science Foundation Project (2021LHMS06003), and the Inner Mongolia University Basic Research Business Fee Project (114).
Abstract: Graph Federated Learning (GFL) has shown great potential for privacy protection and distributed intelligence through collaborative training on graph-structured data without sharing raw information. However, existing GFL approaches often lack comprehensive feature extraction and adaptive optimization, particularly in non-independent and identically distributed (non-IID) scenarios, where balancing global structural understanding with local node-level detail remains a challenge. To this end, this paper proposes a novel framework called GFL-SAR (Graph Federated Collaborative Learning Framework Based on Structural Amplification and Attention Refinement), which enhances the representation learning capability of graph data through a dual-branch collaborative design. Specifically, we propose the Structural Insight Amplifier (SIA), which utilizes an improved Graph Convolutional Network (GCN) to strengthen structural awareness and improve modeling of topological patterns. In parallel, we propose the Attentive Relational Refiner (ARR), which employs an enhanced Graph Attention Network (GAT) to perform fine-grained modeling of node relationships and neighborhood features, thereby improving the expressiveness of local interactions and preserving critical contextual information. GFL-SAR integrates multi-scale features from both branches via feature fusion and federated optimization, thereby addressing existing GFL limitations in structural modeling and feature representation. Experiments on standard benchmark datasets, including Cora, Citeseer, Polblogs, and Cora_ML, demonstrate that GFL-SAR achieves superior classification accuracy, convergence speed, and robustness compared to existing methods, confirming its effectiveness and generalizability in GFL tasks.
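The GCN branch underlying SIA rests on the standard graph-convolution propagation rule, relu(D^(-1/2)(A+I)D^(-1/2) X W). A minimal NumPy sketch of one such layer on a toy 4-node path graph (hypothetical data, random weights; the paper's "improved GCN" is not reproduced here):

```python
import numpy as np

def gcn_layer(adj, x, w):
    """One standard GCN layer: relu(D^-1/2 (A+I) D^-1/2 @ X @ W)."""
    a_hat = adj + np.eye(adj.shape[0])                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric norm
    return np.maximum(0, a_norm @ x @ w)                      # propagate + ReLU

# Tiny 4-node path graph 0-1-2-3 (hypothetical, not Cora).
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # node features
w = rng.normal(size=(3, 2))   # layer weights
h = gcn_layer(adj, x, w)
print(h.shape)
```

Each node's new embedding mixes its own features with its neighbors', which is the structural-awareness signal the SIA branch amplifies.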
Abstract: Clock synchronization has important applications in multi-agent collaboration (such as drone light shows, intelligent transportation systems, and game AI), group decision-making, and emergency rescue operations. Synchronization methods based on pulse-coupled oscillators (PCOs) provide an effective solution for clock synchronization in wireless networks. However, existing clock synchronization algorithms in multi-agent ad hoc networks struggle to meet the high-precision and high-stability requirements of group cooperation. Hence, this paper constructs a network model, named DAUNet (unsupervised neural network based on dual attention), to enhance clock synchronization accuracy in multi-agent wireless ad hoc networks. Specifically, we design an unsupervised distributed neural network framework as the backbone, building upon classical PCO-based synchronization methods. This framework resolves issues such as prolonged time-synchronization message exchange between nodes, difficulties in centralized node coordination, and challenges in distributed training. Furthermore, we introduce a dual-attention mechanism as the core module of DAUNet. By integrating a Multi-Head Attention module and a Gated Attention module, the model significantly improves information extraction while reducing computational complexity, effectively mitigating synchronization inaccuracies and instability in multi-agent ad hoc networks. To evaluate the proposed model, comparative experiments and ablation studies were conducted against classical methods and existing deep learning models. The results show that, compared with deep learning networks based on DASA and LSTM, DAUNet reduces the mean normalized phase difference (NPD) by one to two orders of magnitude. Compared with attention models based on additive attention and self-attention mechanisms, DAUNet improves performance by more than tenfold. This study demonstrates DAUNet's potential in advancing multi-agent ad hoc networking technologies.
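The phase-convergence behavior that PCO-based synchronization exploits can be simulated compactly. True PCO dynamics are event-driven (oscillators fire and nudge each other at discrete pulses); as a self-contained stand-in, the sketch below uses a Kuramoto-style mean-field phase-coupling update with identical clock rates, which exhibits the same convergence to a common phase. All constants are illustrative, not from the paper.

```python
import math

def pco_sync(phases, coupling=2.0, dt=0.05, steps=500):
    """Mean-field phase coupling with identical clock rates: a compact
    Kuramoto-style stand-in for event-driven PCO synchronization."""
    n = len(phases)
    for _ in range(steps):
        # Each clock is pulled toward the average of the others.
        pulls = [sum(math.sin(2 * math.pi * (q - p)) for q in phases) / n
                 for p in phases]
        phases = [(p + dt + coupling * dt * g) % 1.0
                  for p, g in zip(phases, pulls)]
    return phases

def spread(phases):
    """Largest pairwise circular phase difference (phases live on [0, 1))."""
    return max(min(abs(a - b), 1.0 - abs(a - b))
               for a in phases for b in phases)

clocks = [0.00, 0.10, 0.20, 0.35]   # initial clock offsets (hypothetical)
final = pco_sync(clocks)
print(spread(clocks), spread(final))
```

The initial spread of 0.35 of a cycle collapses to near zero, which is the baseline behavior DAUNet is built on top of.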
Funding: Supported by the National Key Research and Development Program for Young Scientists, China (Grant No. 2021YFC2900400); the Sichuan-Chongqing Science and Technology Innovation Cooperation Program Project, China (Grant No. 2024TIAD-CYKJCXX0269); and the National Natural Science Foundation of China (Grant No. 52304123).
Abstract: Lithology identification while drilling can obtain rock information in real time. However, traditional lithology identification models often face limitations in feature extraction and adaptability to complex geological conditions, limiting their accuracy in challenging environments. To address these challenges, a deep learning model for lithology identification while drilling is proposed. The model introduces a dual attention mechanism into a long short-term memory (LSTM) network, effectively enhancing its ability to capture spatial- and channel-dimension information. The crayfish optimization algorithm (COA) is then applied to optimize the network structure, further enhancing lithology identification capability. Laboratory tests demonstrate that the proposed model achieves 97.15% accuracy on the testing set, significantly outperforming the traditional support vector machine (SVM) method (81.77%). Field tests under actual drilling conditions demonstrate an average accuracy of 91.96%, representing a 14.31% improvement over the LSTM model alone. The proposed model demonstrates robust adaptability and generalization across diverse operational scenarios. This research offers reliable technical support for lithology identification while drilling.
Abstract: Salient object detection (SOD) models struggle to simultaneously preserve global structure, maintain sharp object boundaries, and sustain computational efficiency in complex scenes. In this study, we propose SPSALNet, a task-driven two-stage (macro-micro) architecture that restructures the SOD process around superpixel representations. The approach follows a "split-and-enhance" principle, introduced to our knowledge for the first time in the SOD literature, which hierarchically classifies superpixels and then applies targeted refinement only to ambiguous or error-prone regions. At the macro stage, the image is partitioned into content-adaptive superpixel regions, and each superpixel is represented by a high-dimensional region-level feature vector. These representations define a regional decomposition problem in which superpixels are assigned to three classes: background, object interior, and transition regions. Superpixel tokens interact with a global feature vector from a deep network backbone through a cross-attention module and are projected into an enriched embedding space that jointly encodes local topology and global context. At the micro stage, the model employs a U-Net-based refinement process that allocates computational resources only to ambiguous transition regions. The image and the distance-similarity maps derived from superpixels are processed through a dual-encoder pathway. Channel-aware fusion blocks then adaptively combine information from these two sources, producing sharper and more stable object boundaries. Experimental results show that SPSALNet achieves high accuracy at lower computational cost than recent competing methods. On the PASCAL-S and DUT-OMRON datasets, SPSALNet exhibits a clear performance advantage across all key metrics, and it ranks first on accuracy-oriented measures on HKU-IS. On the challenging DUT-OMRON benchmark, SPSALNet reaches an MAE of 0.034. Across all datasets, it preserves object boundaries and regional structure in a stable and competitive manner.
Funding: Funded by the Science and Technology Research and Development Program Project of China Railway Group Limited (No. 2023-Major-02); the National Natural Science Foundation of China (Grant No. 52378200); and the Sichuan Science and Technology Program (Grant No. 2024NSFSC0017).
Abstract: Accurate wind speed prediction is crucial for stabilizing power grids with high wind energy penetration. This study presents a novel machine learning model that integrates clustering, deep learning, and transfer learning to mitigate accuracy degradation in 24-h forecasting. Initially, an optimized DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm clusters wind fields based on wind direction, probability density, and spectral features, enhancing physical interpretability and reducing training complexity. Subsequently, a ResNet (Residual Network) extracts multi-scale patterns from decomposed wind signals, while transfer learning adapts the backbone network across clusters, cutting training time by over 90%. Finally, a CBAM (Convolutional Block Attention Module) attention mechanism prioritizes features for LSTM-based prediction. Tested on the 2015 Jena wind speed dataset, the model demonstrates superior accuracy and robustness compared to state-of-the-art baselines. Key innovations include: (a) physics-informed clustering for interpretable wind regime classification; (b) transfer learning with deep feature extraction, preserving accuracy while minimizing training time; and (c) demonstrated generalization: on the 2016 Jena wind speed dataset, the model achieves MAPE (Mean Absolute Percentage Error) values of 16.82% and 18.02% for the Weibull-shaped and Gaussian-shaped wind speed clusters, respectively. This framework offers an efficient and effective solution for long-term wind forecasting.
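The clustering stage can be illustrated with a minimal pure-Python DBSCAN on 2-D toy features; the points below are hypothetical stand-ins for the paper's wind-field descriptors, and the paper's "optimized" variant is not reproduced.

```python
import math

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    n = len(points)
    labels = [None] * n
    neighbors = lambda i: [j for j in range(n)
                           if math.dist(points[i], points[j]) <= eps]
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # provisionally noise
            continue
        labels[i] = cluster                # i is a core point: start a cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:     # j is also core: keep expanding
                queue.extend(j_nbrs)
        cluster += 1
    return labels

# Two tight toy clusters plus one far-away noise point.
pts = [(0, 0), (0.3, 0), (0, 0.3), (0.2, 0.2),
       (5, 5), (5.3, 5), (5, 5.3), (5.2, 5.2),
       (20, 20)]
labels = dbscan(pts, eps=0.6, min_pts=3)
print(labels)
```

Unlike k-means, DBSCAN needs no preset cluster count and marks outliers explicitly, which is why it suits regime discovery in wind data.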
Funding: Supported by the research start-up funds for invited doctors of Lanzhou University of Technology under Grant 14/062402.
Abstract: In modern software development, characterized by increasing complexity and compressed development cycles, traditional static vulnerability detection methods face prominent challenges, including high false positive rates and missed detections of complex logic, due to their over-reliance on rule templates. This paper proposes a Syntax-Aware Hierarchical Attention Network (SAHAN) model, which achieves high-precision vulnerability detection through grammar-rule-driven multi-granularity code slicing and hierarchical semantic fusion. SAHAN first generates Syntax Independent Units (SIUs) by slicing the code based on the Abstract Syntax Tree (AST) and predefined grammar rules, retaining vulnerability-sensitive contexts. Then, through a hierarchical attention mechanism, a local syntax-aware layer encodes fine-grained patterns within SIUs, while a global semantic correlation layer captures vulnerability chains across SIUs, achieving synergistic modeling of syntax and semantics. Experiments show that on benchmark datasets such as QEMU, SAHAN improves detection performance by 4.8% to 13.1% on average compared to baseline models such as Devign and VulDeePecker.
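The idea of AST-based slicing into syntax units can be demonstrated with Python's `ast` module (on Python source for convenience; SAHAN targets C-family code such as QEMU). The sensitive-API list here is hypothetical, and this function-level slicing is a much-simplified stand-in for the paper's grammar-rule-driven SIU generation.

```python
import ast

# Hypothetical, illustrative list of vulnerability-sensitive call names.
SENSITIVE_CALLS = {"strcpy", "memcpy", "system", "eval", "exec"}

def syntax_units(source):
    """Slice source into function-level 'syntax units' and record which
    sensitive API calls each unit touches."""
    tree = ast.parse(source)
    units = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = {n.func.id for n in ast.walk(node)
                     if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
            units.append((node.name, sorted(calls & SENSITIVE_CALLS)))
    return units

code = """
def risky(cmd):
    return eval(cmd)

def safe(x):
    return x + 1
"""
print(syntax_units(code))
```

Each unit carries its vulnerability-sensitive context, which is the kind of input a local syntax-aware encoder would then consume.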
Abstract: Reliable traffic flow prediction is crucial for mitigating urban congestion. This paper proposes the Attention-based spatiotemporal Interactive Dynamic Graph Convolutional Network (AIDGCN), a novel architecture integrating an Interactive Dynamic Graph Convolution Network (IDGCN) with Temporal Multi-Head Trend-Aware Attention. Its core innovations lie in IDGCN, which splits sequences into symmetric intervals for interactive feature sharing via dynamic graphs, and a novel attention mechanism incorporating convolutional operations to capture essential local traffic trends, addressing a critical gap in standard attention for continuous data. For 15- and 60-min forecasting on METR-LA, AIDGCN achieves MAEs of 0.75% and 0.39%, and RMSEs of 1.32% and 0.14%, respectively. In 60-min long-term forecasting on the PEMS-BAY dataset, AIDGCN outperforms the MRA-BGCN method by 6.28%, 4.93%, and 7.17% in MAE, RMSE, and MAPE, respectively. Experimental results demonstrate the superiority of the proposed model over state-of-the-art methods.
Abstract: [Objective/Significance] Greenhouse temperature and humidity prediction suffers from low reliability in multi-sensor data fusion, traditional models that ignore the dynamic coupling between temperature and humidity, and hyperparameter tuning that relies on manual experience. [Methods] First, the traditional Kalman filter was improved by dynamically adjusting the process-noise and observation-noise covariances and allocating multi-sensor weights dynamically based on the innovation variance. Second, to address the strong temperature-humidity coupling and the need for coordinated control, a multi-output Long Short-Term Memory-Attention (LSTM-Attention) model was built for joint temperature and humidity prediction, with an attention mechanism adaptively weighting key environmental factors and the Grey Wolf Optimizer (GWO) automatically tuning the hyperparameters. [Results and Discussion] The proposed adaptive Kalman filter achieved mean absolute deviations of 1.59 °C and 8.64% in multi-point temperature and humidity fusion, 1.24% and 8.57% lower than the traditional Kalman filter, respectively. Using the fusion results as the training set, the model reached coefficients of determination (R²) of 98.2% and 99.3% for temperature and humidity prediction, improvements of 4.7% and 4.3% over the traditional Kalman filter. The GWO-LSTM-Attention model achieved root mean square errors of 0.7768 °C and 2.0564% for temperature and humidity prediction; compared with the LSTM and LSTM-Attention time-series models, the temperature error was reduced by 15.6% and 6.6%, and the humidity error by 29.2% and 5.7%, respectively. [Conclusions] The proposed adaptive Kalman fusion algorithm effectively suppresses the influence of outliers and enables reliable multi-sensor data fusion under non-stationary environmental changes. In greenhouse multi-factor prediction, the temperature and humidity predictions of the GWO-LSTM-Attention model can serve as an important reference for greenhouse environment control, enabling real-time regulation of the greenhouse environment.
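The Kalman-based multi-sensor fusion step can be sketched with a scalar filter that fuses two temperature sensors by inverse-variance weighting and crudely inflates the observation noise when the innovation is implausibly large. This is an illustrative sketch on simulated readings, with assumed noise constants, not the paper's adaptive algorithm.

```python
import random

random.seed(42)

true_temp = 25.0
q, r1, r2 = 1e-4, 0.25, 1.0      # process noise and per-sensor obs variances (assumed)
x, p = 20.0, 1.0                 # initial estimate and estimate covariance

for _ in range(200):
    # Simulated readings from two sensors observing the same temperature.
    z1 = true_temp + random.gauss(0, r1 ** 0.5)
    z2 = true_temp + random.gauss(0, r2 ** 0.5)

    # Inverse-variance fusion of the two sensors into one observation.
    w1, w2 = 1 / r1, 1 / r2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    r = 1 / (w1 + w2)

    # Predict (constant model), then update.
    p += q
    innov = z - x
    # Crude adaptivity: inflate R when the innovation is implausibly large,
    # so outliers are down-weighted instead of dragging the estimate.
    if innov ** 2 > 9 * (p + r):
        r *= 10
    k = p / (p + r)
    x += k * innov
    p *= (1 - k)

print(round(x, 2))
```

After a couple hundred updates, the fused estimate settles close to the true temperature with a small residual covariance, despite starting 5 °C off.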