Journal Articles
30,218 articles found
DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS (Cited: 1)
Authors: WANG Chen-yao, SHI Feng. 《数学杂志》 (Journal of Mathematics), 2025, Issue 1, pp. 13-38.
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: delay integro-differential equation; multi-task learning; parameter sharing structure; deep neural network; sequential training scheme
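The delay-splitting idea in the abstract above can be sketched as a combined loss: each sub-interval created by the delay is a task with its own residual, and continuity at the delay-induced breaking points enters as penalty terms. The function names and weights below are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of a multi-task loss with breaking-point penalties.
# Each task is one delay-induced sub-interval; "jumps" are the mismatches
# of the network outputs across the breaking points.

def mtl_loss(task_residuals, breakpoint_jumps, w_res=1.0, w_bp=10.0):
    """Combine per-task equation residuals with continuity penalties."""
    residual_term = sum(r ** 2 for r in task_residuals)
    continuity_term = sum(j ** 2 for j in breakpoint_jumps)
    return w_res * residual_term + w_bp * continuity_term

# Example: three tasks (sub-intervals), two interior breaking points.
loss = mtl_loss([0.1, 0.05, 0.2], [0.01, 0.02])
```

In practice each residual would come from automatic differentiation of the network on collocation points; the scalar form here only shows how the terms are weighted and summed.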
A Novel Self-Supervised Learning Network for Binocular Disparity Estimation (Cited: 1)
Authors: Jiawei Tian, Yu Zhou, Xiaobing Chen, Salman A. AlQahtani, Hongrong Chen, Bo Yang, Siyu Lu, Wenfeng Zheng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 209-229.
Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interferences. This study introduces a Pseudo-Siamese structure-based disparity regression model that simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate a significant improvement in the network's resistance to lighting interference without substantially increasing parameters. Moreover, the model exhibited faster convergence during training, contributing to overall performance enhancement. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
Keywords: parallax estimation; parallax regression model; self-supervised learning; Pseudo-Siamese neural network; pyramid dilated convolution; binocular disparity estimation
A Survey of Cooperative Multi-agent Reinforcement Learning for Multi-task Scenarios (Cited: 1)
Authors: Jiajun CHAI, Zijie ZHAO, Yuanheng ZHU, Dongbin ZHAO. Artificial Intelligence Science and Engineering, 2025, Issue 2, pp. 98-121.
Cooperative multi-agent reinforcement learning (MARL) is a key technology for enabling cooperation in complex multi-agent systems. It has achieved remarkable progress in areas such as gaming, autonomous driving, and multi-robot control. Empowering cooperative MARL with multi-task decision-making capabilities is expected to further broaden its application scope. In multi-task scenarios, cooperative MARL algorithms need to address three types of multi-task problems: reward-related multi-task problems, arising from different reward functions; multi-domain multi-task problems, caused by differences in state and action spaces and state transition functions; and scalability-related multi-task problems, resulting from dynamic variation in the number of agents. Most existing studies focus on scalability-related multi-task problems. However, with the increasing integration between large language models (LLMs) and multi-agent systems, a growing number of LLM-based multi-agent systems have emerged, enabling more complex multi-task cooperation. This paper provides a comprehensive review of the latest advances in this field. By combining multi-task reinforcement learning with cooperative MARL, we categorize and analyze the three major types of multi-task problems under multi-agent settings, offering finer-grained classifications and summarizing key insights for each. In addition, we summarize commonly used benchmarks and discuss future directions of research in this area, which hold promise for further enhancing the multi-task cooperation capabilities of multi-agent systems and expanding their practical applications in the real world.
Keywords: multi-task; multi-agent reinforcement learning; large language models
Multi-QoS routing algorithm based on reinforcement learning for LEO satellite networks (Cited: 1)
Authors: ZHANG Yifan, DONG Tao, LIU Zhihui, JIN Shichao. Journal of Systems Engineering and Electronics, 2025, Issue 1, pp. 37-47.
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and dynamic network topology, which have brought many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks, which prioritizes services with high assurance demands under limited satellite resources while considering the load-balancing performance of the network for services with low assurance demands, ensuring the full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
Keywords: low Earth orbit (LEO) satellite network; reinforcement learning; multi-quality of service (QoS); routing algorithm
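A tabular Q-learning update of the kind such routing agents perform can be sketched as follows; the toy topology, reward, and hyperparameters are assumptions for illustration, not the paper's algorithm.

```python
# Minimal tabular Q-learning for next-hop selection. Q maps each node
# (state) to a dict of candidate next hops (actions) and their values.

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Bellman update: Q(s,a) += alpha * (r + gamma*max Q(s',.) - Q(s,a))."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]

# Toy topology: satellite A can forward to B or C; the reward would mix
# QoS terms (delay, load) in a real agent.
Q = {"A": {"B": 0.0, "C": 0.0}, "B": {"D": 1.0}, "C": {"D": 0.5}}
q_update(Q, "A", "B", reward=0.2, next_state="B")
```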
Explainable AI Based Multi-Task Learning Method for Stroke Prognosis
Authors: Nan Ding, Xingyu Zeng, Jianping Wu, Liutao Zhao. Computers, Materials & Continua, 2025, Issue 9, pp. 5299-5315.
Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients' health status have made significant progress, limitations remain in terms of prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on Explainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that exploits the correlation among the tasks of predicting various health status indicators, enabling the parallel prediction of multiple health indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance the performance of multi-task AUC optimization. Additionally, the model's explainability is analyzed through the stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms in the key prognostic metrics of F1 score and efficiency.
Keywords: Explainable AI; stroke prognosis; multi-task learning; AUC optimization
Joint Retrieval of PM_(2.5) Concentration and Aerosol Optical Depth over China Using Multi-Task Learning on FY-4A AGRI
Authors: Bo LI, Disong FU, Ling YANG, Xuehua FAN, Dazhi YANG, Hongrong SHI, Xiang’ao XIA. Advances in Atmospheric Sciences, 2025, Issue 1, pp. 94-110.
Aerosol optical depth (AOD) and fine particulate matter with a diameter less than or equal to 2.5 μm (PM_(2.5)) play crucial roles in air quality, human health, and climate change. However, the complex correlation of AOD–PM_(2.5) and the limitations of existing algorithms pose a significant challenge to the accurate joint retrieval of these two parameters at the same location. On this point, a multi-task learning (MTL) model, which enables the joint retrieval of PM_(2.5) concentration and AOD, is proposed and applied to the top-of-the-atmosphere reflectance data gathered by the Fengyun-4A Advanced Geosynchronous Radiation Imager (FY-4A AGRI), and compared with two single-task learning models, namely Random Forest (RF) and Deep Neural Network (DNN). Specifically, MTL achieves a coefficient of determination (R^(2)) of 0.88 and a root-mean-square error (RMSE) of 0.10 in AOD retrieval. In comparison to RF, the R^(2) increases by 0.04, the RMSE decreases by 0.02, and the percentage of retrieval results falling within the expected error range (Within-EE) rises by 5.55%. The R^(2) and RMSE of PM_(2.5) retrieval by MTL are 0.84 and 13.76 μg m^(-3), respectively. Compared with RF, the R^(2) increases by 0.06, the RMSE decreases by 4.55 μg m^(-3), and the Within-EE increases by 7.28%. Additionally, compared to DNN, MTL shows an increase of 0.01 in R^(2) and a decrease of 0.02 in RMSE in AOD retrieval, with a corresponding increase of 2.89% in Within-EE. For PM_(2.5) retrieval, MTL exhibits an increase of 0.05 in R^(2), a decrease of 1.76 μg m^(-3) in RMSE, and an increase of 6.83% in Within-EE. The evaluation suggests that MTL is able to provide simultaneously improved AOD and PM_(2.5) retrievals, demonstrating a significant advantage in efficiently capturing the spatial distribution of PM_(2.5) concentration and AOD.
Keywords: AOD; PM_(2.5); FY-4A; multi-task learning; joint retrieval
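The R^(2) and RMSE metrics reported throughout the abstract follow their standard definitions, which can be computed as below on toy values (not the paper's data).

```python
import math

# Standard coefficient of determination (R^2) and root-mean-square error.

def r2_rmse(y_true, y_pred):
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)

r2, rmse = r2_rmse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```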
Skillful bias correction of offshore near-surface wind field forecasting based on a multi-task machine learning model
Authors: Qiyang Liu, Anboyu Guo, Fengxue Qiao, Xinjian Ma, Yan-An Liu, Yong Huang, Rui Wang, Chunyan Sheng. Atmospheric and Oceanic Science Letters, 2025, Issue 5, pp. 28-35.
Accurate short-term forecasting of offshore wind fields remains challenging for numerical weather prediction models. Based on three years of 48-hour forecast data from the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System global model (ECMWF-IFS) over 14 offshore weather stations along the coast of Shandong Province, this study introduces a multi-task learning (MTL) model (TabNet-MTL), which significantly improves the forecast bias of near-surface wind direction and speed simultaneously. TabNet-MTL adopts a feature engineering method, utilizes mean square error as the loss function, and employs five-fold cross-validation to ensure the generalization ability of the trained model. It demonstrates superior skill in wind field correction across different forecast lead times over all stations compared to its single-task version (TabNet-STL) and three other popular single-task learning models (Random Forest, LightGBM, and XGBoost). Results show that it significantly reduces the root mean square error of the ECMWF-IFS wind speed forecast from 2.20 to 1.25 m s^(−1) and increases the forecast accuracy of wind direction from 50% to 65%. As an explainable deep learning model, the weather stations and long-term temporal statistics of near-surface wind speed are identified as the most influential variables for TabNet-MTL in constructing its feature engineering.
Keywords: forecast bias correction; wind field; multi-task learning; feature engineering; explainable AI
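The five-fold cross-validation mentioned in the abstract can be sketched in plain Python; the contiguous fold assignment here is a generic illustration, not the paper's pipeline.

```python
# Generate (train, validation) index splits for k-fold cross-validation.
# Each sample lands in exactly one validation fold.

def kfold_indices(n_samples, k=5):
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, val
        start += size

folds = list(kfold_indices(10, k=5))
```

A model would be trained k times, once per split, and the validation scores averaged to estimate generalization.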
MAMGBR: Group-Buying Recommendation Model Based on Multi-Head Attention Mechanism and Multi-Task Learning
Authors: Zongzhe Xu, Ming Yu. Computers, Materials & Continua, 2025, Issue 8, pp. 2805-2826.
As the group-buying model shows significant progress in attracting new users, enhancing user engagement, and increasing platform profitability, providing personalized recommendations for group-buying users has emerged as a new challenge in the field of recommendation systems. This paper introduces a group-buying recommendation model based on multi-head attention mechanisms and multi-task learning, termed the Multi-head Attention Mechanisms and Multi-task Learning Group-Buying Recommendation (MAMGBR) model, specifically designed to optimize group-buying recommendations on e-commerce platforms. The core dataset of this study comes from the Chinese maternal and infant e-commerce platform "Beibei", encompassing approximately 430,000 successful group-buying actions and over 120,000 users. The model focuses on two main tasks: recommending items for group organizers (Task I) and recommending participants for a given group-buying event (Task II). In model evaluation, MAMGBR achieves an MRR@10 of 0.7696 for Task I, marking a 20.23% improvement over baseline models. Furthermore, in Task II, where complex interaction patterns prevail, MAMGBR utilizes auxiliary loss functions to effectively model the multifaceted roles of users, items, and participants, leading to a 24.08% increase in MRR@100 under a 1:99 sample ratio. Experimental results show that, compared to benchmark models such as NGCF and EATNN, MAMGBR's integration of multi-head attention mechanisms, expert networks, and gating mechanisms enables more accurate modeling of user preferences and social associations within group-buying scenarios, significantly enhancing recommendation accuracy and platform group-buying success rates.
Keywords: group-buying recommendation; multi-head attention mechanism; multi-task learning
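The building block of the multi-head mechanism named in the title is scaled dot-product attention; a minimal single-query version is sketched below with toy dimensions and values, as an illustration rather than the MAMGBR implementation.

```python
import math

# Single-query scaled dot-product attention: score each key against the
# query, softmax the scores, and return the weighted sum of the values.

def attention(query, keys, values):
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                         # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Query aligned with the first key, so its value dominates the output.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])
```

A multi-head version runs several such attentions on learned projections of the inputs and concatenates the results.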
Secure Malicious Node Detection in Decentralized Healthcare Networks Using Cloud and Edge Computing with Blockchain-Enabled Federated Learning
Authors: Raj Sonani, Reham Alhejaili, Pushpalika Chatterjee, Khalid Hamad Alnafisah, Jehad Ali. Computer Modeling in Engineering & Sciences, 2025, Issue 9, pp. 3169-3189.
Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as secure communication issues, privacy concerns, and the presence of malicious nodes. Existing machine and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND). It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
Keywords: authentication; blockchain; deep learning; federated learning; healthcare network; machine learning; wearable sensor nodes
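The "share only model updates" step described in the abstract is commonly realized by federated averaging; the sketch below shows that aggregation step. Weighting by local sample counts is a standard FedAvg assumption, not a detail taken from this paper.

```python
# Server-side federated averaging: combine client parameter vectors into a
# global model, weighted by how much local data each client trained on.

def fed_avg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (lists of floats)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clusters; the second holds three times as much local data.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
```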
MolP-PC: a multi-view fusion and multi-task learning framework for drug ADMET property prediction
Authors: Sishu Li, Jing Fan, Haiyang He, Ruifeng Zhou, Jun Liao. Chinese Journal of Natural Medicines, 2025, Issue 11, pp. 1293-1300.
The accurate prediction of drug absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties represents a crucial step in early drug development for reducing failure risk. Current deep learning approaches face challenges with data sparsity and information loss due to single-molecule representation limitations and isolated predictive tasks. This research proposes molecular properties prediction with parallel-view and collaborative learning (MolP-PC), a multi-view fusion and multi-task deep learning framework that integrates 1D molecular fingerprints (MFs), 2D molecular graphs, and 3D geometric representations, incorporating an attention-gated fusion mechanism and a multi-task adaptive learning strategy for precise ADMET property prediction. Experimental results demonstrate that MolP-PC achieves optimal performance in 27 of 54 tasks, with its multi-task learning (MTL) mechanism significantly enhancing predictive performance on small-scale datasets and surpassing single-task models in 41 of 54 tasks. Additional ablation studies and interpretability analyses confirm the significance of multi-view fusion in capturing multi-dimensional molecular information and enhancing model generalization. A case study examining the anticancer compound Oroxylin A demonstrates MolP-PC's effective generalization in predicting key pharmacokinetic parameters such as half-life (T0.5) and clearance (CL), indicating its practical utility in drug modeling. However, the model exhibits a tendency to underestimate volume of distribution (VD), indicating potential for improvement in analyzing compounds with high tissue distribution. This study presents an efficient and interpretable approach for ADMET property prediction, establishing a novel framework for molecular optimization and risk assessment in drug development.
Keywords: molecular ADMET prediction; multi-view fusion; attention mechanism; multi-task deep learning
An Ensembled Multi-Layer Automatic-Constructed Weighted Online Broad Learning System for Fault Detection in Cellular Networks
Authors: Wang Qi, Pan Zhiwen, Liu Nan, You Xiaohu. China Communications, 2025, Issue 8, pp. 150-167.
6G is expected to support more intelligent networks, and this trend attaches importance to the self-healing capability when degradation emerges in cellular networks. As a primary component of self-healing networks, fault detection is investigated in this paper. Considering its fast response and low time and computational consumption, the Online Broad Learning System (OBLS) is applied for the first time to identify outages in cellular networks. In addition, the Automatic-constructed Online Broad Learning System (AOBLS) is put forward to rationalize its structure and consequently avoid over-fitting and under-fitting. Furthermore, a multi-layer classification structure is proposed to further improve classification performance. To face the challenges caused by imbalanced data in fault detection problems, a novel weighting strategy is derived to achieve the Multi-layer Automatic-constructed Weighted Online Broad Learning System (MAWOBLS) and ensemble learning with a retrained Support Vector Machine (SVM), denoted as EMAWOBLS, for superior treatment of this imbalance issue. Simulation results show that the proposed algorithm has excellent performance in detecting faults with satisfactory time usage.
Keywords: broad learning system (BLS); cell outage detection; cellular network fault detection; ensemble learning; imbalanced classification; online broad learning system (OBLS); self-healing network; weighted broad learning system (WBLS)
When Communication Networks Meet Federated Learning for Intelligence Interconnecting:A Comprehensive Survey and Future Perspective
Authors: Sha Zongxuan, Huo Ru, Sun Chuang, Wang Shuo, Huang Tao, F. Richard Yu. China Communications, 2025, Issue 7, pp. 74-94.
With the rapid development of network technologies, a large number of deployed edge devices and information systems generate massive amounts of data, which provide good support for the advancement of data-driven intelligent models. However, these data often contain sensitive user information. Federated learning (FL), as a privacy-preserving machine learning setting, allows users to obtain a well-trained model without sending privacy-sensitive local data to a central server. Despite the promising prospects of FL, several significant research challenges need to be addressed before widespread deployment, including network resource allocation, model security, and model convergence. In this paper, we first provide a brief survey of work that has been done on FL and discuss the motivations for Communication Networks (CNs) and FL to mutually enable each other. We analyze the support of network technologies for FL, which requires frequent communication and emphasizes security, as well as studies on the intelligence of many network scenarios and the improvement of network performance and security by FL-based methods. Finally, some challenges and broader perspectives are explored.
Keywords: communication networks; federated learning; intelligence interconnecting; machine learning; privacy preservation
APFed: Adaptive personalized federated learning for intrusion detection in maritime meteorological sensor networks
Authors: Xin Su, Guifu Zhang. Digital Communications and Networks, 2025, Issue 2, pp. 401-411.
With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks (MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence (AI) intrusion detection techniques turn intrusion detection into a classification problem, where AI excels. These techniques assume sufficient high-quality instances for model construction, which is often unsatisfactory for real-world operation with limited attack instances and constantly evolving characteristics. This paper proposes an Adaptive Personalized Federated learning (APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed (Non-IID) data are mitigated, enabling the intrusion detection model to possess personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks with effective adaptation to the MMSN environment. Finally, extensive experiments on a classical network dataset show that the attack classification accuracy is improved by about 5% compared to most baselines in the global scenarios.
Keywords: intrusion detection; maritime meteorological sensor network; federated learning; personalized model; deep learning
Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. Digital Communications and Networks, 2025, Issue 1, pp. 60-70.
Container-based virtualization technology has recently been more widely used in edge computing environments due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of a CC on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the multiple containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: edge computing; network virtualization; container cluster; deep reinforcement learning; graph convolutional network
A multi-task learning method for blast furnace gas forecasting based on coupling correlation analysis and inverted transformer
Authors: Sheng Xie, Jing-shu Zhang, Da-tao Shi, Yang Guo, Qi Zhang. Journal of Iron and Steel Research International, 2025, Issue 10, pp. 3280-3297.
Accurate forecasting of blast furnace gas (BFG) production is an essential prerequisite for reasonable energy scheduling and management to reduce carbon emissions. The coupled forecasting of BFG generation and consumption dynamics was taken as the research object. A multi-task learning (MTL) method for BFG forecasting was proposed, which integrates a coupling correlation coefficient (CCC) and an inverted transformer structure. The CCC method enhances key information extraction by establishing relationships between multiple prediction targets and relevant factors, while MTL effectively captures the inherent correlations between BFG generation and consumption. Finally, a real-world case study was conducted to compare the proposed model with four benchmark models. Results indicated a significant reduction in average mean absolute percentage error of 33.37%, achieving 1.92%, with a computational time of 76 s. A sensitivity analysis of hyperparameters such as the learning rate, batch size, and number of units in the long short-term memory layer highlights the importance of hyperparameter tuning.
Keywords: byproduct gases forecasting; coupling correlation coefficient; multi-task learning; inverted transformer; bi-directional long short-term memory; blast furnace gas
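The abstract does not specify the coupling correlation coefficient (CCC); as a hedged stand-in, the sketch below computes the plain Pearson correlation that such target-factor screening typically builds on.

```python
import math

# Pearson correlation between a prediction target and a candidate factor.
# This is a generic building block, NOT the paper's CCC definition.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # perfectly linearly related
```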
Addressing Modern Cybersecurity Challenges: A Hybrid Machine Learning and Deep Learning Approach for Network Intrusion Detection
Authors: Khadija Bouzaachane, El Mahdi El Guarmah, Abdullah M. Alnajim, Sheroz Khan. Computers, Materials & Continua, 2025, Issue 8, pp. 2391-2410.
The rapid increase in the number of Internet of Things(IoT)devices,coupled with a rise in sophisticated cyberattacks,demands robust intrusion detection systems.This study presents a holistic,intelligent intrusion dete... The rapid increase in the number of Internet of Things(IoT)devices,coupled with a rise in sophisticated cyberattacks,demands robust intrusion detection systems.This study presents a holistic,intelligent intrusion detection system.It uses a combined method that integrates machine learning(ML)and deep learning(DL)techniques to improve the protection of contemporary information technology(IT)systems.Unlike traditional signature-based or singlemodel methods,this system integrates the strengths of ensemble learning for binary classification and deep learning for multi-class classification.This combination provides a more nuanced and adaptable defense.The research utilizes the NF-UQ-NIDS-v2 dataset,a recent,comprehensive benchmark for evaluating network intrusion detection systems(NIDS).Our methodological framework employs advanced artificial intelligence techniques.Specifically,we use ensemble learning algorithms(Random Forest,Gradient Boosting,AdaBoost,and XGBoost)for binary classification.Deep learning architectures are also employed to address the complexities of multi-class classification,allowing for fine-grained identification of intrusion types.To mitigate class imbalance,a common problem in multi-class intrusion detection that biases model performance,we use oversampling and data augmentation.These techniques ensure equitable class representation.The results demonstrate the efficacy of the proposed hybrid ML-DL system.It achieves significant improvements in intrusion detection accuracy and reliability.This research contributes substantively to cybersecurity by providing a more robust and adaptable intrusion detection solution. 展开更多
Keywords: network intrusion detection systems (NIDS); NF-UQ-NIDS-v2 dataset; ensemble learning; decision tree; K-means; SMOTE; deep learning
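The class-balancing and ensemble-voting steps described in this abstract can be sketched in miniature. This is an illustrative stand-in, not the authors' code: plain random duplication replaces SMOTE's synthetic interpolation, and `majority_vote` is a hypothetical combiner for the binary predictions of the four ensemble models.

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Balance a binary dataset by duplicating minority-class samples
    until both classes have equal counts. A simple stand-in for SMOTE,
    which interpolates new samples instead of duplicating existing ones."""
    rng = random.Random(seed)
    counts = Counter(y)
    (maj, n_maj), (mino, n_min) = counts.most_common(2)
    minority_idx = [i for i, label in enumerate(y) if label == mino]
    X_bal, y_bal = list(X), list(y)
    for _ in range(n_maj - n_min):
        i = rng.choice(minority_idx)
        X_bal.append(X[i])
        y_bal.append(mino)
    return X_bal, y_bal

def majority_vote(predictions):
    """Combine per-model binary predictions (rows = models, columns =
    samples) into one ensemble label per sample by majority vote."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]
```

In practice the balanced data would be fed to the four classifiers and their per-sample outputs stacked into `predictions`; here the functions are deliberately model-agnostic.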
Deep Q-Learning Driven Protocol for Enhanced Border Surveillance with Extended Wireless Sensor Network Lifespan
17
Authors: Nimisha Rajput, Amit Kumar, Raghavendra Pal, Nishu Gupta, Mikko Uitto, Jukka Mäkelä. Computer Modeling in Engineering & Sciences, 2025, Issue 6, pp. 3839-3859 (21 pages)
Wireless Sensor Networks (WSNs) play a critical role in automated border surveillance systems, where continuous monitoring is essential. However, limited energy resources in sensor nodes lead to frequent network failures and reduced coverage over time. To address this issue, this paper presents an innovative energy-efficient protocol based on deep Q-learning (DQN), specifically developed to prolong the operational lifespan of WSNs used in border surveillance. By harnessing the adaptive power of DQN, the proposed protocol dynamically adjusts node activity and communication patterns. This approach ensures optimal energy usage while maintaining high coverage, connectivity, and data accuracy. The proposed system is modeled with 100 sensor nodes deployed over a 1000 m × 1000 m area, featuring a strategically positioned sink node. Our method outperforms traditional approaches, achieving significant enhancements in network lifetime and energy utilization. Through extensive simulations, it is observed that network lifetime increases by 9.75%, throughput increases by 8.85%, and average delay decreases by 9.45% in comparison to similar recent protocols. This demonstrates the robustness and efficiency of our protocol in real-world scenarios, highlighting its potential to revolutionize border surveillance operations.
Keywords: wireless sensor networks (WSNs); energy efficiency; reinforcement learning; network lifetime; dynamic node management; autonomous surveillance
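The DQN-driven duty-cycling idea can be illustrated with a tabular Q-learning toy. This is a sketch under assumptions, not the paper's protocol: a lookup table replaces the deep network, and the two node modes and the state names are invented for illustration.

```python
import random

# Toy tabular Q-learning for a sensor node deciding SLEEP vs ACTIVE.
ACTIONS = ("sleep", "active")

def choose_action(Q, state, epsilon, rng):
    """Epsilon-greedy policy over the two node modes."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard one-step Q-learning backup:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

In the full DQN setting, the table lookup becomes a neural-network forward pass and the reward would encode coverage, connectivity, and residual energy rather than a single scalar.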
Machine Learning Enabled Reusable Adhesion, Entangled Network-Based Hydrogel for Long-Term, High-Fidelity EEG Recording and Attention Assessment
18
Authors: Kai Zheng, Chengcheng Zheng, Lixian Zhu, Bihai Yang, Xiaokun Jin, Su Wang, Zikai Song, Jingyu Liu, Yan Xiong, Fuze Tian, Ran Cai, Bin Hu. Nano-Micro Letters, 2025, Issue 11, pp. 514-529 (16 pages)
Due to their high mechanical compliance and excellent biocompatibility, conductive hydrogels exhibit significant potential for applications in flexible electronics. However, as the demand for high sensitivity, superior mechanical properties, and strong adhesion performance continues to grow, many conventional fabrication methods remain complex and costly. Herein, we propose a simple and efficient strategy to construct an entangled network hydrogel through a liquid-metal-induced cross-linking reaction. The resulting hydrogel demonstrates outstanding properties, including exceptional stretchability (1643%), high tensile strength (366.54 kPa), toughness (350.2 kJ m^(−3)), and relatively low mechanical hysteresis. The hydrogel exhibits long-term stable, reusable adhesion (104 kPa), enabling conformal and stable adhesion to human skin. This capability allows it to effectively capture high-quality epidermal electrophysiological signals with a high signal-to-noise ratio (25.2 dB) and low impedance (310 ohms). Furthermore, by integrating advanced machine learning algorithms, the system achieves an attention classification accuracy of 91.38%, which will significantly impact fields like education, healthcare, and artificial intelligence.
Keywords: entangled network; reusable adhesion; epidermal sensor; machine learning; attention assessment
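The reported 25.2 dB signal-to-noise ratio follows the standard power-ratio definition, SNR = 10 · log10(P_signal / P_noise). A minimal sketch of that computation (not from the paper; the function name and mean-square power estimate are illustrative assumptions):

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from two sample sequences,
    estimating power as the mean of squared samples."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10.0 * math.log10(p_sig / p_noise)
```

For a real EEG pipeline the noise segment would typically be an electrode-off or rest baseline recording rather than a separate sequence supplied by hand.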
Graph Neural Networks Empowered Origin-Destination Learning for Urban Traffic Prediction
19
Authors: Chuanting Zhang, Guoqing Ma, Liang Zhang, Basem Shihada. CAAI Transactions on Intelligence Technology, 2025, Issue 4, pp. 1062-1076 (15 pages)
Urban traffic prediction with high precision is the unremitting pursuit of intelligent transportation systems and is instrumental in bringing smart cities into reality. The fundamental challenges for traffic prediction lie in the accurate modelling of spatial and temporal traffic dynamics. Existing approaches mainly focus on modelling the traffic data itself, but do not explore the traffic correlations implicit in origin-destination (OD) data. In this paper, we propose STOD-Net, a dynamic spatial-temporal OD feature-enhanced deep network, to simultaneously predict the in-traffic and out-traffic for every region of a city. We model the OD data as dynamic graphs and adopt graph neural networks in STOD-Net to learn a low-dimensional representation for each region. Based on the region feature, we design a gating mechanism and apply it to the traffic feature learning to explicitly capture spatial correlations. To further capture the complicated spatial and temporal dependencies among different regions, we propose a novel joint feature-learning block in STOD-Net and transfer the hybrid OD features to each block to make the learning process spatiotemporal-aware. We evaluate the effectiveness of STOD-Net on two benchmark datasets, and experimental results demonstrate that it outperforms the state of the art by approximately 5% in prediction accuracy and improves prediction stability by up to 80% in terms of standard deviation.
Keywords: deep neural networks; origin-destination learning; spatial-temporal modeling; traffic prediction
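The OD-graph aggregation and gating described in the abstract can be sketched in miniature. This is an illustrative toy, not STOD-Net itself: `od_aggregate` performs one round of flow-weighted message passing over an OD graph, and `gate` applies the sigmoid-gating idea element-wise; both names and the data layout are assumptions.

```python
import math

def od_aggregate(od_flows, features):
    """One round of OD-graph message passing: each region's embedding is
    the flow-weighted average of its in-neighbours' features.
    od_flows maps (origin, dest) -> trip count; features maps
    region -> feature vector (list of floats)."""
    dim = len(next(iter(features.values())))
    agg = {r: [0.0] * dim for r in features}
    total = {r: 0.0 for r in features}
    for (o, d), w in od_flows.items():
        for k in range(dim):
            agg[d][k] += w * features[o][k]
        total[d] += w
    return {r: [v / total[r] if total[r] else 0.0 for v in vec]
            for r, vec in agg.items()}

def gate(traffic_feat, region_embed):
    """Sigmoid gate: modulate traffic features element-wise by the
    learned region embedding, as in the gating mechanism above."""
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [t * sig(e) for t, e in zip(traffic_feat, region_embed)]
```

In the full model the aggregation weights and gate parameters are learned end to end; here they are fixed by the raw OD counts to keep the sketch self-contained.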
BDS-3 Satellite Orbit Prediction Method Based on Ensemble Learning and Neural Networks
20
Authors: Ruibo Wei, Yao Kong, Mengzhao Li, Feng Liu, Fang Cheng. Computers, Materials & Continua, 2025, Issue 7, pp. 1507-1528 (22 pages)
To address uncertainties in satellite orbit error prediction, this study proposes a novel ensemble learning-based orbit prediction method specifically designed for the BeiDou navigation satellite system (BDS). Building on ephemeris data and perturbation corrections, two new models are proposed: attention-enhanced BPNN (AEBP) and Transformer-ResNet-BiLSTM (TR-BiLSTM). These models effectively capture both local and global dependencies in satellite orbit data. To further enhance prediction accuracy and stability, the outputs of these two models were integrated using the gradient boosting decision tree (GBDT) ensemble learning method, which was optimized through a grid search. The main contribution of this approach is the synergistic combination of deep learning models and GBDT, which significantly improves both the accuracy and robustness of satellite orbit predictions. The model was validated using broadcast ephemeris data from the BDS-3 MEO and inclined geosynchronous orbit (IGSO) satellites. The results show that the proposed method achieves an error correction rate of 65.4%. This ensemble learning-based approach offers a highly effective solution for high-precision and stable satellite orbit predictions.
Keywords: BDS; satellite orbit; ensemble learning; neural networks; orbit error
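The grid-searched combination of the two model outputs can be illustrated with a convex blend. This is a deliberate simplification, not the paper's method: the abstract stacks AEBP and TR-BiLSTM predictions with GBDT, whereas this sketch searches a single blend weight by mean-squared error over a grid.

```python
def blend_weight_grid_search(pred_a, pred_b, truth, steps=100):
    """Find w in [0, 1] minimising the MSE of the blended prediction
    w * pred_a + (1 - w) * pred_b against the ground truth, by
    evaluating steps+1 equally spaced candidate weights."""
    def mse(w):
        return sum((w * a + (1 - w) * b - t) ** 2
                   for a, b, t in zip(pred_a, pred_b, truth)) / len(truth)
    best_w = min((i / steps for i in range(steps + 1)), key=mse)
    return best_w, mse(best_w)
```

A GBDT stacker generalises this idea: instead of one global weight, it learns a nonlinear, input-dependent combination of the two base predictions.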