Journal Articles
30,770 articles found
1. DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS (Cited: 1)
Authors: WANG Chen-yao, SHI Feng. 《数学杂志》, 2025, Issue 1, pp. 13-38.
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter sharing structures in MTL, and compare the testing results of these structures. Finally, this method is implemented to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: Delay integro-differential equation; multi-task learning; parameter sharing structure; deep neural network; sequential training scheme
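The abstract above hinges on hard parameter sharing: several tasks (here, the sub-problems created by splitting the equation at its delay points) share a common network body while keeping task-specific output heads, and their losses are summed into one training objective. The sketch below is a generic NumPy illustration of that pattern, not the paper's architecture; the layer sizes, task names, and loss weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone weights, reused by every task (hard parameter sharing).
W_shared = rng.normal(size=(4, 8))
# Task-specific output heads, one per sub-problem.
heads = {"task_a": rng.normal(size=(8, 1)),
         "task_b": rng.normal(size=(8, 1))}

def forward(x, task):
    h = np.tanh(x @ W_shared)   # shared hidden representation
    return h @ heads[task]      # task-specific prediction

def multi_task_loss(x, targets, weights):
    # Weighted sum of per-task mean-squared errors: the joint MTL objective
    # that a single optimizer minimizes over all tasks at once.
    return sum(weights[t] * np.mean((forward(x, t) - y) ** 2)
               for t, y in targets.items())

x = rng.normal(size=(16, 4))
targets = {"task_a": rng.normal(size=(16, 1)),
           "task_b": rng.normal(size=(16, 1))}
loss = multi_task_loss(x, targets, {"task_a": 1.0, "task_b": 0.5})
```

Gradient steps on `loss` update `W_shared` with signal from every task, which is what lets auxiliary outputs (such as integral terms or breakpoint conditions) regularize the main solution.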
2. A Novel Self-Supervised Learning Network for Binocular Disparity Estimation (Cited: 1)
Authors: Jiawei Tian, Yu Zhou, Xiaobing Chen, Salman A. AlQahtani, Hongrong Chen, Bo Yang, Siyu Lu, Wenfeng Zheng. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2025, Issue 1, pp. 209-229.
Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interferences. This study introduces a Pseudo-Siamese structure-based disparity regression model that simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate a significant improvement in the network's resistance to lighting interference without substantially increasing parameters. Moreover, the model exhibited faster convergence during training, contributing to overall performance enhancement. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
Keywords: Parallax estimation; parallax regression model; self-supervised learning; Pseudo-Siamese neural network; pyramid dilated convolution; binocular disparity estimation
3. Harnessing deep learning for the discovery of latent patterns in multi-omics medical data
Authors: Okechukwu Paul-Chima Ugwu, Fabian C. Ogenyi, Chinyere Nkemjika Anyanwu, Melvin Nnaemeka Ugwu, Esther Ugo Alum, Mariam Basajja, Joseph Obiezu Chukwujekwu Ezeonwumelu, Daniel Ejim Uti, Ibe Michael Usman, Chukwuebuka Gabriel Eze, Simeon Ikechukwu Egba. 《Medical Data Mining》, 2026, Issue 1, pp. 32-45.
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing numerous omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across all omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions for combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and for comprehending complicated disorders.
Keywords: deep learning; multi-omics integration; biomedical data mining; precision medicine; graph neural networks; autoencoders and transformers
4. Research on the visualization method of lithology intelligent recognition based on deep learning using mine tunnel images
Authors: Aiai Wang, Shuai Cao, Erol Yilmaz, Hui Cao. 《International Journal of Minerals, Metallurgy and Materials》, 2026, Issue 1, pp. 141-152.
An image processing and deep learning method for identifying different types of rock images was proposed. Preprocessing, such as rock image acquisition, gray scaling, Gaussian blurring, and feature dimensionality reduction, was conducted to extract useful feature information and to recognize and classify rock images using a TensorFlow-based convolutional neural network (CNN) and PyQt5. A rock image dataset was established and separated into training, validation, and test sets. The framework was subsequently compiled and trained. The categorization approach was evaluated using image data from the validation and test datasets, and key metrics, such as accuracy, precision, and recall, were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicated that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 to recognize and classify rock images has an accuracy rate of up to 98.8% and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to more efficiently and accurately recognize rock samples. Moreover, it can be matched with the intelligent support design system to effectively improve the reliability and economy of the support scheme. The system can serve as a reference for supporting the design of other mining and underground space projects.
Keywords: rock picture recognition; convolutional neural network; intelligent support for roadways; deep learning; lithology determination
5. A Survey of Cooperative Multi-agent Reinforcement Learning for Multi-task Scenarios (Cited: 1)
Authors: Jiajun CHAI, Zijie ZHAO, Yuanheng ZHU, Dongbin ZHAO. 《Artificial Intelligence Science and Engineering》, 2025, Issue 2, pp. 98-121.
Cooperative multi-agent reinforcement learning (MARL) is a key technology for enabling cooperation in complex multi-agent systems. It has achieved remarkable progress in areas such as gaming, autonomous driving, and multi-robot control. Empowering cooperative MARL with multi-task decision-making capabilities is expected to further broaden its application scope. In multi-task scenarios, cooperative MARL algorithms need to address three types of multi-task problems: reward-related multi-task problems, arising from different reward functions; multi-domain multi-task problems, caused by differences in state and action spaces and in state transition functions; and scalability-related multi-task problems, resulting from dynamic variation in the number of agents. Most existing studies focus on scalability-related multi-task problems. However, with the increasing integration between large language models (LLMs) and multi-agent systems, a growing number of LLM-based multi-agent systems have emerged, enabling more complex multi-task cooperation. This paper provides a comprehensive review of the latest advances in this field. By combining multi-task reinforcement learning with cooperative MARL, we categorize and analyze the three major types of multi-task problems under multi-agent settings, offering more fine-grained classifications and summarizing key insights for each. In addition, we summarize commonly used benchmarks and discuss future directions of research in this area, which hold promise for further enhancing the multi-task cooperation capabilities of multi-agent systems and expanding their practical applications in the real world.
Keywords: multi-task; multi-agent reinforcement learning; large language models
6. MolP-PC: a multi-view fusion and multi-task learning framework for drug ADMET property prediction (Cited: 1)
Authors: Sishu Li, Jing Fan, Haiyang He, Ruifeng Zhou, Jun Liao. 《Chinese Journal of Natural Medicines》, 2025, Issue 11, pp. 1293-1300.
The accurate prediction of drug absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties represents a crucial step in early drug development for reducing failure risk. Current deep learning approaches face challenges with data sparsity and information loss due to single-molecule representation limitations and isolated predictive tasks. This research proposes molecular properties prediction with parallel-view and collaborative learning (MolP-PC), a multi-view fusion and multi-task deep learning framework that integrates 1D molecular fingerprints (MFs), 2D molecular graphs, and 3D geometric representations, incorporating an attention-gated fusion mechanism and a multi-task adaptive learning strategy for precise ADMET property predictions. Experimental results demonstrate that MolP-PC achieves optimal performance in 27 of 54 tasks, with its multi-task learning (MTL) mechanism significantly enhancing predictive performance on small-scale datasets and surpassing single-task models in 41 of 54 tasks. Additional ablation studies and interpretability analyses confirm the significance of multi-view fusion in capturing multi-dimensional molecular information and enhancing model generalization. A case study examining the anticancer compound Oroxylin A demonstrates MolP-PC's effective generalization in predicting key pharmacokinetic parameters such as half-life (T0.5) and clearance (CL), indicating its practical utility in drug modeling. However, the model exhibits a tendency to underestimate volume of distribution (VD), indicating potential for improvement in analyzing compounds with high tissue distribution. This study presents an efficient and interpretable approach for ADMET property prediction, establishing a novel framework for molecular optimization and risk assessment in drug development.
Keywords: Molecular ADMET prediction; multi-view fusion; attention mechanism; multi-task deep learning
7. Machine Learning Enabled Reusable Adhesion, Entangled Network-Based Hydrogel for Long-Term, High-Fidelity EEG Recording and Attention Assessment (Cited: 1)
Authors: Kai Zheng, Chengcheng Zheng, Lixian Zhu, Bihai Yang, Xiaokun Jin, Su Wang, Zikai Song, Jingyu Liu, Yan Xiong, Fuze Tian, Ran Cai, Bin Hu. 《Nano-Micro Letters》, 2025, Issue 11, pp. 514-529.
Due to their high mechanical compliance and excellent biocompatibility, conductive hydrogels exhibit significant potential for applications in flexible electronics. However, as the demand for high sensitivity, superior mechanical properties, and strong adhesion performance continues to grow, many conventional fabrication methods remain complex and costly. Herein, we propose a simple and efficient strategy to construct an entangled network hydrogel through a liquid-metal-induced cross-linking reaction. The resulting hydrogel demonstrates outstanding properties, including exceptional stretchability (1643%), high tensile strength (366.54 kPa), toughness (350.2 kJ m^(−3)), and relatively low mechanical hysteresis. The hydrogel exhibits long-term stable, reusable adhesion (104 kPa), enabling conformal and stable adhesion to human skin. This capability allows it to effectively capture high-quality epidermal electrophysiological signals with a high signal-to-noise ratio (25.2 dB) and low impedance (310 ohms). Furthermore, by integrating advanced machine learning algorithms, the system achieves an attention classification accuracy of 91.38%, which will significantly impact fields like education, healthcare, and artificial intelligence.
Keywords: Entangled network; reusable adhesion; epidermal sensor; machine learning; attention assessment
8. Personalized Generative AI Services Through Federated Learning in 6G Edge Networks (Cited: 1)
Authors: Li Zeshen, Chen Zihan, Hu Xinyi, Howard H. Yang. 《China Communications》, 2025, Issue 7, pp. 1-13.
Network architectures assisted by Generative Artificial Intelligence (GAI) are envisioned as foundational elements of the sixth-generation (6G) communication system. To deliver ubiquitous intelligent services and meet diverse service requirements, the 6G network architecture should offer personalized services to various mobile devices. Federated learning (FL) with personalized local training, as a privacy-preserving machine learning (ML) approach, can be applied to address these challenges. In this paper, we propose a meta-learning-based personalized FL (PFL) method that improves both communication and computation efficiency by utilizing over-the-air computation. Its "pretraining-and-fine-tuning" principle makes it particularly suitable for enabling edge nodes to access personalized GAI services while preserving local privacy. Experimental results demonstrate the superior performance and efficacy of the proposed algorithm, and notably indicate enhanced communication efficiency without compromising accuracy.
Keywords: generative artificial intelligence; personalized federated learning; 6G networks
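The privacy argument in this abstract rests on a standard federated learning loop: each edge node trains locally on its own data and uploads only model parameters, which the server aggregates with a data-size-weighted average. Below is a minimal NumPy sketch of FedAvg-style aggregation under assumed linear models and synthetic data; the paper's meta-learning personalization and over-the-air computation are not reproduced here.

```python
import numpy as np

def local_update(weights, data_x, data_y, lr=0.1, steps=5):
    # One client's local training: a few gradient steps on linear regression.
    # Raw data never leaves the client; only the updated weights are shared.
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * data_x.T @ (data_x @ w - data_y) / len(data_y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    # Server aggregation: average client models, weighted by local dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
global_w = np.zeros((3, 1))
clients = [(rng.normal(size=(20, 3)), rng.normal(size=(20, 1))) for _ in range(4)]
for _ in range(3):  # communication rounds
    updates = [local_update(global_w, x, y) for x, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```

In a personalized variant, each client would additionally fine-tune `global_w` on its own data after the final round instead of using the aggregate as-is.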
9. Multi-QoS routing algorithm based on reinforcement learning for LEO satellite networks (Cited: 1)
Authors: ZHANG Yifan, DONG Tao, LIU Zhihui, JIN Shichao. 《Journal of Systems Engineering and Electronics》, 2025, Issue 1, pp. 37-47.
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and dynamic network topology, which have brought many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS-information-optimized routing algorithm based on reinforcement learning for LEO satellite networks, which prioritizes services with high assurance demands under limited satellite resources while considering the load-balancing performance of the satellite network for services with low assurance demands, ensuring full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
Keywords: low Earth orbit (LEO) satellite network; reinforcement learning; multi-quality of service (QoS); routing algorithm
10. Explainable AI Based Multi-Task Learning Method for Stroke Prognosis
Authors: Nan Ding, Xingyu Zeng, Jianping Wu, Liutao Zhao. 《Computers, Materials & Continua》, 2025, Issue 9, pp. 5299-5315.
Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients' health status have made significant progress, limitations remain in terms of prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on Explainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that utilizes the correlation among the tasks of predicting various health status indicators, enabling the parallel prediction of multiple health indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance the performance of multi-task AUC optimization. Additionally, the model's explainability is analyzed through the stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms on the key prognostic metrics of F1 score and efficiency.
Keywords: Explainable AI; stroke prognosis; multi-task learning; AUC optimization
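For context on the optimization target named in this abstract: AUC is the probability that a randomly drawn positive example is scored above a randomly drawn negative one, i.e., a pairwise ranking criterion. The NumPy sketch below computes that pairwise definition directly; it illustrates only the metric, not the paper's adaptive low-rank multi-task optimizer.

```python
import numpy as np

def pairwise_auc(scores, labels):
    # AUC as the fraction of (positive, negative) pairs the model ranks
    # correctly; tied scores count as half a correct pair.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    correct = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correct + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.3, 0.2])
labels = np.array([1, 0, 1, 0])
auc = pairwise_auc(scores, labels)  # 3 of 4 pairs ranked correctly -> 0.75
```

Because this quantity is a sum over example pairs rather than single examples, AUC optimization methods work with pairwise surrogate losses, which is what makes the multi-task formulation nontrivial.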
11. Joint Retrieval of PM_(2.5) Concentration and Aerosol Optical Depth over China Using Multi-Task Learning on FY-4A AGRI
Authors: Bo LI, Disong FU, Ling YANG, Xuehua FAN, Dazhi YANG, Hongrong SHI, Xiang'ao XIA. 《Advances in Atmospheric Sciences》, 2025, Issue 1, pp. 94-110.
Aerosol optical depth (AOD) and fine particulate matter with a diameter of less than or equal to 2.5 μm (PM_(2.5)) play crucial roles in air quality, human health, and climate change. However, the complex correlation of AOD–PM_(2.5) and the limitations of existing algorithms pose a significant challenge to the accurate joint retrieval of these two parameters at the same location. On this point, a multi-task learning (MTL) model, which enables the joint retrieval of PM_(2.5) concentration and AOD, is proposed and applied to the top-of-the-atmosphere reflectance data gathered by the Fengyun-4A Advanced Geosynchronous Radiation Imager (FY-4A AGRI), and compared to two single-task learning models, namely Random Forest (RF) and Deep Neural Network (DNN). Specifically, MTL achieves a coefficient of determination (R^(2)) of 0.88 and a root-mean-square error (RMSE) of 0.10 in AOD retrieval. In comparison to RF, the R^(2) increases by 0.04, the RMSE decreases by 0.02, and the percentage of retrieval results falling within the expected error range (Within-EE) rises by 5.55%. The R^(2) and RMSE of PM_(2.5) retrieval by MTL are 0.84 and 13.76 μg m^(−3), respectively. Compared with RF, the R^(2) increases by 0.06, the RMSE decreases by 4.55 μg m^(−3), and the Within-EE increases by 7.28%. Additionally, compared to DNN, MTL shows an increase of 0.01 in R^(2) and a decrease of 0.02 in RMSE in AOD retrieval, with a corresponding increase of 2.89% in Within-EE. For PM_(2.5) retrieval, MTL exhibits an increase of 0.05 in R^(2), a decrease of 1.76 μg m^(−3) in RMSE, and an increase of 6.83% in Within-EE. The evaluation suggests that MTL is able to provide simultaneously improved AOD and PM_(2.5) retrievals, demonstrating a significant advantage in efficiently capturing the spatial distribution of PM_(2.5) concentration and AOD.
Keywords: AOD; PM_(2.5); FY-4A; multi-task learning; joint retrieval
12. Skillful bias correction of offshore near-surface wind field forecasting based on a multi-task machine learning model
Authors: Qiyang Liu, Anboyu Guo, Fengxue Qiao, Xinjian Ma, Yan-An Liu, Yong Huang, Rui Wang, Chunyan Sheng. 《Atmospheric and Oceanic Science Letters》, 2025, Issue 5, pp. 28-35.
Accurate short-term forecasting of offshore wind fields is still challenging for numerical weather prediction models. Based on three years of 48-hour forecast data from the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System global model (ECMWF-IFS) over 14 offshore weather stations along the coast of Shandong Province, this study introduces a multi-task learning (MTL) model (TabNet-MTL), which significantly improves the forecast bias of near-surface wind direction and speed simultaneously. TabNet-MTL adopts a feature engineering method, utilizes mean square error as the loss function, and employs 5-fold cross validation to ensure the generalization ability of the trained model. It demonstrates superior skill in wind field correction across different forecast lead times over all stations compared to its single-task version (TabNet-STL) and three other popular single-task learning models (Random Forest, LightGBM, and XGBoost). Results show that it significantly reduces the root mean square error of the ECMWF-IFS wind speed forecast from 2.20 to 1.25 m s^(−1), and increases the forecast accuracy of wind direction from 50% to 65%. As an explainable deep learning model, TabNet-MTL identifies the weather stations and the long-term temporal statistics of near-surface wind speed as the most influential variables in constructing its feature engineering.
Keywords: Forecast bias correction; wind field; multi-task learning; feature engineering; explainable AI
13. DKP-ADS: Domain knowledge prompt combined with multi-task learning for assessment of foliar disease severity in staple crops
Authors: Yujiao Dan, Xingcai Wu, Ya Yu, Ziang Zou, R.D.S.M. Gunarathna, Peijia Yu, Yuanyuan Xiao, Qi Wang. 《The Crop Journal》, 2025, Issue 6, pp. 1939-1954.
Staple crops are the cornerstone of the food supply but are frequently threatened by plant diseases. Effective disease management, including disease identification and severity assessment, helps to better address these challenges. Currently, methods for disease severity assessment typically rely on calculating the area proportion of disease segmentation regions or on using classification networks for severity assessment. However, these methods require large amounts of labeled data and fail to quantify lesion proportions when using classification networks, leading to inaccurate evaluations. To address these issues, we propose an automated framework for disease severity assessment that combines multi-task learning and knowledge-driven large-model segmentation techniques. This framework includes an image information processor, a lesion and leaf segmentation module, and a disease severity assessment module. First, the image information processor utilizes a multi-task learning strategy to analyze input images comprehensively, ensuring a deep understanding of disease characteristics. Second, the lesion and leaf segmentation module employs prompt-driven large-model technology to accurately segment diseased areas and entire leaves, providing detailed visual analysis. Finally, the disease severity assessment module objectively evaluates the severity of the disease based on professional grading standards by calculating lesion area proportions. Additionally, we have developed a comprehensive database of diseased leaf images from major crops, including several task-specific datasets. Experimental results demonstrate that our framework can accurately identify and assess the types and severity of crop diseases, even without extensive labeled data. Code and data are available at http://dkp-ads.samlab.cn/.
Keywords: Domain knowledge; prompt-driven; multi-task learning; staple crop; assessment of disease severity
14. MAMGBR: Group-Buying Recommendation Model Based on Multi-Head Attention Mechanism and Multi-Task Learning
Authors: Zongzhe Xu, Ming Yu. 《Computers, Materials & Continua》, 2025, Issue 8, pp. 2805-2826.
As the group-buying model shows significant progress in attracting new users, enhancing user engagement, and increasing platform profitability, providing personalized recommendations for group-buying users has emerged as a new challenge in the field of recommendation systems. This paper introduces a group-buying recommendation model based on multi-head attention mechanisms and multi-task learning, termed the Multi-head Attention Mechanisms and Multi-task Learning Group-Buying Recommendation (MAMGBR) model, specifically designed to optimize group-buying recommendations on e-commerce platforms. The core dataset of this study comes from the Chinese maternal and infant e-commerce platform "Beibei," encompassing approximately 430,000 successful group-buying actions and over 120,000 users. The model focuses on two main tasks: recommending items for group organizers (Task I) and recommending participants for a given group-buying event (Task II). In model evaluation, MAMGBR achieves an MRR@10 of 0.7696 for Task I, marking a 20.23% improvement over baseline models. Furthermore, in Task II, where complex interaction patterns prevail, MAMGBR utilizes auxiliary loss functions to effectively model the multifaceted roles of users, items, and participants, leading to a 24.08% increase in MRR@100 under a 1:99 sample ratio. Experimental results show that, compared to benchmark models such as NGCF and EATNN, MAMGBR's integration of multi-head attention mechanisms, expert networks, and gating mechanisms enables more accurate modeling of user preferences and social associations within group-buying scenarios, significantly enhancing recommendation accuracy and platform group-buying success rates.
Keywords: Group-buying recommendation; multi-head attention mechanism; multi-task learning
15. Secure Malicious Node Detection in Decentralized Healthcare Networks Using Cloud and Edge Computing with Blockchain-Enabled Federated Learning
Authors: Raj Sonani, Reham Alhejaili, Pushpalika Chatterjee, Khalid Hamad Alnafisah, Jehad Ali. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 9, pp. 3169-3189.
Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as secure-communication issues, privacy concerns, and the presence of malicious nodes. Existing machine learning- and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND). It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
Keywords: Authentication; blockchain; deep learning; federated learning; healthcare network; machine learning; wearable sensor nodes
16. Nuclear mass based on the multi-task learning neural network method (Cited: 11)
Authors: Xing-Chen Ming, Hong-Fei Zhang, Rui-Rui Xu, Xiao-Dong Sun, Yuan Tian, Zhi-Gang Ge. 《Nuclear Science and Techniques》 (SCIE, EI, CAS, CSCD), 2022, Issue 4, pp. 96-103.
The global nuclear mass based on the macroscopic-microscopic model was studied by applying a newly designed multi-task learning artificial neural network (MTL-ANN). First, the reported nuclear binding energies of 2095 nuclei (Z ≥ 8, N ≥ 8) released in the latest Atomic Mass Evaluation, AME2020, and the deviations between the fitting results of the liquid drop model (LDM) and the AME2020 data for each nucleus were obtained. To compensate for the deviations and investigate the physics possibly ignored in the LDM, the MTL-ANN method was introduced into the model. Compared to the single-task learning (STL) method, this new network has a powerful ability to simultaneously learn multiple nuclear properties, such as the binding energies and the single neutron and proton separation energies. Moreover, it is highly effective in reducing the risk of overfitting and achieving better predictions. Consequently, good predictions can be obtained using this nuclear mass model for the training, validation, and testing datasets. In detail, the global root mean square (RMS) deviation of the binding energy is effectively reduced from approximately 2.4 MeV for the LDM to the current 0.2 MeV, and the RMS of S_(n) and S_(p) can also reach approximately 0.2 MeV. Moreover, compared to STL, a 3-9% improvement in the binding energy and a 20-30% improvement in S_(n) and S_(p) can be achieved for the training and validation sets; for the testing sets, the reduction in deviations can even reach 30-40%, which significantly illustrates the advantage of the current MTL approach.
Keywords: Macroscopic-microscopic model; Binding energy; Neural network; Multi-task learning
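The hard parameter sharing described in the abstract, where one network body feeds several task heads (binding energy, S_n, S_p), can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' MTL-ANN: the features, targets, layer sizes, and task weights are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features for a few nuclei: (Z, N) pairs, standardized (synthetic).
X = rng.normal(size=(8, 2))
# Targets: residuals between data and a liquid-drop fit, one per task
# (binding energy B, separation energies Sn and Sp); values are synthetic.
Y = {t: rng.normal(size=(8, 1)) for t in ("B", "Sn", "Sp")}

# Hard parameter sharing: one shared trunk, one small head per task.
W_shared = rng.normal(scale=0.1, size=(2, 16))
heads = {t: rng.normal(scale=0.1, size=(16, 1)) for t in Y}

def forward(X, task):
    h = np.tanh(X @ W_shared)   # shared representation, learned by all tasks
    return h @ heads[task]      # task-specific linear head

def mtl_loss(X, Y):
    # Weighted sum of per-task MSE losses; the weights are illustrative.
    weights = {"B": 1.0, "Sn": 0.5, "Sp": 0.5}
    return sum(w * np.mean((forward(X, t) - Y[t]) ** 2)
               for t, w in weights.items())

loss = mtl_loss(X, Y)
```

Because the trunk is shared, gradient signal from every task shapes the same representation, which is the mechanism the abstract credits for reduced overfitting relative to STL.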
17. AS-SOMTF: A novel multi-task learning model for water level prediction by satellite remoting
Authors: Xin Su, Zijian Qin, Weikang Feng, Ziyang Gong, Christian Esposito, Sokjoon Lee. Digital Communications and Networks, 2025, No. 5, pp. 1554-1566 (13 pages).
Satellite communication technology has emerged as a key solution to the challenges of data transmission in remote areas. By overcoming the limitations of traditional terrestrial communication networks, it enables long-distance data transmission anytime and anywhere, ensuring the timely and accurate delivery of water level data, which is particularly crucial for fishway water level monitoring. To enhance the effectiveness of fishway water level monitoring, this study proposes a multi-task learning model, AS-SOMTF, designed for real-time and comprehensive prediction. The model integrates auxiliary sequences with primary input sequences to capture complex relationships and dependencies, thereby improving representational capacity. In addition, a novel time-series embedding algorithm, AS-SOM, is introduced, which combines generative inference and pooling operations to optimize prediction efficiency for long sequences. This innovation not only ensures the timely transmission of water level data but also enhances the accuracy of real-time monitoring. Compared with traditional models such as the Transformer and Long Short-Term Memory (LSTM) networks, the proposed model improves prediction accuracy by 3.8% and 1.4%, respectively. These advancements provide more precise technical support for water level forecasting and resource management in the Diqing Tibetan Autonomous Prefecture on the Lancang River, contributing to ecosystem protection and improved operational safety.
Keywords: Fish passages; Water-level prediction; Time series forecasting; Multi-task learning; Hierarchical clustering; Satellite communication
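The idea of integrating auxiliary sequences with the primary input can be illustrated with a simple windowing step that stacks both histories into one model input. The series names (`water_level`, `rainfall`), the lookback length, and the stacking layout are assumptions for illustration; the paper's AS-SOM embedding with generative inference and pooling is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Primary sequence: water level readings; an auxiliary sequence could be
# rainfall or upstream flow (all names and data here are hypothetical).
water_level = rng.normal(loc=2.0, scale=0.1, size=96)
rainfall = rng.exponential(scale=1.0, size=96)

def make_windows(primary, aux, lookback=24):
    """Stack primary and auxiliary history into one input window so a
    forecaster can learn cross-series dependencies."""
    X, y = [], []
    for t in range(lookback, len(primary)):
        X.append(np.stack([primary[t - lookback:t],
                           aux[t - lookback:t]], axis=-1))
        y.append(primary[t])  # next primary value is the target
    return np.array(X), np.array(y)

X, y = make_windows(water_level, rainfall)
# Each sample: 24 past steps x 2 series (primary + auxiliary).
```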
18. When Communication Networks Meet Federated Learning for Intelligence Interconnecting: A Comprehensive Survey and Future Perspective
Authors: Sha Zongxuan, Huo Ru, Sun Chuang, Wang Shuo, Huang Tao, F. Richard Yu. China Communications, 2025, No. 7, pp. 74-94 (21 pages).
With the rapid development of network technologies, the large number of deployed edge devices and information systems generate massive amounts of data, which provide good support for the advancement of data-driven intelligent models. However, these data often contain sensitive user information. Federated learning (FL), as a privacy-preserving machine learning setting, allows users to obtain a well-trained model without sending privacy-sensitive local data to a central server. Despite the promising prospects of FL, several significant research challenges need to be addressed before widespread deployment, including network resource allocation, model security, and model convergence. In this paper, we first provide a brief survey of existing work on FL and discuss the motivations for Communication Networks (CNs) and FL to mutually enable each other. We analyze how network technologies support FL, which requires frequent communication and emphasizes security, as well as studies that apply FL-based methods to add intelligence to many network scenarios and to improve network performance and security. Finally, some challenges and broader perspectives are explored.
Keywords: communication networks; federated learning; intelligence interconnecting; machine learning; privacy preservation
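The basic FL setting this survey builds on, where clients share model updates rather than raw data, can be sketched with FedAvg on a toy linear-regression problem. The data, learning rate, and round count here are all illustrative, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_step(w, X, y, lr=0.1):
    # One gradient step of linear regression on a client's private data.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(clients, w, rounds=50):
    # FedAvg: clients train locally; the server only ever sees model
    # updates, never the raw (privacy-sensitive) local data.
    for _ in range(rounds):
        updates = [local_step(w, X, y) for X, y in clients]
        sizes = [len(y) for _, y in clients]
        w = np.average(updates, axis=0, weights=sizes)  # size-weighted mean
    return w

# Three clients, each holding a private shard of the same underlying task.
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(32, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=32)))

w = fed_avg(clients, np.zeros(2))  # converges toward true_w
```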
19. An Ensembled Multi-Layer Automatic-Constructed Weighted Online Broad Learning System for Fault Detection in Cellular Networks
Authors: Wang Qi, Pan Zhiwen, Liu Nan, You Xiaohu. China Communications, 2025, No. 8, pp. 150-167 (18 pages).
6G is expected to support more intelligent networks, a trend that places importance on self-healing capability when degradation emerges in cellular networks. As a primary component of self-healing networks, fault detection is investigated in this paper. Considering its fast response and low time and computational consumption, the Online Broad Learning System (OBLS) is applied for the first time to identify outages in cellular networks. In addition, the Automatic-constructed Online Broad Learning System (AOBLS) is put forward to rationalize its structure and consequently avoid over-fitting and under-fitting. Furthermore, a multi-layer classification structure is proposed to further improve classification performance. To face the challenges caused by imbalanced data in fault detection problems, a novel weighting strategy is derived to obtain the Multi-layer Automatic-constructed Weighted Online Broad Learning System (MAWOBLS), which is combined with ensemble learning using a retrained Support Vector Machine (SVM) and denoted EMAWOBLS, for superior treatment of this imbalance issue. Simulation results show that the proposed algorithm detects faults with excellent performance and satisfactory time usage.
Keywords: broad learning system (BLS); cell outage detection; cellular network fault detection; ensemble learning; imbalanced classification; online broad learning system (OBLS); self-healing network; weighted broad learning system (WBLS)
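A broad learning system trains its output weights in closed form over random feature and enhancement nodes instead of by backpropagation, which is what gives it the fast response the abstract highlights. Below is a minimal sketch of that core idea on a toy two-class outage-detection task; the node counts, regularizer, and data are invented, and the paper's automatic-construction, weighting, online-update, and ensembling mechanisms are not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)

def bls_fit(X, Y, n_feat=20, n_enh=40, reg=1e-2):
    """Minimal broad-learning-style classifier: random feature nodes,
    random enhancement nodes, output weights solved by ridge regression
    (no backprop). A sketch of the idea, not the paper's (A/O)BLS variants."""
    Wf = rng.normal(size=(X.shape[1], n_feat))
    Z = X @ Wf                              # feature nodes
    We = rng.normal(size=(n_feat, n_enh))
    H = np.tanh(Z @ We)                     # enhancement nodes
    A = np.hstack([Z, H])
    # Ridge solution: Wo = (A^T A + reg*I)^(-1) A^T Y
    Wo = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, Wo

def bls_predict(X, Wf, We, Wo):
    Z = X @ Wf
    return np.hstack([Z, np.tanh(Z @ We)]) @ Wo

# Toy "normal" (-1) vs "outage" (+1) KPI vectors, well separated.
Xall = np.vstack([rng.normal(-1.0, 0.5, (50, 4)),
                  rng.normal(1.0, 0.5, (50, 4))])
y = np.r_[-np.ones(50), np.ones(50)]
Wf, We, Wo = bls_fit(Xall, y)
acc = np.mean(np.sign(bls_predict(Xall, Wf, We, Wo)) == y)
```

Because training reduces to one linear solve, adding nodes or new samples can be handled incrementally, which is the property online BLS variants exploit.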
20. APFed: Adaptive personalized federated learning for intrusion detection in maritime meteorological sensor networks
Authors: Xin Su, Guifu Zhang. Digital Communications and Networks, 2025, No. 2, pp. 401-411 (11 pages).
With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks (MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence (AI) intrusion detection techniques turn intrusion detection into a classification problem, where AI excels. These techniques assume sufficient high-quality instances for model construction, an assumption often unmet in real-world operation, where attack instances are limited and their characteristics constantly evolve. This paper proposes an Adaptive Personalized Federated learning (APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed (Non-IID) data are mitigated, enabling the intrusion detection model to possess personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks with effective adaptation to the MMSN environment. Finally, extensive experiments on a classical network dataset show that the attack classification accuracy is improved by about 5% over most baselines in global scenarios.
Keywords: Intrusion detection; Maritime meteorological sensor network; Federated learning; Personalized model; Deep learning
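One common way to obtain a personalized model under a shared global model is to blend the two per client. The sketch below shows that blending idea only; the rule and the per-client `alpha` are illustrative assumptions, not APFed's actual adaptive update.

```python
import numpy as np

def personalized_update(w_local, w_global, alpha):
    """Blend the shared global model with a client's local model.
    alpha (per client) controls how much global knowledge is absorbed:
    clients whose data resemble the global distribution can use a larger
    alpha, while clients with highly Non-IID data keep more of w_local."""
    return alpha * w_global + (1.0 - alpha) * w_local

# Toy 2-parameter models for one client.
w_global = np.array([1.0, 1.0])
w_local = np.array([0.0, 2.0])
w_new = personalized_update(w_local, w_global, alpha=0.25)
# -> [0.25, 1.75]: mostly local, with a pull toward the global model.
```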