Journal Literature
13,254 articles found
A pipelining task offloading strategy via delay-aware multi-agent reinforcement learning in Cybertwin-enabled 6G network
1
Authors: Haiwen Niu, Luhan Wang, Keliang Du, Zhaoming Lu, Xiangming Wen, Yu Liu. Digital Communications and Networks, 2025, No. 1, pp. 92-105 (14 pages).
Cybertwin-enabled 6th Generation (6G) networks are envisioned to support artificial intelligence-native management to meet the changing demands of 6G applications. Multi-Agent Deep Reinforcement Learning (MADRL) technologies driven by Cybertwins have been proposed for adaptive task offloading strategies. However, related works do not consider the random transmission delay between Cybertwin-driven agents and the underlying networks, which breaks the standard Markov property and increases the decision reaction time, degrading task offloading performance. To address this problem, we propose a pipelining task offloading method to lower the decision reaction time and model it as a delay-aware Markov Decision Process (MDP). We then design a delay-aware MADRL algorithm to minimize the weighted sum of task execution latency and energy consumption. Firstly, the state space is augmented using the most recently received state and historical actions to rebuild the Markov property. Secondly, Gate Transformer-XL is introduced to capture the importance of historical actions and to maintain a consistent input dimension despite the dynamic changes caused by random transmission delays. Thirdly, a sampling method and a new loss function, built on the difference between the current and target state values and the difference between the real and augmented state-action values, are designed to obtain state transition trajectories close to the real ones. Numerical results demonstrate that the proposed methods are effective in reducing reaction time and improving task offloading performance in random-delay Cybertwin-enabled 6G networks.
Keywords: Cybertwin; Multi-Agent Deep Reinforcement Learning (MADRL); task offloading; pipelining; delay-aware
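The state-augmentation step described above (combining the most recently received state with a fixed-length window of historical actions) can be illustrated with a small sketch. This is a hypothetical Python example, not the paper's implementation: the dimensions, the history length, and the class name are placeholders.

```python
# Minimal sketch: act on the last state actually received plus a window of recent
# actions, so the network input dimension stays constant despite random delays.
from collections import deque
import numpy as np

class DelayAwareObservation:
    def __init__(self, state_dim: int, action_dim: int, history_len: int = 4):
        # Fixed-length action history keeps the augmented input size constant
        # even though observation staleness varies with the transmission delay.
        self.actions = deque([np.zeros(action_dim)] * history_len, maxlen=history_len)
        self.last_state = np.zeros(state_dim)

    def on_state_received(self, state):
        """Called whenever a (possibly delayed) state report arrives."""
        self.last_state = np.asarray(state, dtype=np.float32)

    def on_action_taken(self, action_one_hot):
        """Record the action just issued (it may not have taken effect yet)."""
        self.actions.append(np.asarray(action_one_hot, dtype=np.float32))

    def augmented(self):
        """Concatenate the last-received state with the action-history window."""
        return np.concatenate([self.last_state, *self.actions])

# Example: 10-dim state, 3 discrete offloading actions, window of 4 past actions
obs = DelayAwareObservation(state_dim=10, action_dim=3)
obs.on_state_received(np.random.rand(10))
obs.on_action_taken(np.eye(3)[1])
print(obs.augmented().shape)  # (10 + 4*3,) = (22,)
```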
Terminal Multitask Parallel Offloading Algorithm Based on Deep Reinforcement Learning
2
Authors: Zhang Lincong, Li Yang, Zhao Weinan, Liu Xiangyu, Guo Lei. China Communications, 2025, No. 7, pp. 30-43 (14 pages).
The advent of the internet-of-everything era has led to the increased use of mobile edge computing. The rise of artificial intelligence has provided many possibilities for the low-latency task-offloading demands of users, but existing technologies rigidly assume that there is only one task to be offloaded in each time slot at the terminal. In practical scenarios, there are often numerous computing tasks to be executed at the terminal, leading to a cumulative delay for subsequent task offloading. Therefore, the efficient processing of multiple computing tasks on the terminal has become highly challenging. To address the low-latency offloading requirements for multiple computational tasks on terminal devices, we propose a terminal multitask parallel offloading algorithm based on deep reinforcement learning. Specifically, we first establish a mobile edge computing system model consisting of a single edge server and multiple terminal users. We then model the task offloading decision problem as a Markov decision process and solve it using the Dueling Deep Q-Network algorithm to obtain the optimal offloading strategy. Experimental results demonstrate that, under the same constraints, our proposed algorithm reduces the average system latency.
Keywords: deep reinforcement learning; mobile edge computing; multitask parallel offloading; task offloading
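For reference, the Dueling Deep Q-Network architecture named above separates a state-value stream from an advantage stream and recombines them. The following PyTorch sketch is a minimal, generic version of that architecture; layer sizes and the state/action dimensions are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)        # state value V(s)
        a = self.advantage(h)    # per-action advantage A(s, a)
        # Mean-centre the advantages so V and A remain identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

# Example: greedy offloading action for a batch of 4 states
net = DuelingDQN(state_dim=16, n_actions=5)
q_values = net(torch.randn(4, 16))
print(q_values.argmax(dim=-1))
```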
Multi-station multi-robot task assignment method based on deep reinforcement learning
3
Authors: Junnan Zhang, Ke Wang, Chaoxu Mu. CAAI Transactions on Intelligence Technology, 2025, No. 1, pp. 134-146 (13 pages).
This paper focuses on the problem of multi-station multi-robot spot welding task assignment and proposes a deep reinforcement learning (DRL) framework made up of a shared graph attention network and independent policy networks. The graph of welding spot distribution is encoded using the graph attention network. Independent policy networks with an attention mechanism as the decoder handle the encoded graph and decide how to assign robots to different tasks. The policy network converts the large-scale welding spot allocation problem into multiple small-scale single-robot welding path planning problems, and the path planning problem is quickly solved through existing methods. The model is then trained through reinforcement learning. In addition, a task balancing method is used to allocate tasks to multiple stations. The proposed algorithm is compared with classical algorithms, and the results show that the DRL-based algorithm produces higher-quality solutions.
Keywords: attention mechanism; deep reinforcement learning; graph neural network; industrial robot; task allocation
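The graph-attention encoding step mentioned above can be pictured as a single-head graph attention layer over a dense adjacency matrix. This is a simplified, hypothetical PyTorch example (the paper's encoder is more elaborate); the feature dimensions and the fully connected toy graph are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over a dense adjacency matrix."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features (e.g., welding-spot coordinates); adj: (N, N) 0/1
        h = self.W(x)                                         # (N, out_dim)
        N = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                           h.unsqueeze(0).expand(N, N, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)   # (N, N) attention logits
        scores = scores.masked_fill(adj == 0, float("-inf"))  # attend only to neighbours
        alpha = torch.softmax(scores, dim=-1)
        return torch.relu(alpha @ h)                          # aggregated node embeddings

# Example: 5 welding spots with 2-D coordinates, fully connected (with self-loops)
x = torch.rand(5, 2)
adj = torch.ones(5, 5)
layer = GraphAttentionLayer(in_dim=2, out_dim=16)
print(layer(x, adj).shape)  # torch.Size([5, 16])
```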
Reinforcement learning-enabled swarm intelligence method for computation task offloading in Internet-of-Things blockchain
4
Authors: Zhuo Chen, Jiahuan Yi, Yang Zhou, Wei Luo. Digital Communications and Networks, 2025, No. 3, pp. 912-924 (13 pages).
Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution for addressing data security risks and providing privacy protection in the Internet-of-Things (IoT), thanks to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it mostly needs to perform the mathematical calculations necessary to prevent malicious attacks, which imposes stricter requirements on the computation resources of participating devices. Offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud, while providing data privacy protection for IoT applications, can effectively address the limited computation and energy resources of IoT devices. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Leveraging the excellent self-learning ability of Reinforcement Learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions. The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms (e.g., Proof-of-Work), and optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud so as to maximize device revenue. Experimental results show that RLSIOA obtains higher-quality offloading decisions at lower latency cost than representative benchmark algorithms.
Keywords: blockchain; task offloading; swarm intelligence; reinforcement learning
Pathfinder: Deep Reinforcement Learning-Based Scheduling for Multi-Robot Systems in Smart Factories with Mass Customization
5
Authors: Chenxi Lyu, Chen Dong, Qiancheng Xiong, Yuzhong Chen, Qian Weng, Zhenyi Chen. Computers, Materials & Continua, 2025, No. 8, pp. 3371-3391 (21 pages).
The rapid advancement of Industry 4.0 has revolutionized manufacturing, shifting production from centralized control to decentralized, intelligent systems. Smart factories are now expected to achieve high adaptability and resource efficiency, particularly in mass customization scenarios where production schedules must accommodate dynamic and personalized demands. To address the challenges of dynamic task allocation, uncertainty, and real-time decision-making, this paper proposes Pathfinder, a deep reinforcement learning-based scheduling framework. Pathfinder models scheduling data through three key matrices: execution time (the time required for a job to complete), completion time (the actual time at which a job is finished), and efficiency (the performance of executing a single job). By leveraging neural networks, Pathfinder extracts essential features from these matrices, enabling intelligent decision-making in dynamic production environments. Unlike traditional approaches with fixed scheduling rules, Pathfinder dynamically selects from ten diverse scheduling rules, optimizing decisions based on real-time environmental conditions. To further enhance scheduling efficiency, a specialized reward function is designed to support dynamic task allocation and real-time adjustments. This function helps Pathfinder continuously refine its scheduling strategy, improving machine utilization and minimizing job completion times. Through reinforcement learning, Pathfinder adapts to evolving production demands, ensuring robust performance in real-world applications. Experimental results demonstrate that Pathfinder outperforms traditional scheduling approaches, offering improved coordination and efficiency in smart factories. By integrating deep reinforcement learning, adaptable scheduling strategies, and an innovative reward function, Pathfinder provides an effective solution to the growing challenges of multi-robot job scheduling in mass customization environments.
Keywords: smart factory; customization; deep reinforcement learning; production scheduling; multi-robot system; task allocation
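The core idea of selecting among dispatching rules rather than fixing one can be sketched with a tiny bandit-style selector. The rules, the epsilon-greedy update, and the job fields below are illustrative stand-ins; Pathfinder itself learns a deep RL policy over ten rules and a richer state.

```python
import random

# A handful of classic dispatching rules, each picking the next job from a pool
RULES = {
    "SPT":  lambda jobs: min(jobs, key=lambda j: j["exec_time"]),  # shortest processing time
    "LPT":  lambda jobs: max(jobs, key=lambda j: j["exec_time"]),  # longest processing time
    "FIFO": lambda jobs: min(jobs, key=lambda j: j["arrival"]),    # first in, first out
    "EDD":  lambda jobs: min(jobs, key=lambda j: j["due"]),        # earliest due date
}

class RuleSelector:
    def __init__(self, rules, epsilon=0.1, lr=0.1):
        self.q = {name: 0.0 for name in rules}   # estimated payoff of each rule
        self.rules, self.epsilon, self.lr = rules, epsilon, lr

    def pick(self, jobs):
        name = (random.choice(list(self.rules)) if random.random() < self.epsilon
                else max(self.q, key=self.q.get))
        return name, self.rules[name](jobs)

    def update(self, name, reward):
        # Running average of the reward observed after applying this rule
        self.q[name] += self.lr * (reward - self.q[name])

jobs = [{"exec_time": 3, "arrival": 0, "due": 9},
        {"exec_time": 1, "arrival": 2, "due": 4}]
selector = RuleSelector(RULES)
rule, job = selector.pick(jobs)
selector.update(rule, reward=-job["exec_time"])  # e.g., reward = negative completion time
```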
Leveraging Machine Learning to Predict Hospital Porter Task Completion Time
6
Authors: You-Jyun Yeh, Edward T.-H. Chu, Chia-Rong Lee, Jiun Hsu, Hui-Mei Wu. Computers, Materials & Continua, 2025, No. 11, pp. 3369-3391 (23 pages).
Porters play a crucial role in hospitals because they ensure the efficient transportation of patients, medical equipment, and vital documents. Despite its importance, there is a lack of research addressing the prediction of completion times for porter tasks. To address this gap, we utilized real-world porter delivery data from the Yunlin Branch of Taiwan University Hospital, Taiwan Region of China. We first identified key features that can influence the duration of porter tasks. We then employed three widely used machine learning algorithms: decision tree, random forest, and gradient boosting. To leverage the strengths of each algorithm, we finally adopted an ensemble modeling approach that aggregates their individual predictions. Our experimental results show that the proposed ensemble model can achieve a mean absolute error of 3 minutes in predicting task response time and 4.42 minutes in predicting task completion time. The prediction error is around 50% lower than when using only the historical average. These results demonstrate that our method significantly improves the accuracy of porter task time prediction, supporting better resource planning and patient care. It helps ward staff streamline workflows by reducing delays, enables porter managers to allocate resources more effectively, and shortens patient waiting times, contributing to a better care experience.
Keywords: machine learning; hospital porter; task completion time; predictive models; healthcare
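The averaging-ensemble approach described above (decision tree, random forest, and gradient boosting with aggregated predictions) might look roughly like the following scikit-learn sketch. The features and data are synthetic placeholders, not the hospital dataset used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                   # e.g., origin, destination, time of day, task type...
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 500)     # synthetic completion time in minutes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [DecisionTreeRegressor(max_depth=6, random_state=0),
          RandomForestRegressor(n_estimators=200, random_state=0),
          GradientBoostingRegressor(random_state=0)]
for m in models:
    m.fit(X_tr, y_tr)

# Simple unweighted average of the three base predictions
y_pred = np.mean([m.predict(X_te) for m in models], axis=0)
print(f"ensemble MAE: {mean_absolute_error(y_te, y_pred):.2f} minutes")
```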
Strengthening human papillomavirus vaccination programs through multi-country peer learning: lessons from the CHIC initiative
7
Authors: Christopher Morgan, Mary Carol Jennings, Dur-e-Nayab Waheed, Nicolas Theopold, Anissa Sidibe, Ana Bolio, Elaine Charurat, Felix Ricardo Burdier, Emilie Karafillakis, Shana Kagan, Alex Vorsters. Cancer Biology & Medicine, 2025, No. 9, pp. 997-1001 (5 pages).
Introduction: Human papillomavirus (HPV) vaccination is a cornerstone of cervical cancer prevention, particularly in low- and middle-income countries (LMICs), where the burden of disease remains high [1]. The World Health Organization (WHO) HPV Vaccine Introduction Clearing House reported that 147 countries (of 194 reporting) had fully introduced the HPV vaccine into their national schedules as of 2024 [2]. After COVID-19 pandemic disruptions, global coverage is again increasing.
Keywords: WHO HPV Vaccine Introduction Clearing House; multi-country peer learning; cervical cancer prevention; CHIC initiative; global coverage; human papillomavirus (HPV) vaccination; low- and middle-income countries
Multi-Robot Task Allocation Using Multimodal Multi-Objective Evolutionary Algorithm Based on Deep Reinforcement Learning (Cited by: 4)
8
Authors: 苗镇华, 黄文焘, 张依恋, 范勤勤. Journal of Shanghai Jiaotong University (Science), EI, 2024, No. 3, pp. 377-387 (11 pages).
The overall performance of multi-robot collaborative systems is significantly affected by multi-robot task allocation. To improve the effectiveness, robustness, and safety of multi-robot collaborative systems, a multimodal multi-objective evolutionary algorithm based on deep reinforcement learning is proposed in this paper. The improved multimodal multi-objective evolutionary algorithm is used to solve multi-robot task allocation problems. Moreover, a deep reinforcement learning strategy is used in the last generation to provide a high-quality path for each assigned robot in an end-to-end manner. Comparisons with three popular multimodal multi-objective evolutionary algorithms on three different multi-robot task allocation scenarios are carried out to verify the performance of the proposed algorithm. The experimental results show that the proposed algorithm can generate sufficient equivalent schemes to improve the availability and robustness of multi-robot collaborative systems in uncertain environments, and also produce the best scheme to improve the overall task execution efficiency of multi-robot collaborative systems.
Keywords: multi-robot task allocation; multi-robot cooperation; path planning; multimodal multi-objective evolutionary algorithm; deep reinforcement learning
Policy Network-Based Dual-Agent Deep Reinforcement Learning for Multi-Resource Task Offloading in Multi-Access Edge Cloud Networks (Cited by: 1)
9
Authors: Feng Chuan, Zhang Xu, Han Pengchao, Ma Tianchun, Gong Xiaoxue. China Communications, SCIE/CSCD, 2024, No. 4, pp. 53-73 (21 pages).
Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage capabilities closer to end-users and connected devices, MEC networks can support a wide range of applications. MEC networks can also leverage various types of resources, including computation resources, network resources, radio resources, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. Offloading multi-resource task requests to the edge cloud so as to maximize benefits is a challenging problem due to the heterogeneity of the resources provided by devices. To address this issue, we mathematically model task requests with multiple subtasks. The problem of offloading multi-resource task requests is then proved to be NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), based on the policy network, to optimize the benefits generated by offloading multi-resource task requests in MEC networks. Finally, simulation results show that the proposed algorithm can effectively improve the benefit of task offloading with higher resource utilization compared with baseline algorithms.
Keywords: benefit maximization; deep reinforcement learning; multi-access edge cloud; task offloading
AI-Powered Threat Detection in Online Communities: A Multi-Modal Deep Learning Approach
10
Author: Ravi Teja Potla. Journal of Computer and Communications, 2025, No. 2, pp. 155-171 (17 pages).
The rapid growth of online communities has brought about an increase in cyber threats, including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To cope with these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was evaluated using standard machine learning metrics, including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
Keywords: multi-modal AI; deep learning; Natural Language Processing (NLP); Explainable AI (XAI); federated learning; cyber threat detection; LSTM; CNNs
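A minimal way to picture the multi-modal fusion described above is a late-fusion classifier that projects per-modality embeddings and classifies their concatenation. In this hypothetical PyTorch sketch, simple MLPs stand in for the BERT, ResNet50, and LSTM/3D-CNN branches, and the embedding sizes are assumptions.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, img_dim=2048, vid_dim=512, n_classes=2):
        super().__init__()
        # Stand-in projections for precomputed text / image / video embeddings
        self.text_proj = nn.Sequential(nn.Linear(text_dim, 256), nn.ReLU())
        self.img_proj = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
        self.vid_proj = nn.Sequential(nn.Linear(vid_dim, 256), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(3 * 256, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, text_emb, img_emb, vid_emb):
        # Concatenate the per-modality projections, then classify jointly
        fused = torch.cat([self.text_proj(text_emb),
                           self.img_proj(img_emb),
                           self.vid_proj(vid_emb)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(8, 768), torch.randn(8, 2048), torch.randn(8, 512))
print(logits.shape)  # (8, 2) -- e.g., harmful vs. benign
```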
Task Offloading and Resource Allocation in NOMA-VEC: A Multi-Agent Deep Graph Reinforcement Learning Algorithm
11
Authors: Hu Yonghui, Jin Zuodong, Qi Peng, Tao Dan. China Communications, SCIE/CSCD, 2024, No. 8, pp. 79-88 (10 pages).
Vehicular edge computing (VEC) is emerging as a promising paradigm to meet the requirements of compute-intensive applications in the internet of vehicles (IoV). Non-orthogonal multiple access (NOMA) has advantages in improving spectrum efficiency and dealing with bandwidth scarcity and cost, so combining VEC and NOMA is an encouraging direction. In this paper, we jointly optimize the task offloading decision and resource allocation to maximize the service utility of the NOMA-VEC system. To solve the optimization problem, we propose a multi-agent deep graph reinforcement learning algorithm. The algorithm extracts the topological features and relationship information between agents from the system state as observations, and outputs the task offloading decision and resource allocation simultaneously with a local policy network, which is updated by a local learner. Simulation results demonstrate that the proposed method achieves a 1.52%∼5.80% improvement over the benchmark algorithms in system service utility.
Keywords: edge computing; graph convolutional network; reinforcement learning; task offloading
A Multi-Task Deep Learning Framework for Simultaneous Detection of Thoracic Pathology through Image Classification
12
Authors: Nada Al Zahrani, Ramdane Hedjar, Mohamed Mekhtiche, Mohamed Bencherif, Taha Al Fakih, Fattoh Al-Qershi, Muna Alrazghan. Journal of Computer and Communications, 2024, No. 4, pp. 153-170 (18 pages).
Thoracic diseases pose significant risks to an individual's chest health and are among the most perilous medical conditions. They can impact one or both lungs, leading to a severe impairment of a person's ability to breathe normally. Notable examples of such diseases include pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during the diagnostic process. Traditionally, the primary detection methods involve X-ray imaging or computed tomography (CT) scans. Nevertheless, due to the scarcity of proficient radiologists and the inherent similarities between these diseases, detection accuracy can be compromised, leading to imprecise or erroneous results. To address this challenge, scientists have turned to computer-based solutions, aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, utilizing single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. Consequently, this research adopts a multi-task deep learning approach that leverages CNNs to achieve improved classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. To assess the effectiveness of the trained model, it is rigorously validated using three different real-world datasets for training and testing.
Keywords: pneumonia; thoracic pathology; COVID-19; deep learning; multi-task learning
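The hard-parameter-sharing variant mentioned above (one shared trunk, one head per task) can be sketched as follows in PyTorch. The architecture, input size, and the summed two-task loss are illustrative assumptions rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class MultiTaskChestCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(                       # shared trunk (hard parameter sharing)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten())
        self.pneumonia_head = nn.Linear(32, 2)             # task 1: pneumonia vs. normal
        self.covid_head = nn.Linear(32, 2)                 # task 2: COVID-19 vs. normal

    def forward(self, x):
        h = self.shared(x)
        return self.pneumonia_head(h), self.covid_head(h)

model = MultiTaskChestCNN()
x = torch.randn(4, 1, 224, 224)                            # batch of grayscale chest X-rays
out_pneumonia, out_covid = model(x)
y_pneumonia, y_covid = torch.randint(0, 2, (4,)), torch.randint(0, 2, (4,))
# Joint loss: both tasks are learned simultaneously through the shared trunk
loss = nn.functional.cross_entropy(out_pneumonia, y_pneumonia) + \
       nn.functional.cross_entropy(out_covid, y_covid)
loss.backward()
```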
A Green Mobile Edge Computing Task Offloading Strategy Based on MDP and Q-learning
13
Authors: 赵宏伟, 吕盛凱, 庞芷茜, 马子涵, 李雨. 河南理工大学学报(自然科学版) (Journal of Henan Polytechnic University, Natural Science), PKU Core, 2025, No. 5, pp. 9-16 (8 pages).
Objective: To move toward carbon neutrality in manufacturing-oriented industrial internet enterprises such as automobile and air-conditioner makers, edge-computing task offloading is applied to the offloading of production-equipment tasks, reducing the central load on servers as well as the energy consumption and carbon emissions of data centers. Methods: A green edge-computing task offloading strategy based on the Markov Decision Process (MDP) and Q-learning is proposed. The strategy accounts for constraints such as computing frequency, transmission power, and carbon emissions. Based on a cloud-edge-device collaborative computing model, the carbon-emission optimization problem is transformed into a mixed-integer linear programming model, which is solved with MDP and Q-learning; the convergence, carbon emissions, and total latency are compared against a random allocation algorithm, the Q-learning algorithm, and the SARSA (state-action-reward-state-action) algorithm. Results: Compared with existing offloading strategies, the task-scheduling algorithm of the new strategy converges 5% and 2% faster than SARSA and Q-learning, respectively, showing better convergence; the system's carbon-emission cost is reduced by 8% and 22% relative to Q-learning and SARSA; taking the number of terminals into account, the new strategy reduces the terminal count by 6% and 7% relative to Q-learning and SARSA; and the total system computation latency is clearly lower than that of the other algorithms, reduced by 27%, 14%, and 22% compared with the random allocation, Q-learning, and SARSA algorithms, respectively. Conclusion: The strategy can reasonably optimize the offloading of computing tasks and resource allocation, balance latency against energy consumption, and reduce system carbon emissions.
Keywords: carbon emissions; edge computing; reinforcement learning; Markov decision process; task offloading
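The MDP-plus-Q-learning formulation described above can be illustrated with a tabular sketch in which the actions are local, edge, or cloud execution and the reward penalizes a weighted sum of delay, energy, and carbon cost. The toy environment, weights, and state encoding below are placeholders, not the paper's model.

```python
import numpy as np

n_states, n_actions = 20, 3            # actions: 0=local, 1=edge, 2=cloud
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Toy environment: returns (next_state, reward); replace with a real system model."""
    delay = [3.0, 1.5, 2.0][action] * (1 + state / n_states)
    energy = [1.0, 0.6, 0.4][action]
    carbon = [0.2, 0.5, 0.8][action]
    reward = -(0.5 * delay + 0.3 * energy + 0.2 * carbon)   # weighted cost as negative reward
    return rng.integers(n_states), reward

state = rng.integers(n_states)
for _ in range(10_000):
    # Epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.argmax(axis=1))                # learned offloading choice per state
```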
Partial observation learning-based task offloading and spectrum allocation in UAV collaborative edge computing (Cited by: 1)
14
Authors: Chaoqiong Fan, Xinyu Wu, Bin Li, Chenglin Zhao. Digital Communications and Networks, CSCD, 2024, No. 6, pp. 1635-1643 (9 pages).
Capable of flexibly supporting diverse applications and providing computation services, the Mobile Edge Computing (MEC)-assisted Unmanned Aerial Vehicle (UAV) network is emerging as an innovative paradigm. In this paradigm, the heterogeneous resources of the network, including computing and communication resources, should be allocated properly to reduce computation and communication latency as well as energy consumption. However, most existing works focus solely on optimization with global information, which is generally difficult to obtain in real-world scenarios. In this paper, fully considering the incomplete information resulting from diverse types of tasks, we study the joint task offloading and spectrum allocation problem in a UAV network where free UAV nodes serve as helpers for cooperative computation. The objective is to jointly optimize the offloading mode, collaboration pairing, and channel allocation to minimize the weighted network cost. To achieve this with only partial observation, an extensive-form game is introduced to reformulate the problem, and a regret learning-based scheme is proposed to reach the equilibrium solution. With its retrospective improvement property and the concept of information sets, the designed algorithm is capable of handling incomplete information and obtaining more precise allocation patterns for diverse tasks. Numerical results show that the proposed algorithm outperforms the benchmarks across various settings.
Keywords: UAV networks; edge computing; task offloading; spectrum allocation; partial observation; regret learning
Research on an Improved Q-Learning Dynamic Obstacle Avoidance Algorithm for Logistics Robots (Cited by: 1)
15
Authors: 王力, 赵全海, 黄石磊. 计算机测量与控制 (Computer Measurement & Control), 2025, No. 3, pp. 267-274 (8 pages).
To improve the autonomous navigation and obstacle avoidance of autonomous mobile robots (AMRs) in complex environments, and to address the slow convergence and insufficiently optimized path planning of the traditional Q-Learning algorithm in dynamic environments, this study introduces a fuzzy annealing algorithm to optimize the path nodes and search paths of Q-Learning, removing redundant nodes and unnecessary turns. To better balance exploration and exploitation in Q-Learning, a greedy method is proposed to optimize the search strategy, and an improved dynamic window approach (DWA) is used to refine path nodes and smooth acceleration for local path planning, improving the search performance and efficiency of the improved Q-Learning algorithm in AMR dynamic obstacle avoidance. The results show that the improved Q-Learning algorithm effectively optimizes the search path and avoids both dynamic and static obstacles well, keeping a distance margin of at least 1 m over the other algorithms. Its obstacle-avoidance trajectories on local paths are closer to the expected values, with a maximum search time of no more than 3 s, outperforming the other algorithms; across different scenarios, its obstacle-avoidance path length and motion time are both reduced by more than 10%, and the obstacle-avoidance success rate exceeds 90%. The method can meet the demands of engineering fields such as smart warehousing and intelligent manufacturing for efficient and safe logistics-robot operation.
Keywords: logistics robot; Q-learning algorithm; DWA; multi-objective planning; obstacles; obstacle avoidance
Improving Multiple Sclerosis Disease Prediction Using Hybrid Deep Learning Model
16
Authors: Stephen Ojo, Moez Krichen, Meznah A. Alamro, Alaeddine Mihoub, Gabriel Avelino Sampedro, Jaroslava Kniezova. Computers, Materials & Continua, SCIE/EI, 2024, No. 10, pp. 643-661 (19 pages).
Myelin damage and a wide range of symptoms are caused by the immune system targeting the central nervous system in Multiple Sclerosis (MS), a chronic autoimmune neurological condition. It disrupts signals between the brain and body, causing symptoms including tiredness, muscle weakness, and difficulty with memory and balance. Traditional methods for detecting MS are less precise and time-consuming, which is a major gap in addressing this problem. This gap has motivated the investigation of new methods to improve the consistency and accuracy of MS detection. This paper proposes a novel approach named FAD, consisting of a Deep Neural Network (DNN) fused with an Artificial Neural Network (ANN), to detect MS more efficiently and accurately, utilizing regularization to combat over-fitting. We use gene expression data for MS research from the GEO GSE17048 dataset. The dataset is preprocessed by performing encoding, standardization using a min-max scaler, and feature selection using Recursive Feature Elimination with Cross-Validation (RFECV) to optimize and refine the dataset. Meanwhile, for experimentation on the dataset, another deep-learning hybrid model is integrated with different ML models, including Random Forest (RF), Gradient Boosting (GB), XGBoost (XGB), K-Nearest Neighbors (KNN), and Decision Tree (DT). Results reveal that FAD performed exceptionally well on the dataset, achieving an accuracy of 96.55% and an F1-score of 96.71%. The proposed FAD approach achieves remarkable results with better accuracy than previous studies.
Keywords: multiple sclerosis (MS); machine learning; deep learning; artificial neural network; healthcare
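The preprocessing pipeline named above (min-max scaling followed by Recursive Feature Elimination with Cross-Validation) might be sketched with scikit-learn as follows; synthetic data stands in for the GSE17048 gene-expression matrix, and the logistic-regression estimator inside RFECV is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a (samples x genes) expression matrix with binary labels
X, y = make_classification(n_samples=200, n_features=50, n_informative=8, random_state=0)
X = MinMaxScaler().fit_transform(X)                       # scale each feature to [0, 1]

# Recursively drop the least useful features, keeping the count that maximizes CV F1
selector = RFECV(estimator=LogisticRegression(max_iter=1000),
                 step=5, cv=5, scoring="f1")
X_selected = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)             # reduced matrix fed to the classifier
```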
A Robust Approach for Multi Classification-Based Intrusion Detection through Stacking Deep Learning Models
17
Author: Samia Allaoua Chelloug. Computers, Materials & Continua, SCIE/EI, 2024, No. 6, pp. 4845-4861 (17 pages).
Intrusion detection is a predominant task that monitors and protects the network infrastructure. Therefore, many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection. In particular, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) dataset is an extensively used benchmark for evaluating intrusion detection systems (IDSs), as it incorporates various network traffic attacks. It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models, but the performance of these models often decreases when evaluated on new attacks. This has led to the utilization of deep learning techniques, which have shown significant potential for processing large datasets and thereby improving detection accuracy. For that reason, this paper focuses on the role of stacking deep learning models, including a convolutional neural network (CNN) and a deep neural network (DNN), in improving the intrusion detection rate on the NSL-KDD dataset. Each base model is trained on the NSL-KDD dataset to extract significant features. Once the base models have been trained, the stacking process proceeds to the second stage, where a simple meta-model is trained on the predictions generated by the proposed base models. Combining the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate. Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection. The performance of the ensemble of base models, combined with the meta-model, exceeds the performance of the individual models. Our stacking model has attained an accuracy of 99% and an average F1-score of 93% in the multi-classification scenario. Besides, the training time of the proposed ensemble model is lower than that of the benchmark techniques, demonstrating its efficiency and robustness.
Keywords: intrusion detection; multi-classification; deep learning; stacking; NSL-KDD
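The two-stage stacking idea described above (base learners first, then a simple meta-model trained on their predictions) can be sketched with scikit-learn. Here two MLPs stand in for the paper's CNN and DNN base models and logistic regression serves as the meta-model; the data is synthetic rather than NSL-KDD.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic multi-class traffic stand-in (3 attack classes)
X, y = make_classification(n_samples=2000, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("deep_mlp", MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300)),
                ("wide_mlp", MLPClassifier(hidden_layer_sizes=(256,), max_iter=300))],
    final_estimator=LogisticRegression(max_iter=1000),   # meta-model over base predictions
    cv=3)                                                 # out-of-fold predictions for the meta-model
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))
```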
Multi-tasking to Address Diversity in Language Learning
18
Author: 雷琨. 海外英语 (Overseas English), 2014, No. 21, pp. 98-99, 103 (3 pages).
With the focus now placed on the learner, more attention is given to his learning style, multiple intelligences, and developing learning strategies to enable him to make sense of and use the target language appropriately in varied contexts and with different uses of the language. To attain this, the teacher is tasked with designing, monitoring, and processing language learning activities for students to carry out, and in the process learn by doing and by reflecting on the learning process they went through as they interacted socially with each other. This paper describes a task named "The Fishbowl Technique", found to be effective in large ESL classes at the secondary level in the Philippines.
Keywords: multi-tasking; diversity; learning style; the fishbowl
A Distributed Cooperative Dynamic Task Planning Algorithm for Multiple Satellites Based on Multi-agent Hybrid Learning (Cited by: 16)
19
Authors: WANG Chong, LI Jun, JING Ning, WANG Jun, CHEN Hao. Chinese Journal of Aeronautics, SCIE/EI/CAS/CSCD, 2011, No. 4, pp. 493-505 (13 pages).
Traditionally, heuristic re-planning algorithms are used to tackle the problem of dynamic task planning for multiple satellites. However, traditional heuristic strategies depend on the concrete tasks, which often affects the optimality of the results. Noticing that the historical information of cooperative task planning impacts later planning results, we propose a hybrid learning algorithm for dynamic multi-satellite task planning, based on multi-agent reinforcement learning with policy iteration and on transfer learning. The reinforcement learning strategy of each satellite is described with neural networks. The policy neural network individuals with the best topological structure and weights are found by applying co-evolutionary search iteratively. To avoid the failure of historical learning caused by randomly occurring observation requests, a novel approach is proposed to balance the quality and efficiency of task planning, which converts the historical learning strategy into the current initial learning strategy by applying a transfer learning algorithm. The simulations and analysis show the feasibility and adaptability of the proposed approach, especially for situations with randomly occurring observation requests.
Keywords: multiple satellites; dynamic task planning problem; multi-agent systems; reinforcement learning; neuroevolution of augmenting topologies; transfer learning
Q-Learning-Based Coverage Path Planning for Dual UAVs
20
Authors: 陈佳雨, 李文, 李泰融, 李志茹, 王子怡, 陈鹏云. 遥测遥控 (Journal of Telemetry, Tracking and Command), 2025, No. 4, pp. 96-104 (9 pages).
The goal of coverage path planning is to ensure that UAVs achieve complete coverage of a target area. In previous studies, each UAV was responsible for covering its own sub-region; in this study, two UAVs work cooperatively across the entire search area, which allows the coverage task to be carried out more flexibly while improving coverage efficiency. To address the high total planning cost that traditional methods tend to incur when solving UAV coverage path planning, this paper proposes a dual-UAV coverage path planning algorithm based on Q-Learning. To save the time required for the UAVs to complete the coverage search task, a grid-based rotating region partition algorithm is adopted to minimize the area to be searched. A UAV coverage path planning model is established to cast path planning as a multi-objective optimization problem, and the Double Q-Learning algorithm is used to balance global search against local exploitation, iteratively optimizing the path with respect to a total cost function that jointly considers distance cost and turning cost. Simulation results show that the paths planned by the proposed algorithm enable the two UAVs to achieve complete coverage of different target areas at a lower total cost.
Keywords: coverage path planning; dual UAVs; Double Q-learning; cooperative control; rotating region; multi-objective function
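The Double Q-Learning update named above maintains two value tables, one selecting the greedy next action and the other evaluating it, which reduces the overestimation bias of plain Q-learning. The tabular sketch below uses a toy grid environment as a stand-in for the dual-UAV coverage problem; all parameters are illustrative.

```python
import numpy as np

n_states, n_actions = 25, 4            # 5x5 grid; actions: up/down/left/right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q_a, Q_b = np.zeros((n_states, n_actions)), np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Toy dynamics: random next cell; reward favours reaching the far corner."""
    next_state = rng.integers(n_states)
    return next_state, (1.0 if next_state == n_states - 1 else -0.05)

state = rng.integers(n_states)
for _ in range(20_000):
    # Act greedily with respect to the sum of both tables (epsilon-greedy exploration)
    q_sum = Q_a[state] + Q_b[state]
    action = rng.integers(n_actions) if rng.random() < epsilon else int(q_sum.argmax())
    next_state, reward = step(state, action)
    if rng.random() < 0.5:             # update A: A picks the greedy action, B evaluates it
        best = int(Q_a[next_state].argmax())
        Q_a[state, action] += alpha * (reward + gamma * Q_b[next_state, best] - Q_a[state, action])
    else:                              # symmetric update of B
        best = int(Q_b[next_state].argmax())
        Q_b[state, action] += alpha * (reward + gamma * Q_a[next_state, best] - Q_b[state, action])
    state = next_state
```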