Abstract: To address the low accuracy of anomaly localization and anomaly-data classification in traditional network anomaly diagnosis algorithms under unsupervised conditions, a wireless network anomaly diagnosis method based on an improved Q-learning algorithm is designed. First, data flows of the wireless network are collected on the basis of ADUs (Asynchronous Data Units) and packet features are extracted. Then, a Q-learning model is built to explore the balance point between state values and reward values, and an SA (Simulated Annealing) algorithm is used to identify the next state accurately from a global perspective. Finally, the joint distribution probability of the training samples is determined to improve the approximation performance of the output values and strike a balance between exploration and cost. Test results show that the improved Q-learning algorithm achieves a mean network anomaly localization accuracy of 99.4% and also outperforms three traditional network anomaly diagnosis methods in classification accuracy and classification efficiency across different types of network anomalies.
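The core of the improved method above is a Q-learning update whose exploratory action choice is governed by a simulated-annealing acceptance rule instead of plain epsilon-greedy. The following is a minimal sketch of that combination; the toy chain environment, state count, rewards, and cooling schedule are illustrative assumptions, not the paper's actual setup:

```python
import math
import random

def sa_q_learning(n_states=5, n_actions=2, episodes=200,
                  alpha=0.5, gamma=0.9, t0=1.0, cooling=0.99, seed=0):
    """Q-learning where exploratory actions are accepted with a
    simulated-annealing (Metropolis) criterion on the Q-value gap."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    temp = t0
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:              # last state is terminal
            greedy = max(range(n_actions), key=lambda a: q[s][a])
            cand = rng.randrange(n_actions)
            # Metropolis rule: a worse candidate passes with prob e^(delta/T)
            delta = q[s][cand] - q[s][greedy]
            a = cand if delta >= 0 or rng.random() < math.exp(delta / temp) else greedy
            # toy dynamics: action 1 advances, action 0 stays; reward at goal
            s_next = min(s + 1, n_states - 1) if a == 1 else s
            r = 1.0 if s_next == n_states - 1 else -0.01
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
        temp = max(temp * cooling, 1e-3)      # anneal: exploit more over time
    return q

q = sa_q_learning()
```

As the temperature cools, the Metropolis test accepts fewer Q-value-worsening moves, so the policy shifts smoothly from global exploration toward greedy exploitation.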
Funding: supported by the National Natural Science Foundation of China (61771095, 62031007).
Abstract: The dwell scheduling problem for a multifunctional radar system is formulated as a corresponding optimization problem. To solve the resulting optimization problem, the dwell scheduling process within a scheduling interval (SI) is modeled as a Markov decision process (MDP), where the state, action, and reward are specified for this dwell scheduling problem. In particular, the action is defined as scheduling the task on the left side, on the right side, or in the middle of the radar idle timeline, which effectively reduces the action space and accelerates the convergence of training. Through the above process, a model-free reinforcement learning framework is established. Then, an adaptive dwell scheduling method based on Q-learning is proposed, in which the Q-value table that has converged after training is used to guide the scheduling process. Simulation results demonstrate that, compared with existing dwell scheduling algorithms, the proposed one achieves better scheduling performance when the urgency criterion, the importance criterion, and the desired execution time criterion are considered comprehensively. The average running time shows that the proposed algorithm has real-time performance.
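The three-way action design above (schedule a dwell at the left edge, the right edge, or the middle of an idle interval) is what keeps the action space, and hence the Q table, small. A hypothetical sketch of the placement step follows; the interval values, desired start time, and deviation measure are illustrative assumptions, not the paper's definitions:

```python
LEFT, MIDDLE, RIGHT = 0, 1, 2

def place_task(idle_start, idle_end, duration, desired_start, action):
    """Place a dwell of `duration` inside the idle interval [idle_start,
    idle_end) per the chosen action; return (start, |start - desired|)."""
    if action == LEFT:
        start = idle_start
    elif action == RIGHT:
        start = idle_end - duration
    else:  # MIDDLE: center the dwell within the idle interval
        start = idle_start + (idle_end - idle_start - duration) / 2
    return start, abs(start - desired_start)

def greedy_from_q(q_row):
    """Pick the action with the largest Q value from a trained table row."""
    return max(range(3), key=lambda a: q_row[a])

# idle timeline [10, 30), a 4-unit dwell desired near t = 19
start, deviation = place_task(10, 30, 4, 19, MIDDLE)
```

With only three placement actions per state, the Q table stays compact regardless of timeline resolution, which is what accelerates training convergence in the abstract's framing.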
Abstract: Objective: To address the complex environment of biosafety laboratories, which combine enclosed spaces, obstacles of many shapes (spheres, cubes, cylinders, ellipsoids, etc.), and extremely demanding precision requirements, a manipulator trajectory planning and obstacle avoidance algorithm, QPSO, is proposed that fuses improved Q-learning with particle swarm optimization (PSO). Methods: QPSO adopts a two-layer optimization architecture. The upper layer uses an improved Q-learning algorithm for path decision-making, balancing exploration and exploitation through a Boltzmann exploration strategy with a nonlinear dynamic temperature; the lower layer optimizes the trajectory with a PSO variant that uses dynamic inertia weights and learning factors, combined with a law-of-cosines collision detection strategy to guarantee obstacle avoidance safety. To verify the feasibility of the proposed algorithm, performance analysis and obstacle avoidance tests were conducted, with comparisons against the standard PSO algorithm, a genetic algorithm, the firefly algorithm, and the improved rapidly-exploring random tree star (RRT*) algorithm. Results: Compared with standard PSO, the genetic algorithm, the firefly algorithm, and RRT*, the proposed QPSO algorithm shows significant advantages in convergence performance, trajectory length, and obstacle avoidance success rate, and achieves the maximum safety distance while ensuring the shortest path. Conclusion: The proposed QPSO algorithm effectively improves manipulator trajectory planning and obstacle avoidance in complex environments and can provide reliable technical support for automated experimental operations in biosafety laboratories and similar settings.
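The upper-layer exploration strategy described above, a Boltzmann (softmax) policy whose temperature decays nonlinearly over training, can be sketched as follows. The exponential cooling law here is one plausible nonlinear schedule chosen for illustration, not the paper's exact formula:

```python
import math
import random

def dynamic_temperature(episode, total, t_max=5.0, t_min=0.1):
    """Nonlinear (exponential) decay from t_max to t_min across training."""
    frac = episode / max(total - 1, 1)
    return t_max * (t_min / t_max) ** frac

def boltzmann_action(q_values, temperature, rng=random):
    """Sample an action with probability proportional to exp(Q / T)."""
    m = max(q_values)                      # subtract the max for stability
    weights = [math.exp((q - m) / temperature) for q in q_values]
    r = rng.random() * sum(weights)
    acc = 0.0
    for a, w in enumerate(weights):
        acc += w
        if r <= acc:
            return a
    return len(q_values) - 1
```

At high temperature the policy is near-uniform (exploration); as the temperature decays the probability mass concentrates on the argmax action (exploitation), which is the balance the abstract describes.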
Abstract: As the demand for agent path planning in complex dynamic environments grows, the limitations of the traditional Q-Learning algorithm in convergence speed, obstacle avoidance efficiency, and global optimization capability have become increasingly apparent. To address these shortcomings in path planning, this paper proposes an improved method that combines a dynamic learning rate, an adaptive exploration rate, and Monte Carlo Tree Search (MCTS). First, exponentially decaying learning and exploration rates are introduced to balance the algorithm's exploration capability in early training against policy stability in later stages. Second, MCTS is combined with Q-Learning, exploiting the global search capability of MCTS to optimize the Q-value update process. In addition, a heuristic function is incorporated into the reward mechanism to guide the agent toward the goal more efficiently. Experimental results show that the improved algorithm significantly outperforms the traditional algorithm in average step count, convergence speed, and stability, providing an efficient and robust solution for agent path planning in complex environments.
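The exponentially decaying learning and exploration rates mentioned above can be written down directly. A minimal sketch follows; the decay constants and bounds are illustrative assumptions, not values from the paper:

```python
import math

def decayed_rate(initial, minimum, decay, episode):
    """Exponential decay: rate_t = min + (init - min) * e^(-decay * t)."""
    return minimum + (initial - minimum) * math.exp(-decay * episode)

# dynamic learning rate: high early (fast adaptation), low late (stability)
alphas = [decayed_rate(0.9, 0.1, 0.01, t) for t in (0, 100, 500)]

# adaptive exploration rate: broad exploration early, near-greedy late
epsilons = [decayed_rate(1.0, 0.05, 0.005, t) for t in (0, 100, 500)]
```

The floor parameter keeps both rates from decaying to zero, so the agent never stops learning or exploring entirely, a common safeguard in dynamic environments.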
Abstract: With the development of economic globalization, distributed manufacturing is becoming more and more prevalent. Recently, the integrated scheduling of distributed production and assembly has attracted much attention. This research studies a distributed flexible job shop scheduling problem with assembly operations. Firstly, a mixed integer programming model is formulated to minimize the maximum completion time. Secondly, a Q-learning-assisted coevolutionary algorithm is presented to solve the model: (1) Multiple populations are developed to seek the required decisions simultaneously; (2) An encoding and decoding method based on problem features is applied to represent individuals; (3) A hybrid approach of heuristic rules and random methods is employed to acquire a high-quality population; (4) Three evolutionary strategies with crossover and mutation methods are adopted to enhance exploration capability; (5) Three neighborhood structures based on problem features are constructed, and a Q-learning-based iterative local search method is devised to improve exploitation ability, with the Q-learning approach intelligently selecting the better neighborhood structure. Finally, a group of instances is constructed for comparison experiments. The effectiveness of the Q-learning approach is verified by comparing the developed algorithm with a variant lacking the Q-learning method, and three renowned meta-heuristic algorithms are also compared against the developed algorithm. The comparison results demonstrate that the designed method exhibits better performance in coping with the formulated problem.
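Step (5) above, a Q-learning agent that decides which of the three neighborhood structures to apply next during local search, can be sketched generically. The single-state formulation, the reward defined as the improvement an operator delivers, and the toy operators themselves are assumptions for illustration; the paper's problem-specific neighborhoods are replaced here by abstract operators:

```python
import random

class NeighborhoodSelector:
    """Single-state Q-learning over k neighborhood operators, where the
    reward is the improvement achieved by the chosen operator."""
    def __init__(self, k, alpha=0.3, epsilon=0.2, seed=1):
        self.q = [0.0] * k
        self.alpha, self.epsilon = alpha, epsilon
        self.rng = random.Random(seed)

    def select(self):
        # epsilon-greedy: mostly pick the operator with the best Q value
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        # single-state TD update: Q tracks the operator's average payoff
        self.q[action] += self.alpha * (reward - self.q[action])

# toy local search: operator 2 yields the largest average improvement
sel = NeighborhoodSelector(k=3)
mean_gain = [0.5, 1.0, 3.0]                 # hypothetical improvements
for _ in range(300):
    a = sel.select()
    gain = mean_gain[a] + sel.rng.uniform(-0.3, 0.3)
    sel.update(a, gain)
best = max(range(3), key=lambda a: sel.q[a])
```

Over the iterations the selector concentrates on the operator that historically pays off most, which is the "intelligent selection of better neighborhood structures" the abstract attributes to Q-learning.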