Abstract: To address the problems of low-quality early samples and unstable training that learning-based methods face in industrial assembly tasks, especially the assembly of irregular peg-in-hole workpieces, a Soft Actor-Critic (SAC) algorithm is proposed that integrates an Attraction-Repulsion Model (ARM) guidance mechanism with a Long Short-Term Memory (LSTM) network. First, to overcome the low exploration efficiency of early training, an ARM-based policy guidance mechanism is proposed that uses target position information to guide the motion of the manipulator and accelerate convergence. Second, the policy network and value network of the algorithm are improved with LSTM, which effectively exploits historical information, strengthens policy learning, and improves the convergence speed and stability of the algorithm. Simulation results show that the proposed algorithm achieves remarkable performance on the planetary-reducer central-shaft assembly task, with an assembly success rate of 99.4%; compared with the standard SAC algorithm, the average maximum contact force and torque are reduced by 68.8% and 79.2%, respectively. In a physical environment, the assembly success rate exceeds 95%, and the maximum contact force and torque are below 10 N and 1.5 N·m, respectively, verifying the effectiveness of the algorithm.
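The abstract does not give the ARM equations, but attraction-repulsion guidance is conventionally built on an artificial potential field: a linear attractive force toward the goal pose plus a repulsive force that activates only inside an obstacle's influence radius. The sketch below is a generic illustration of that idea, not the paper's implementation; all gains (`k_att`, `k_rep`) and the influence radius `rho0` are hypothetical parameters.

```python
import numpy as np

def arm_guidance_force(pos, goal, obstacle, k_att=1.0, k_rep=0.5, rho0=0.2):
    """Attraction-repulsion guidance force at position `pos` (3-vector):
    linear attraction toward `goal`, plus a repulsion away from `obstacle`
    that is active only within the influence radius `rho0`."""
    f_att = k_att * (goal - pos)            # attractive field: pull toward goal
    diff = pos - obstacle
    rho = np.linalg.norm(diff)              # distance to the obstacle
    if 1e-9 < rho < rho0:
        # classic repulsive-potential gradient, vanishing at rho == rho0
        f_rep = k_rep * (1.0 / rho - 1.0 / rho0) * (1.0 / rho**2) * (diff / rho)
    else:
        f_rep = np.zeros_like(pos)
    return f_att + f_rep
```

In a guided-exploration setting, a force like this can bias the policy's early actions toward the target hole, which is one plausible way a guidance term accelerates convergence before the learned policy takes over.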
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R195), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: With the increasing growth of online news, fake electronic news detection has become one of the most important paradigms of modern research. Traditional electronic news detection techniques generally struggle with contextual understanding, sequential dependencies, and/or data imbalance, which makes distinguishing between genuine and fabricated news a challenging task. To address this problem, we propose a novel hybrid architecture, T5-SA-LSTM, which synergistically integrates the T5 Transformer, for semantically rich contextual embeddings, with a Self-Attention-enhanced (SA) Long Short-Term Memory (LSTM) network. The LSTM is trained using the Adam optimizer, which provides faster and more stable convergence than Stochastic Gradient Descent (SGD) and Root Mean Square Propagation (RMSProp). The WELFake and FakeNewsPrediction datasets are used, consisting of labeled news articles with fake and real news samples. Tokenization and the Synthetic Minority Over-sampling Technique (SMOTE) are applied during preprocessing to ensure linguistic normalization and to address class imbalance. The incorporation of the self-attention mechanism enables the model to highlight critical words and phrases, thereby enhancing predictive accuracy. The proposed model is evaluated using accuracy, precision, recall (sensitivity), and F1-score as performance metrics. It achieved 99% accuracy on the WELFake dataset and 96.5% accuracy on the FakeNewsPrediction dataset, outperforming competitive schemes such as T5-SA-LSTM (RMSProp), T5-SA-LSTM (SGD), and several other models.
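The self-attention layer described above conventionally re-weights the LSTM's hidden states so that informative tokens dominate the pooled representation. The paper's exact formulation is not given in the abstract; the following is a minimal NumPy sketch of standard scaled dot-product self-attention over a sequence of hidden states, with queries, keys, and values all taken as the states themselves (projection matrices omitted for brevity).

```python
import numpy as np

def self_attention(h):
    """Scaled dot-product self-attention over hidden states h of shape (T, d).
    Returns a (T, d) matrix where each row is an attention-weighted mix of
    all time steps, letting salient tokens influence every position."""
    d = h.shape[-1]
    scores = h @ h.T / np.sqrt(d)                        # (T, T) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)         # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)                   # row-wise softmax weights
    return w @ h                                         # weighted sum of states
```

In a full model, the attended states would typically be pooled and passed to a classification head; learned query/key/value projections would replace the identity mapping used here.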