This study investigates sarcasm detection in text using a dataset of 8095 sentences compiled from the MUStARD and HuggingFace repositories, balanced across sarcastic and non-sarcastic classes. A sequential baseline model (LSTM) is compared with transformer-based models (RoBERTa and XLNet) integrated with attention mechanisms. Transformers were chosen for their proven ability to capture long-range contextual dependencies, whereas the LSTM serves as a traditional benchmark for sequential modeling. Experimental results show that RoBERTa achieves 0.87 accuracy, XLNet 0.83, and LSTM 0.52. These findings confirm that transformer architectures significantly outperform recurrent models in sarcasm detection. Future work will incorporate multimodal features and error analysis to further improve robustness.
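The attention mechanism that both RoBERTa and XLNet build on can be illustrated with a minimal NumPy sketch of scaled dot-product attention. This is illustrative only; the function name and toy tensor shapes are assumptions, not details from the study:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, the core operation behind the
    attention mechanisms in transformer models such as RoBERTa and XLNet."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query tokens, embedding dim 8
K = rng.normal(size=(6, 8))   # 6 key tokens
V = rng.normal(size=(6, 8))   # one value vector per key
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` is a probability distribution over the 6 key tokens, so every query token's output is a context-weighted mixture of the value vectors; capturing such long-range mixtures is what motivates the transformer choice here.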
For industrial assembly tasks, particularly peg-in-hole assembly of irregular workpieces, learning-based methods suffer from low-quality early samples and unstable training. To address this, a Soft Actor-Critic (SAC) algorithm is proposed that integrates an Attraction-Repulsion Model (ARM) guidance mechanism with a Long Short-Term Memory (LSTM) network. First, to address low exploration efficiency in early training, an ARM-based policy guidance mechanism is proposed that uses target position information to steer the manipulator's motion and accelerate convergence. Second, the algorithm's policy and value networks are augmented with LSTM layers to exploit historical information, strengthening policy learning and improving the algorithm's convergence speed and stability. Simulation results show that the proposed algorithm performs remarkably well on a planetary-reducer central-shaft assembly task, reaching an assembly success rate of 99.4%; compared with the standard SAC algorithm, the average maximum contact force and torque are reduced by 68.8% and 79.2%, respectively. In a physical environment the assembly success rate exceeds 95%, with maximum contact force and torque below 10 N and 1.5 N·m, validating the algorithm's effectiveness.
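The attraction-repulsion guidance idea can be sketched as an artificial potential field: an attractive force pulls the end-effector toward the target pose, while a repulsive term pushes it away from nearby geometry. Below is a minimal 2-D sketch; the function name, gain values, and the exact force form are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def arm_force(pos, goal, obstacle, k_att=1.0, k_rep=0.5, rho0=1.0):
    """Guidance force: linear attraction toward the goal, plus a
    repulsion term active only within radius rho0 of the obstacle.
    Gains and force form are illustrative assumptions."""
    f_att = k_att * (goal - pos)
    diff = pos - obstacle
    rho = np.linalg.norm(diff)
    if rho < rho0:
        f_rep = k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    else:
        f_rep = np.zeros_like(pos)
    return f_att + f_rep

# Guide a point toward the goal with small force-following steps.
pos = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
obstacle = np.array([1.0, 0.6])
for _ in range(300):
    pos = pos + 0.05 * arm_force(pos, goal, obstacle)
```

In the paper this kind of goal-directed force guides early exploration so SAC collects informative samples from the start, rather than replacing the learned policy.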
Because operating conditions differ markedly between wells, the feature distributions of abnormal vibrations are inconsistent across wells, and traditional monitoring methods trained on single-well data adapt poorly to cross-well scenarios. Taking stick-slip vibration as a case study, this work compares the characteristics of stick-slip vibration data under different operating conditions and proposes a cross-well abnormal-vibration recognition method combining a deep discriminative transfer learning network (DDTLN) with a BO-Transformer-LSTM model. Near-bit vibration data are fed into the DDTLN model, where convolutional layers and an improved joint distribution adaptation (IJDA) mechanism reduce inter-domain feature differences and extract cross-domain features; the extracted features are then passed to the BO-Transformer-LSTM model to mine temporal information and perform efficient cross-well classification. Experimental results show that vibration signals differ significantly between wells under different operating conditions and that traditional methods classify poorly across domains; after DDTLN processing, the features of different domains are well aligned, and cross-domain recognition accuracy reaches 91.5%. The DDTLN-BO-Transformer-LSTM model effectively solves the cross-well recognition problem, with classification accuracy of up to 96.7%, significantly outperforming traditional single-well methods and generalizing better. This study offers a new approach to recognizing downhole abnormal vibrations in cross-well scenarios.
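The effect of the feature-alignment step can be illustrated with maximum mean discrepancy (MMD), a common distribution-distance measure of the kind that joint distribution adaptation minimizes. This is a sketch on synthetic Gaussian "features"; DDTLN's actual IJDA criterion is more elaborate:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel: a simple
    scalar distance between two sets of feature vectors."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(1)
src      = rng.normal(0.0, 1.0, size=(100, 4))  # "source well" features
tgt_far  = rng.normal(2.0, 1.0, size=(100, 4))  # shifted, unaligned target
tgt_near = rng.normal(0.0, 1.0, size=(100, 4))  # aligned target
```

Here `rbf_mmd2(src, tgt_near)` comes out much smaller than `rbf_mmd2(src, tgt_far)`: driving such a discrepancy toward zero is what "aligning the features of different domains" means in the abstract.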