Abstract: This study investigates sarcasm detection in text using a dataset of 8095 sentences compiled from MUStARD and HuggingFace repositories, balanced across sarcastic and non-sarcastic classes. A sequential baseline model (LSTM) is compared with transformer-based models (RoBERTa and XLNet), integrated with attention mechanisms. Transformers were chosen for their proven ability to capture long-range contextual dependencies, whereas the LSTM serves as a traditional benchmark for sequential modeling. Experimental results show that RoBERTa achieves 0.87 accuracy, XLNet 0.83, and the LSTM 0.52. These findings confirm that transformer architectures significantly outperform recurrent models in sarcasm detection. Future work will incorporate multimodal features and error analysis to further improve robustness.
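The attention mechanism underlying the transformer models in this abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic scaled dot-product attention in pure Python, with the function names and the toy vectors chosen purely for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: lists of equal-length float vectors.
    For each query, score every key, normalize with softmax,
    and return the weighted average of the values."""
    d = len(K[0])  # key dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Each output row is a convex combination of the value vectors, which is what lets attention-equipped models weigh distant context tokens (e.g. a sentiment word far from the sarcastic cue) when classifying a sentence.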
Abstract: Because operating conditions differ markedly between wells, the distribution of abnormal-vibration features is inconsistent across wells, and traditional monitoring methods built on single-well data struggle in cross-well scenarios. Taking stick-slip vibration as an example, this paper compares stick-slip vibration data features under different operating conditions and proposes a cross-well abnormal-vibration identification method that combines a deep discriminative transfer learning network (DDTLN) with a BO-Transformer-LSTM. Near-bit vibration data are fed into the DDTLN model, where convolutional layers and an improved joint distribution adaptation (IJDA) mechanism reduce inter-domain feature differences, enabling cross-domain feature extraction; the extracted features are then fed into the BO-Transformer-LSTM model to mine temporal information for efficient cross-well classification. Experimental results show that vibration signals differ significantly between wells under different operating conditions, and traditional methods classify poorly across domains; after DDTLN processing, features from different domains are well aligned, with cross-domain identification accuracy reaching 91.5%; the DDTLN-BO-Transformer-LSTM model effectively solves the cross-well identification problem, achieving a classification accuracy of up to 96.7%, significantly outperforming traditional single-well methods with better generalization. This work offers a new approach to identifying downhole abnormal vibration in cross-well scenarios.
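The domain-alignment idea behind DDTLN's joint distribution adaptation can be sketched with a standard ingredient of such methods: a maximum mean discrepancy (MMD) term that measures how far apart source-well and target-well feature distributions are. The paper's IJDA mechanism is not public, so this is only a generic RBF-kernel MMD estimator; the function names and the `gamma` bandwidth are assumptions for illustration:

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(source, target, gamma=1.0):
    """Squared MMD (biased estimator) between two feature sets.
    Near zero when the sets are drawn from the same distribution;
    large when their distributions differ."""
    def mean_kernel(A, B):
        return sum(rbf(a, b, gamma) for a in A for b in B) / (len(A) * len(B))
    return (mean_kernel(source, source)
            + mean_kernel(target, target)
            - 2 * mean_kernel(source, target))
```

In transfer-learning setups of this kind, a term like `mmd2(features_well_A, features_well_B)` is typically added to the training loss so the extractor learns well-invariant features before classification.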
Abstract: On-demand scaling of microservices is essential for improving cluster resource utilization, and its prerequisite is that the cluster can accurately predict resource demand. Rule-based reactive resource management remains the mainstream approach in industry, while the machine-learning-based load prediction methods from academia still suffer from insufficient accuracy. This paper therefore proposes a load prediction model based on the degree of dependency between microservices. A container dependency detection algorithm improved with DTW (Dynamic Time Warping) evaluates the degree of dependency between containers. The correlations between the metrics of strongly dependent containers are analyzed, and highly correlated metrics are selected as the model's input features. The prediction model adopts a Seq2Seq (Sequence to Sequence) encoder-decoder architecture combined with an attention mechanism and a residual LSTM to improve prediction accuracy and stability. Experiments show the model performs well: its error metrics MAE, RMSE, and MAPE drop by an average of 48%, 35%, and 51% relative to two other deep learning models, and it effectively predicts the short-term load of strongly dependent containers.
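The DTW distance at the core of the dependency detection step can be sketched with the classic dynamic-programming recurrence. This is a plain textbook DTW, not the paper's improved algorithm, and the function name and toy series are illustrative only:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two load time series.
    D[i][j] = cost of aligning a[:i] with b[:j]; each cell extends
    the cheapest of the three alignments (stretch a, stretch b, or step both)."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # a[i] repeats
                                 D[i][j - 1],      # b[j] repeats
                                 D[i - 1][j - 1])  # both advance
    return D[n][m]
```

Because DTW tolerates time shifts, a pair of containers whose load curves move together with a small lag (e.g. a frontend and the backend it calls) still gets a small distance, which is what makes it a reasonable basis for scoring container dependency.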