To address the high time and labor cost of methods that rely on additional hand-crafted feature extraction, the low accuracy of linear prediction of bearing degradation, and the temporal dependence inherent in time-series data, an end-to-end deep variational autoencoder combined with a long short-term memory network (E2E Deep VAE-LSTM) is proposed for bearing degradation prediction. By improving the structure of the VAE and combining it with an LSTM, the model can be trained and make predictions directly on datasets containing outliers; the system reconstruction error is used to characterize the bearing degradation trend, achieving nonlinear prediction of bearing degradation. Experimental results on three real datasets show that the E2E Deep VAE-LSTM model yields satisfactory predictions, with accuracy higher than several existing AE-type models and other methods, along with good generalization ability and resistance to overfitting.
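The abstract's core idea, using reconstruction error as a degradation indicator, can be illustrated without the paper's actual network. The sketch below is a minimal stand-in: a linear autoencoder (principal subspace via SVD) replaces the deep VAE-LSTM, and all signal shapes and sizes are made-up assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(n, degraded=False):
    """Simulate windowed vibration signals: degraded windows gain an
    impulsive harmonic component the healthy subspace cannot explain."""
    t = np.linspace(0, 1, 64)
    base = np.sin(2 * np.pi * 5 * t)
    windows = base + 0.1 * rng.standard_normal((n, 64))
    if degraded:
        windows += 0.8 * np.sign(np.sin(2 * np.pi * 37 * t))  # fault harmonics
    return windows

# "Train" the stand-in autoencoder: principal subspace of healthy windows.
healthy = make_windows(200)
mean = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
basis = vt[:3]  # keep 3 principal components

def reconstruction_error(x):
    """Project onto the healthy subspace; the leftover is the indicator."""
    centered = x - mean
    recon = centered @ basis.T @ basis
    return np.mean((centered - recon) ** 2, axis=1)

err_healthy = reconstruction_error(make_windows(50)).mean()
err_degraded = reconstruction_error(make_windows(50, degraded=True)).mean()
# The degraded windows reconstruct poorly, so the indicator rises with wear.
```

In the paper's setting the same logic applies, except the encoder/decoder is the deep VAE-LSTM and the error trend over time is what is predicted.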
Deep learning (DL) has shown superior performance on various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is widely used to decompose hyperspectral images (HSIs) owing to its powerful feature extraction and data reconstruction abilities. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To solve this problem, a hypergraph regularized deep autoencoder (HGAE) is proposed for unmixing. Firstly, the traditional AE architecture is improved into an unsupervised unmixing framework. Secondly, hypergraph learning is employed to reformulate the loss function, which expresses the high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, the L_(1/2) norm is further used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method outperforms several state-of-the-art unmixing algorithms.
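The loss structure the abstract describes, data fidelity plus a hypergraph smoothness term on abundances plus L_(1/2) sparsity, can be sketched on toy matrices. Everything below is a hypothetical miniature (sizes, weights `lam`/`mu`, and the hand-built hypergraph are assumptions); it shows the shape of the objective, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: P pixels, B spectral bands, R endmembers (all hypothetical).
P, B, R = 6, 4, 3
Y = rng.random((P, B))            # observed pixel spectra
E = rng.random((R, B))            # decoder weights ~ endmember signatures
A = rng.dirichlet(np.ones(R), P)  # abundances (rows sum to one)

# Tiny hand-built hypergraph: pixels {0,1,2} and {3,4,5} each form one
# hyperedge of locally neighboring, spectrally similar pixels.
H = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
Dv = np.diag(H.sum(axis=1))       # vertex degrees
De = np.diag(H.sum(axis=0))       # hyperedge degrees
L = Dv - H @ np.linalg.inv(De) @ H.T  # unnormalized hypergraph Laplacian

def hgae_loss(Y, A, E, lam=0.1, mu=0.01):
    recon = np.sum((Y - A @ E) ** 2)   # data fidelity
    smooth = np.trace(A.T @ L @ A)     # abundance consistency within hyperedges
    sparse = np.sum(np.abs(A) ** 0.5)  # L_(1/2) sparsity on abundances
    return recon + lam * smooth + mu * sparse

loss = hgae_loss(Y, A, E)
```

The smoothness term vanishes exactly when pixels sharing a hyperedge have identical abundances, which is the consistency the regularizer promotes.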
Large language models (LLMs) such as ChatGPT are widely used across domains for their strong natural language understanding and generation abilities. However, deep learning models are often vulnerable to adversarial example attacks. In natural language processing, research on adversarial example generation typically targets CNN-type models, RNN-type models, and Transformer-based pretrained models, while little work has examined the robustness of LLMs under adversarial attack or quantified criteria for evaluating it. Taking ChatGPT under Chinese adversarial attacks as an example, this work introduces the concept of offset average difference (OAD) and proposes an OAD-based quantifiable robustness metric for LLMs, the OAD-based robustness score (ORS). In a black-box attack setting, nine mainstream word-importance-based Chinese adversarial attack methods are selected to generate adversarial texts; attacking ChatGPT with these texts yields an attack success rate for each method, and the proposed ORS scores the LLM's robustness against each method based on that rate. Beyond ChatGPT, whose outputs are hard labels, an ORS variant for target models with soft-label outputs is designed from the attack success rate and the proportion of adversarial texts misclassified with high confidence. The scoring formula is further extended to evaluate adversarial text fluency, yielding an OAD-based fluency score (OFS); compared with traditional methods requiring human judges, the proposed OFS greatly reduces evaluation cost. Experiments on real-world Chinese news classification and sentiment classification datasets preliminarily show that, for text classification tasks, the robustness score of ChatGPT under adversarial attack is nearly 20% higher than that of Chinese BERT. Nevertheless, ChatGPT still makes incorrect predictions under attack, with attack success rates exceeding 40% in the worst case.
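The abstract does not reproduce the actual ORS formula, so the sketch below is a purely hypothetical illustration of the metric's general shape: each attack method's success rate is offset against the average over all methods (an "offset average difference"-style quantity), and lower success rates map to higher robustness scores. The nine rates are made up for illustration.

```python
def ors_illustration(success_rates):
    """Hypothetical robustness scoring from per-attack success rates in [0, 1].

    NOT the paper's formula: this only illustrates the idea of combining
    per-method success rates with their offsets from the average.
    """
    mean_rate = sum(success_rates) / len(success_rates)
    offsets = [r - mean_rate for r in success_rates]  # OAD-style offsets
    scores = [1.0 - r for r in success_rates]         # robustness per method
    return scores, offsets

# Nine word-importance attack methods (count from the abstract); the
# success-rate values themselves are invented for this illustration.
rates = [0.12, 0.20, 0.31, 0.18, 0.42, 0.25, 0.15, 0.38, 0.27]
scores, offsets = ors_illustration(rates)
```

Under this shape, the method with the highest success rate receives the lowest robustness score, matching the abstract's observation that attack success rates above 40% correspond to the model's weakest cases.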
Funding: National Natural Science Foundation of China (No. 62001098); Fundamental Research Funds for the Central Universities of Ministry of Education of China (No. 2232020D-33).