Funding: supported by the National Basic Research Program of China (973 Program) (No. 2015CB352502), the National Natural Science Foundation of China (No. 61573026), and the Beijing Natural Science Foundation (No. L172037).
Abstract: Fine-grained image classification, which aims to distinguish images with subtle distinctions, is a challenging task for two main reasons: the lack of sufficient training data for every class and the difficulty of learning discriminative features for representation. In this paper, to address these two issues, we propose a two-phase framework for recognizing images from unseen fine-grained classes, i.e., zero-shot fine-grained classification. In the first, feature learning phase, we fine-tune deep convolutional neural networks using the hierarchical semantic structure among fine-grained classes to extract discriminative deep visual features. Meanwhile, a domain adaptation structure is introduced into the networks to mitigate the domain shift from training data to test data. In the second, label inference phase, a semantic directed graph is constructed over the attributes of the fine-grained classes. Based on this graph, we develop a label propagation algorithm to infer the labels of images in the unseen classes. Experimental results on two benchmark datasets demonstrate that our model outperforms state-of-the-art zero-shot learning models. In addition, the features obtained by our feature learning model also yield significant gains when used by other zero-shot learning models, which shows the flexibility of our model in zero-shot fine-grained classification.
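As a rough illustration of the label inference phase described above, the sketch below (not the authors' code) propagates classifier scores over a directed graph whose nodes are classes and whose edges link each class to its nearest neighbors in attribute space; the k-nearest-neighbor construction, the exponential edge weights, and the damping factor are all simplifying assumptions.

```python
import numpy as np

def propagate_labels(seen_attr, unseen_attr, seen_scores, k=3, alpha=0.8, iters=20):
    """Toy label propagation on a semantic graph built from class attributes.

    seen_attr:    (S, A) attribute vectors of seen classes
    unseen_attr:  (U, A) attribute vectors of unseen classes
    seen_scores:  (N, S) classifier scores of N test images over seen classes
    Returns an (N, U) score matrix over the unseen classes.
    """
    attrs = np.vstack([seen_attr, unseen_attr])
    attrs = attrs / np.linalg.norm(attrs, axis=1, keepdims=True)
    sim = attrs @ attrs.T                      # cosine similarity between classes
    np.fill_diagonal(sim, -np.inf)             # no self-edges

    # directed k-NN graph in attribute space, row-normalized as a transition matrix
    W = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nn = np.argsort(sim[i])[-k:]
        W[i, nn] = np.exp(sim[i, nn])          # positive edge weights (assumption)
    W = W / W.sum(axis=1, keepdims=True)

    S = seen_attr.shape[0]
    init = np.hstack([seen_scores, np.zeros((seen_scores.shape[0], unseen_attr.shape[0]))])
    F = init.copy()
    for _ in range(iters):                     # F <- alpha * F W^T + (1 - alpha) * init
        F = alpha * (F @ W.T) + (1 - alpha) * init
    return F[:, S:]                            # scores over unseen classes only
```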
Abstract: The goal of zero-shot recognition is to classify instances from classes that have never been seen during training, which requires building a bridge between seen and unseen classes through a semantic embedding space. Learning this semantic embedding space therefore plays an important role in zero-shot recognition. In existing work, the semantic embedding space is mostly formed by user-defined attribute vectors, but the discriminative information contained in such vectors is limited. In this paper, we propose to automatically learn an additional latent attribute space to produce a more generalized and discriminative semantic embedding space. To alleviate the bias problem, both the user-defined attribute vectors and the latent attribute space are optimized by adversarial learning with auto-encoders. We also propose to reconstruct semantic patterns produced by explanatory graphs, which makes the semantic embedding space more sensitive to useful semantic information and less sensitive to useless information. The proposed method is evaluated on the AwA2 and CUB datasets, and the results show that it achieves superior performance.
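A minimal PyTorch-style sketch of the general idea of pairing an auto-encoder with an adversarial discriminator over attribute codes follows. It is purely illustrative: the layer sizes, loss weights, and loss form are assumptions, not the paper's architecture, and the explanatory-graph reconstruction term is omitted.

```python
import torch
import torch.nn as nn

class AttrAutoEncoder(nn.Module):
    """Encodes image features into user-defined + latent attribute codes and reconstructs them."""
    def __init__(self, feat_dim=2048, attr_dim=85, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                     nn.Linear(512, attr_dim + latent_dim))
        self.decoder = nn.Sequential(nn.Linear(attr_dim + latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, feat_dim))

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)

class Discriminator(nn.Module):
    """Tries to tell real class attribute vectors from encoded ones (adversarial regularizer)."""
    def __init__(self, attr_dim=85):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(attr_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, a):
        return self.net(a)

def losses(model, disc, feats, class_attrs):
    code, recon = model(feats)
    pred_attr = code[:, :class_attrs.shape[1]]            # user-defined attribute part of the code
    rec_loss = nn.functional.mse_loss(recon, feats)        # reconstruction preserves feature semantics
    attr_loss = nn.functional.mse_loss(pred_attr, class_attrs)
    # adversarial terms: discriminator scores real attributes high, encoded attributes low
    d_loss = (nn.functional.softplus(-disc(class_attrs)).mean()
              + nn.functional.softplus(disc(pred_attr.detach())).mean())
    g_loss = nn.functional.softplus(-disc(pred_attr)).mean()
    return rec_loss + attr_loss + g_loss, d_loss            # generator loss, discriminator loss
```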
Abstract: Zero-shot image classification addresses the problem of disjoint training and test classes, and human-annotated attributes are a commonly used form of auxiliary knowledge for realizing it. To assist experts in designing the class-attribute matrix, an interactive construction method is proposed that simplifies this otherwise tedious and poorly guided process. First, a concept-based deep-learning interpretability method is used to extract understandable attribute information from the training-set images. Then, a multi-view collaborative interaction scheme is adopted to explore and analyze the importance of the extracted attributes. The system provides both global and local modes to help users design attribute values for the test-set classes. Finally, a case study on the Animals with Attributes2 dataset and a user evaluation experiment using a Likert scale verify the effectiveness and practicality of the design method, showing that it helps expert users complete class-attribute construction efficiently and conveniently.
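For context on how a finished class-attribute matrix is typically consumed, here is a small sketch of standard attribute-based zero-shot prediction (not the interactive system itself): each test image's predicted attribute vector is matched against the expert-designed attribute rows of the unseen classes.

```python
import numpy as np

def zero_shot_predict(pred_attrs, class_attr_matrix, class_names):
    """Assign each image to the unseen class whose attribute row best matches the
    attribute vector predicted for that image (cosine similarity).

    pred_attrs:        (N, A) attribute scores predicted by a model for N test images
    class_attr_matrix: (C, A) expert-designed class-attribute matrix of unseen classes
    class_names:       list of C class names
    """
    a = pred_attrs / np.linalg.norm(pred_attrs, axis=1, keepdims=True)
    m = class_attr_matrix / np.linalg.norm(class_attr_matrix, axis=1, keepdims=True)
    sim = a @ m.T                               # (N, C) image-to-class similarity
    return [class_names[i] for i in sim.argmax(axis=1)]
```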
Abstract: Objective: Defect detection in power equipment inspection images plays an important role in improving the safety of power transmission and the reliability of grid operation. However, because constructing the corresponding training datasets is costly, traditional supervised learning methods are difficult to apply to this task. Moreover, inspection images usually contain complex and diverse backgrounds that severely interfere with defect detection. Method: Based on a vision-language model combined with text prompts, a zero-shot defect detection model for power equipment inspection images is proposed. The model contains multiple dual-expert modules: after the text and visual features are obtained from the vision-language model, they are processed and fused by these modules to produce pixel-level defect detection results. A power equipment inspection image dataset with pixel-level mask annotations is also constructed for a comprehensive evaluation. Results: On the constructed test set, compared with SAA+ (segment any anomaly+), AnomalyGPT, WinCLIP (window-based CLIP), PaDiM (patch distribution modeling), and PatchCore, the model improves pixel-level defect segmentation by 18.1% AUROC (area under the receiver operating characteristic curve) and 26.1% F1-max (F1 score at the optimal threshold) on average, and improves image-level defect classification by 20.2% AUROC and 10.0% AP (average precision) on average. For every type of power equipment in the dataset, the model achieves the best pixel-level segmentation performance. Ablation experiments further confirm that the dual-expert module contributes significantly to detection accuracy. Conclusion: The model works in a zero-shot manner, avoiding the high cost of constructing inspection image datasets, and the proposed dual-expert module reduces the interference caused by complex background regions in inspection images.
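To make the pixel-level zero-shot scoring concrete, the sketch below shows a generic CLIP-style anomaly map computed from patch-text similarities; it is not the paper's dual-expert module, and the prompt pair, the temperature, and the feature interfaces are assumptions.

```python
import numpy as np

def pixel_anomaly_map(patch_feats, text_feat_normal, text_feat_defect, hw):
    """Generic zero-shot anomaly map from a vision-language model (illustrative only).

    patch_feats:      (P, D) L2-normalized visual features of P image patches
    text_feat_normal: (D,)   normalized text embedding of a 'normal <object>' prompt
    text_feat_defect: (D,)   normalized text embedding of a 'defective <object>' prompt
    hw:               (H, W) patch-grid shape, with P == H * W
    Returns an (H, W) map of per-patch defect probabilities.
    """
    logit_n = patch_feats @ text_feat_normal            # similarity to the normal prompt
    logit_d = patch_feats @ text_feat_defect            # similarity to the defect prompt
    logits = np.stack([logit_n, logit_d], axis=1) * 100.0   # CLIP-like temperature (assumption)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = probs / probs.sum(axis=1, keepdims=True)
    return probs[:, 1].reshape(hw)                      # probability of the 'defect' prompt per patch

def image_score(anomaly_map):
    """A common image-level heuristic: take the maximum of the pixel-level map."""
    return float(anomaly_map.max())
```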
Abstract: To address color distortion and the loss of thermal-target details in infrared and visible image fusion, a zero-shot fusion method based on fusion curves (Zero-Shot Infrared and Visible Image Fusion Based on Fusion Curve, ZSFuCu) is proposed. First, the fusion task is reformulated as image-specific curve estimation with a deep network, where a pixel-wise nonlinear mapping enhances thermal-target textures while preserving color characteristics. Then, a multi-dimensional visual perception loss is designed that imposes constraints on contrast enhancement, color preservation, and spatial continuity, jointly optimizing the high-frequency information and color distribution of the fused image while retaining structural features and key information. Finally, a zero-shot training strategy is adopted: the parameters are adaptively optimized using only a single infrared-visible image pair, giving the method strong robustness to fusion under different illumination conditions. Experiments show that ZSFuCu has clear advantages in target saliency, detail richness, and color naturalness, demonstrating both effectiveness and practicality.
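The sketch below illustrates the general shape of curve-based fusion trained on a single image pair: a small network predicts per-pixel curve parameters and the fused result is obtained by iteratively bending the visible image toward the infrared content. The quadratic curve form (Zero-DCE style), the network layout, and the loss weights are assumptions, not the exact ZSFuCu formulation.

```python
import torch
import torch.nn as nn

class CurveFusion(nn.Module):
    """Tiny curve-estimation fusion network: predicts per-pixel curve parameters from
    the concatenated visible (3-channel) and infrared (1-channel) inputs and applies
    an iterative pixel-wise nonlinear adjustment driven by the infrared intensity."""
    def __init__(self, n_iter=4):
        super().__init__()
        self.n_iter = n_iter
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * n_iter, 3, padding=1), nn.Tanh())

    def forward(self, vis, ir):
        params = self.net(torch.cat([vis, ir], dim=1))      # (B, 3*n_iter, H, W) curve parameters
        fused = vis
        for i in range(self.n_iter):
            a = params[:, 3 * i:3 * (i + 1)]
            fused = fused + a * ir * (fused - fused ** 2)    # pixel-wise nonlinear mapping
        return fused.clamp(0, 1)

def perception_loss(fused, vis, ir, w=(1.0, 1.0, 0.1)):
    """Illustrative multi-term loss: contrast (track infrared intensity), color
    (stay close to the visible image), spatial continuity (total variation).
    The weights are placeholders, not the paper's values."""
    contrast = ((fused.mean(1, keepdim=True) - ir) ** 2).mean()
    color = ((fused - vis) ** 2).mean()
    tv = (fused[..., 1:, :] - fused[..., :-1, :]).abs().mean() \
       + (fused[..., :, 1:] - fused[..., :, :-1]).abs().mean()
    return w[0] * contrast + w[1] * color + w[2] * tv
```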
Abstract: Leveraging discrete neural audio codecs, large language models (LLMs) have been widely recognized as a promising approach to zero-shot text-to-speech (TTS). However, while sampling-based decoding strategies bring rich diversity to speech generation, they also introduce robustness problems such as mispronunciations, omissions, and repetitions. To address these issues, we propose VALL-E R, a robust and efficient zero-shot TTS system built upon VALL-E. Specifically, we introduce a phoneme monotonic alignment strategy that constrains acoustic tokens to strictly match their corresponding phonemes, strengthening the mapping between the phoneme and acoustic sequences and ensuring more precise alignment. In addition, we adopt a codec-merging approach that downsamples the discrete codes at the shallow quantization layer to reduce decoding computation while maintaining high-quality speech output. Benefiting from these strategies, VALL-E R achieves substantial gains in phoneme controllability and demonstrates strong robustness, with a word error rate approaching that of ground-truth speech. Moreover, the system requires fewer autoregressive inference steps, reducing inference time by more than 60% and greatly improving efficiency.
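As a toy illustration of what a monotonic phoneme-alignment constraint does during autoregressive decoding, the sketch below forces the aligned phoneme index to only stay in place or advance by one per frame, which rules out out-of-order repetitions and skipped phonemes. The step_fn interface (logits plus an advance probability) is hypothetical and is not VALL-E R's actual implementation.

```python
import torch

@torch.no_grad()
def monotonic_decode(step_fn, phonemes, max_frames=1000, eos_id=0):
    """Toy monotonic-alignment decoding loop.

    step_fn(prev_tokens, phoneme) -> (logits over acoustic codes, advance_prob)
    is a hypothetical interface: at every frame the model may either stay on the
    current phoneme or advance to the next one, so the phoneme-to-acoustic
    alignment is forced to be monotonic.
    """
    tokens, ptr = [], 0
    for _ in range(max_frames):
        logits, advance_prob = step_fn(tokens, phonemes[ptr])
        token = int(torch.distributions.Categorical(logits=logits).sample())
        if token == eos_id:
            break
        tokens.append(token)
        if advance_prob > 0.5 and ptr < len(phonemes) - 1:
            ptr += 1                       # move to the next phoneme, never backwards or skipping
    return tokens
```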