Funding: Supported by the National Natural Science Foundation of China (NSFC) under grant number 61873274.
Abstract: Contrastive self-supervised representation learning on attributed graph networks with graph neural networks has attracted considerable research interest recently. However, two challenges remain. First, most real-world systems involve multiple relations, where entities are linked by different types of relations and each relation forms one view of the graph network. Second, the rich multi-scale information (structure-level and feature-level) of the graph network can serve as self-supervised signals, but it is not yet fully exploited. A novel contrastive self-supervised representation learning framework on attributed multiplex graph networks with multi-scale information (named CoLM²S) is presented in this study. It mainly contains two components: intra-relation contrastive learning and inter-relation contrastive learning. Specifically, a contrastive self-supervised representation learning framework on attributed single-layer graph networks with multi-scale information (CoLMS) is introduced first; it uses a graph convolutional network as the encoder to capture intra-relation information with multi-scale structure-level and feature-level self-supervised signals. The structure-level information includes the edge structure and the sub-graph structure, and the feature-level information corresponds to the outputs of different graph convolutional layers. Second, following the consensus assumption among inter-relations, the CoLM²S framework jointly learns the various graph relations in an attributed multiplex graph network to achieve a global consensus node embedding. The proposed method can fully distil the graph information. Extensive experiments on unsupervised node clustering and graph visualisation tasks demonstrate the effectiveness of our methods, which outperform existing competitive baselines.
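To make the intra-relation component more concrete, the following is a minimal sketch of contrastive learning on a single relation (view) with a GCN encoder, where the outputs of different convolutional layers act as the feature-level signals being contrasted. The two-layer encoder, the InfoNCE-style loss, and all dimensions are illustrative assumptions, not the authors' exact CoLMS formulation.

```python
# Minimal sketch of intra-relation contrastive learning with a GCN encoder.
# The loss contrasts layer-1 and layer-2 node embeddings of one relation;
# this is an assumed simplification of the multi-scale scheme described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized adjacency with self-loops (N x N)
        return F.relu(adj_norm @ self.lin(x))

class Encoder(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hid_dim)
        self.gc2 = GCNLayer(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        h1 = self.gc1(x, adj_norm)   # feature-level signal: layer-1 output
        h2 = self.gc2(h1, adj_norm)  # feature-level signal: layer-2 output
        return h1, h2

def normalize_adj(adj):
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

def info_nce(z1, z2, tau=0.5):
    # Node-level InfoNCE: the same node in the two views is the positive pair.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau
    return F.cross_entropy(sim, torch.arange(z1.size(0)))

# Toy usage: one relation of a multiplex graph with 6 nodes.
x = torch.randn(6, 16)
adj = (torch.rand(6, 6) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()
adj_norm = normalize_adj(adj)

enc = Encoder(16, 32, 32)
h1, h2 = enc(x, adj_norm)
loss = info_nce(h1, h2)
loss.backward()
```

In a full multiplex setting, one such encoder and loss would be instantiated per relation, with an additional inter-relation term pulling the per-relation embeddings toward a shared consensus embedding.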
Funding: National Natural Science Foundation of China (62027812, 62333012).
Abstract: Volumetric imaging is increasingly in demand for its precision in visualizing and statistically analyzing the intricacies of biological phenomena. To visualize the fine details of these minute structures and facilitate analysis in biomedical research, high-signal-to-noise-ratio (SNR) images are indispensable. However, the inevitable noise presents a significant barrier to imaging quality. Here, we propose SelfMirror, a self-supervised deep-learning denoising method for volumetric image reconstruction. SelfMirror is built on the insight that the variation of biological structure is continuous and smooth; when the sampling interval in volumetric imaging is sufficiently small, neighboring slices become highly similar in spatial structure. This similarity can be used to train the proposed network to recover the signal and suppress the noise accurately. The denoising performance of SelfMirror exhibits remarkable robustness and fidelity even under extremely low-SNR conditions. We demonstrate the broad applicability of SelfMirror on multiple imaging modalities, including two-photon microscopy, confocal microscopy, expansion microscopy, computed tomography, and 3D electron microscopy. This versatility extends from single neuron cells to tissues and organs, highlighting SelfMirror's potential for integration into diverse imaging and analysis pipelines.
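The core idea, that a structurally similar neighboring slice can serve as the training target for a noisy slice, can be sketched as below. The tiny CNN, the slice-pairing scheme, and the MSE objective are illustrative assumptions, not the released SelfMirror implementation.

```python
# Minimal sketch of neighbour-slice self-supervision: pair slice z (input)
# with slice z+1 (target). If the two slices share the underlying structure
# but carry independent noise, regression toward the neighbour suppresses noise.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, volume):
    # volume: (D, H, W) noisy stack acquired with a small axial sampling interval.
    inputs = volume[:-1].unsqueeze(1)   # slices 0..D-2, shape (D-1, 1, H, W)
    targets = volume[1:].unsqueeze(1)   # slices 1..D-1, shape (D-1, 1, H, W)
    pred = model(inputs)
    loss = nn.functional.mse_loss(pred, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a random "volume"; in practice this would be a finely
# sampled image stack whose neighbouring slices are structurally similar.
vol = torch.randn(8, 64, 64)
model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
print(train_step(model, opt, vol))
```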
Abstract: Existing short video recommendation methods extract incomplete representations of users' short-term interests and incomplete short-term interest proxies, which leads to insufficient disentanglement of long-term and short-term interests. A short video recommendation model based on self-supervised short-term interest feature enhancement (SSER) is proposed. The model uses self-supervised contrastive learning to disentangle users' long-term and short-term interests. To address the incomplete extraction of short-term interest representations, a dilated RNN is adopted to effectively capture short-term interest representations from nonlinear user interaction sequences. To address the incomplete extraction of short-term interest proxies, a multi-head self-attention based enhancement of the short-term interest proxy is proposed: self-attention is first applied to denoise the embeddings of the short-term interaction sequence, the general and salient short-term interest features extracted from the user sequence are then fused into a single vector, and multi-head self-attention extracts the short-term interest proxy from this fused vector, thereby effectively enhancing proxy extraction. Experiments on the KuaiRec short video dataset show that the model outperforms other mainstream methods on multiple evaluation metrics.
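The two short-term components can be sketched as follows: a dilated RNN over the embedded interaction sequence for the short-term interest representation, and multi-head self-attention for the short-term interest proxy. The dilation schedule, the pooling choices, and all dimensions are illustrative assumptions rather than the SSER authors' exact design, and the denoising/fusion steps are collapsed into a single attention pass for brevity.

```python
# Minimal sketch of the SSER-style short-term components (assumed simplification).
import torch
import torch.nn as nn

class DilatedGRU(nn.Module):
    """Stack of GRUs where layer l only sees every 2**l-th step of its input."""
    def __init__(self, dim, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList([nn.GRU(dim, dim, batch_first=True)
                                     for _ in range(num_layers)])

    def forward(self, x):                    # x: (B, T, dim)
        for l, gru in enumerate(self.layers):
            dilation = 2 ** l
            x, _ = gru(x[:, ::dilation])     # subsample by the dilation rate
        return x[:, -1]                      # last state as short-term interest

class ProxyExtractor(nn.Module):
    """Multi-head self-attention over the short-term sequence, pooled to a proxy."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, seq):                  # seq: (B, T, dim)
        out, _ = self.attn(seq, seq, seq)
        return out.mean(dim=1)               # pooled short-term interest proxy

# Toy usage on a batch of embedded interaction sequences.
B, T, D = 4, 16, 32
seq = torch.randn(B, T, D)
short_term = DilatedGRU(D)(seq)              # (B, D) short-term representation
proxy = ProxyExtractor(D)(seq)               # (B, D) short-term interest proxy
print(short_term.shape, proxy.shape)
```

In the full model, a contrastive objective would pull the short-term representation toward its proxy (and the long-term representation toward a long-term proxy) to disentangle the two interest signals.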