Abstract: Reliable traffic flow prediction is crucial for mitigating urban congestion. This paper proposes the Attention-based Spatiotemporal Interactive Dynamic Graph Convolutional Network (AIDGCN), a novel architecture integrating an Interactive Dynamic Graph Convolutional Network (IDGCN) with Temporal Multi-Head Trend-Aware Attention. Its core innovations are the IDGCN, which splits sequences into symmetric intervals for interactive feature sharing via dynamic graphs, and a novel attention mechanism incorporating convolutional operations to capture essential local traffic trends, addressing a critical gap in standard attention for continuous data. For 15- and 60-min forecasting on METR-LA, AIDGCN achieves MAEs of 0.75% and 0.39%, and RMSEs of 1.32% and 0.14%, respectively. In 60-min long-term forecasting on the PEMS-BAY dataset, AIDGCN outperforms the MRA-BGCN method by 6.28%, 4.93%, and 7.17% in terms of MAE, RMSE, and MAPE, respectively. Experimental results demonstrate the superiority of the proposed model over state-of-the-art methods.
Funding: National Natural Science Foundation of China (Nos. 61806051 and 61903078); Fundamental Research Funds for the Central Universities, China (Nos. 2232021A-10 and 2232021D-32); Natural Science Foundation of Shanghai, China (No. 20ZR1400400).
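The trend-aware attention described above replaces pointwise projections with convolutional ones, so that attention scores compare short-term local trends rather than isolated time steps. The paper does not give implementation details, so the following is only a minimal single-feature sketch of that idea; the kernel weights and the 3-tap window are illustrative assumptions, not the authors' configuration.

```python
import math

def conv1d(seq, kernel):
    # "same"-padded local convolution over a 1-D sequence: each output
    # value mixes a short window, so it carries the local trend
    k = len(kernel)
    pad = k // 2
    padded = [seq[0]] * pad + list(seq) + [seq[-1]] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(seq))]

def trend_aware_attention(x, kernel=(0.25, 0.5, 0.25)):
    # queries and keys come from a local convolution instead of a
    # pointwise projection, so scores reflect short-term trends;
    # outputs are softmax-weighted (convex) combinations of x
    q = conv1d(x, kernel)
    k = conv1d(x, kernel)
    out = []
    for i in range(len(x)):
        scores = [q[i] * k[j] for j in range(len(x))]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append(sum(w[j] / z * x[j] for j in range(len(x))))
    return out

x = [1.0, 2.0, 4.0, 3.0, 1.0]   # toy flow readings
y = trend_aware_attention(x)
```

Because each output is a convex combination of the inputs, the attended sequence stays within the range of the observed values while being smoothed toward locally similar trends.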
Abstract: In real-world complex environments, the recognition accuracy of crop leaf disease is often low. Inspired by the parallel interaction mechanism of the brain, a two-stream parallel interactive convolutional neural network (TSPI-CNN) is proposed to improve recognition accuracy. TSPI-CNN includes a two-stream parallel network (TSP-Net) and a parallel interactive network (PI-Net). TSP-Net simulates the ventral and dorsal streams; PI-Net simulates the interaction between the two pathways during visual information transmission in the human brain. Extensive experiments show that the proposed TSPI-CNN performs well on the MK-D2, PlantVillage, Apple-3 leaf, and Cassava leaf datasets. Furthermore, the effect of the number of interactions on the recognition performance of TSPI-CNN is discussed. The experimental results show that as the number of interactions increases, the recognition accuracy of the network also increases. Finally, the network is visualized to show its working mechanism and to provide insights for future research.
Funding: Supported by the National Natural Science Foundation of China (No. 62476025) and the Shaanxi Provincial Department of Science and Technology Projects (No. 2013K06-39).
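The core idea of PI-Net is that two parallel feature pathways repeatedly exchange information as they process the input. The abstract does not specify how the exchange is computed, so the sketch below is a hypothetical scalar-feature caricature: each "interaction" blends a fraction of each stream's features into the other, and more interaction steps drive the two pathways toward a shared representation. The mixing coefficient `alpha` is an assumption for illustration only.

```python
def interact(ventral, dorsal, steps, alpha=0.3):
    # one "interaction" blends a fraction alpha of the other pathway's
    # features into each stream, mimicking cross-pathway exchange
    v, d = list(ventral), list(dorsal)
    for _ in range(steps):
        v_new = [(1 - alpha) * a + alpha * b for a, b in zip(v, d)]
        d_new = [(1 - alpha) * b + alpha * a for a, b in zip(v, d)]
        v, d = v_new, d_new
    return v, d

# toy pathways with complementary information
v5, d5 = interact([1.0, 0.0], [0.0, 1.0], steps=5)
```

Each step shrinks the gap between the streams by a factor of (1 - 2*alpha) while preserving their per-feature sum, which is one simple way more interactions can yield richer, more consistent shared features.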
Abstract: Automatically extracting Drug-Drug Interactions (DDIs) from text is a crucial and challenging task, particularly when multiple medications are taken concurrently. In this study, we propose a novel approach, called Enhanced Attention-driven Dynamic Graph Convolutional Network (E-ADGCN), for DDI extraction. Our model combines the Attention-driven Dynamic Graph Convolutional Network (ADGCN) with a feature fusion method and a multi-task learning framework. The ADGCN effectively utilizes entity information and dependency tree information from biomedical texts to extract DDIs. The feature fusion method integrates User-Generated Content (UGC) and molecular information with drug entity information from text through dynamic routing. By leveraging external resources, our approach maximizes the auxiliary effect and improves the accuracy of DDI extraction. We evaluate the E-ADGCN model on the extended DDIExtraction2013 dataset and achieve an F1-score of 81.45%. This research contributes to the advancement of automated methods for extracting valuable drug interaction information from textual sources, facilitating improved medication management and patient safety.
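A graph convolution over a dependency tree, as ADGCN uses, propagates each token's features to its syntactic neighbours so that drug entities pick up context from the words they depend on. The following is only a minimal sketch of one such propagation step (mean aggregation with a self-loop); the real model's attention-driven dynamic adjacency and learned weights are omitted.

```python
def gcn_layer(adj, feats):
    # one graph-convolution step over a dependency tree: each node
    # averages its own features with those of its tree neighbours
    # (self-loop included, degree-normalised)
    n = len(feats)
    dim = len(feats[0])
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]] + [i]
        out.append([sum(feats[j][d] for j in nbrs) / len(nbrs)
                    for d in range(dim)])
    return out

# toy 3-token dependency chain: token0 <- token1 -> token2
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0], [0.0], [1.0]]   # e.g. 1.0 marks drug-entity tokens
h = gcn_layer(adj, feats)
```

After one step, the middle (non-entity) token's representation already reflects both neighbouring drug entities, which is the basic mechanism that lets tree-based convolution surface interaction evidence between distant mentions.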
Abstract: Reconstructing interacting 3D meshes of two hands from a single RGB image is a highly challenging task. Mutual occlusion between the hands and high local appearance similarity make some feature extraction inaccurate, losing the interaction information between the hands and causing misalignment between the reconstructed hand meshes and the input image. To address these problems, this paper first proposes a two-part feature interaction-adaptation module: the first part, feature interaction, generates two new feature representations while preserving the separate features of the left and right hands, and captures the interaction features of the two hands through an interaction attention module; the second part, feature adaptation, adapts these interaction features to each hand via the interaction attention module, injecting global context into the left- and right-hand features. Second, a three-layer graph-convolution refinement network is introduced to accurately regress the mesh vertices of both hands, and an attention-based feature alignment module strengthens the alignment between vertex features and image features, thereby improving the alignment between the reconstructed hand meshes and the input image. A new multilayer perceptron structure is also proposed that learns multi-scale feature information through downsampling and upsampling operations. Finally, a relative-offset loss function is designed to constrain the spatial relationship between the two hands. Quantitative and qualitative experiments on the InterHand2.6M dataset show that, compared with existing state-of-the-art methods, the proposed method significantly improves performance, reducing the Mean Per Joint Position Error (MPJPE) and Mean Per Vertex Position Error (MPVPE) to 7.19 mm and 7.33 mm, respectively. In addition, generalization experiments on the RGB2Hands and EgoHands datasets qualitatively show that the proposed method generalizes well and can handle hand mesh reconstruction across different environment backgrounds.
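The MPJPE and MPVPE figures reported above are both mean Euclidean distances, over joints and mesh vertices respectively, between predictions and ground truth. As a quick reference, a minimal sketch of the metric (the same function serves for vertices by passing vertex arrays):

```python
import math

def mpjpe(pred, gt):
    # Mean Per Joint Position Error: average Euclidean distance (in the
    # dataset's units, e.g. mm) between predicted and ground-truth
    # 3-D joint positions, paired index by index
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(dists) / len(dists)

# toy example: two joints, second one off by a 3-4-5 triangle (5 units)
err = mpjpe([[0, 0, 0], [3, 4, 0]],
            [[0, 0, 0], [0, 0, 0]])
```

In practice the metric is usually computed after root-joint alignment (subtracting the wrist position from both skeletons), which this sketch omits for brevity.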