Funding: supported by the Natural Science Foundation of Shandong Province (Grant No. ZR2023MF053) and the National Natural Science Foundation of China (Grant No. 61902430).
Abstract: Drug-drug interaction (DDI) refers to the interaction between two or more drugs in the body, altering their efficacy or pharmacokinetics. Fully considering and accurately predicting DDI has become an indispensable part of ensuring safe medication for patients. In recent years, many deep learning-based methods have been proposed to predict DDI. However, most existing computational models tend to oversimplify the fusion of drug structural and topological information, often relying on simple concatenation or weighted summation, which fails to adequately capture the potential complementarity between structural and topological features. This loss of information may prevent models from fully leveraging these features, thus limiting their performance in DDI prediction. To address these challenges, we propose a relation-aware cross-adversarial network for predicting DDI, named RCAN-DDI, which combines a relation-aware structural feature learning module with a topological feature learning module based on DDI networks to capture multimodal features of drugs. To explore the correlations and complementarities among different information sources, a cross-adversarial network is introduced to fully integrate features from the various modalities, enhancing the predictive performance of the model. The experimental results demonstrate that RCAN-DDI outperforms other methods. Even when labelled DDI data are scarce, the method exhibits good robustness in the DDI prediction task. Furthermore, the effectiveness of the cross-adversarial module is validated through ablation experiments, demonstrating its superiority in learning multimodal complementary information.
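The abstract does not give implementation details for the cross-adversarial fusion. As a rough illustration only, one common form of adversarial feature alignment trains a discriminator to tell which modality an embedding came from, while the feature encoders are trained to fool it, pushing the structural and topological feature distributions together. The sketch below is a minimal, hypothetical version of that idea in NumPy; all shapes, names, and the linear discriminator are assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy drug embeddings from two modalities (hypothetical sizes):
struct_feats = rng.normal(size=(8, 16))  # structure-based features
topo_feats = rng.normal(size=(8, 16))    # DDI-network topological features

# A linear discriminator tries to tell which modality a vector came from.
w = rng.normal(size=16) * 0.1

def discriminator_loss(w, struct_feats, topo_feats):
    """Binary cross-entropy: label 1 = structural, label 0 = topological."""
    p_s = sigmoid(struct_feats @ w)
    p_t = sigmoid(topo_feats @ w)
    eps = 1e-9
    return -(np.log(p_s + eps).mean() + np.log(1.0 - p_t + eps).mean()) / 2.0

# In adversarial training, the discriminator minimizes this loss while the
# feature encoders are updated to maximize it, aligning the two modalities.
loss = discriminator_loss(w, struct_feats, topo_feats)
print(float(loss))
```

In a full model the encoders and discriminator would be neural networks updated in alternation (or via gradient reversal); the snippet only shows the objective that drives the alignment.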
Funding: partially funded by the National Natural Science Foundation of China (Grant No. 61902193) and in part by the PAPD fund.
Abstract: With the rapid development of the Internet and the widespread use of social multimedia, so many creators publish posts on social multimedia platforms that fake news detection has become a challenging task. Although some works use deep learning methods to capture the visual and textual information of posts, most existing methods cannot explicitly model the binary relations among image regions or text tokens to deeply mine the global relation information within a single modality such as image or text. Moreover, they cannot fully exploit the supplementary cross-modal information, including image and text relations, to supplement and enrich each modality. To address these problems, in this paper we propose an innovative end-to-end Cross-modal Relation-aware Network (CRAN), which jointly models the visual and textual information together with their corresponding relations in a unified framework. (1) To capture the global structural relations within a modality, we design a global relation-aware network that explicitly models the relation-aware semantics of the fragment features in the target modality from a global perspective. (2) To effectively fuse cross-modal information, we propose a cross-modal co-attention network module for multi-modal information fusion, which jointly exploits the intra-modality and inter-modality relationships among image regions and textual words so that the two modalities enrich and reinforce each other. Extensive experiments on two public real-world datasets demonstrate the superior performance of CRAN compared with other state-of-the-art baseline algorithms.
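The co-attention idea described above, where image regions attend over word tokens and vice versa, can be sketched with plain scaled dot-product attention. This is a minimal NumPy illustration under assumed shapes and without the learned projection matrices a real model would use; it is not the paper's actual module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values):
    """Scaled dot-product attention: each query vector attends over
    the fragments of the other modality and pools their features."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

rng = np.random.default_rng(0)
regions = rng.normal(size=(4, 8))  # 4 image-region features (hypothetical)
words = rng.normal(size=(6, 8))    # 6 word-token features (hypothetical)

# Co-attention in both directions: each modality is enriched with
# information attended from the other.
regions_enriched = cross_attend(regions, words)  # image attends to text
words_enriched = cross_attend(words, regions)    # text attends to image

print(regions_enriched.shape, words_enriched.shape)
```

A real co-attention module would add learned query/key/value projections and combine the enriched features with the originals (e.g. residual connections) before classification.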