Journal Articles
546,208 articles found
Controllability of Multi-Relational Networks With Heterogeneous Dynamical Nodes
1
Authors: Lifu Wang, Zhaofei Li, Lianqian Cao, Ge Guo, Zhi Kong. IEEE/CAA Journal of Automatica Sinica, CSCD, 2024, Issue 12, pp. 2476-2486 (11 pages)
This paper studies the controllability of networked systems, in which the nodes are heterogeneous high-dimensional dynamical systems and the links between nodes are multi-relational. Our aim is to find controllability criteria for heterogeneous networks with multi-relational links beyond those only applicable to networks with single-relational links. It is found that a network with multi-relational links can be controllable even if each single-relational network topology is uncontrollable, and vice versa. Some sufficient and necessary conditions are derived for the controllability of multi-relational networks with heterogeneous dynamical nodes. For two typical multi-relational networks with star-chain topology and star-circle topology, some easily verified conditions are presented. Several examples are presented for illustration and verification. These findings provide practical insights for the analysis and control of multi-relational complex systems.
Keywords: heterogeneous network, multi-relational network, network controllability, node dynamics
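The controllability criteria in this entry generalize classical tests for single systems. As background only, here is a minimal sketch of the classical Kalman rank condition for one linear system (not the paper's multi-relational criteria); the example matrices are illustrative.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] (Kalman controllability matrix)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """(A, B) is controllable iff the controllability matrix has full row rank n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# A chain of two integrators driven through the second state: controllable.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))                          # → True
print(is_controllable(A, np.array([[1.0], [0.0]])))   # → False (input misses the chain)
```

The multi-relational setting of the paper asks when a *weighted combination* of such single-relation topologies passes an analogous test even though each individual topology fails it.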
A Balanced Traffic Signal Scheduling Algorithm Based on Improved Deep Q Networks
2
Author: He Daokun. Machinery Design & Manufacture, PKU Core, 2025, Issue 4, pp. 135-140 (6 pages)
To further relieve congestion at urban intersections during peak hours and balance traffic flow across the roads meeting at an intersection, a balanced traffic signal scheduling algorithm based on improved Deep Q Networks (DQN) is proposed. The features most relevant to signal scheduling at an intersection are extracted, traffic signal models are built for a one-way intersection and for a linear two-way intersection, and a signal scheduling optimization model is constructed on this basis. To address the convergence and overestimation deficiencies of the DQN algorithm when applied to traffic signal scheduling, dueling-network, double-network, and gradient-update improvements are applied to DQN, and a matching balanced scheduling algorithm is proposed. Simulation comparison with classical DQN verifies the applicability and superiority of the proposed algorithm for traffic signal scheduling. Simulations on urban road data for two scenarios show that the algorithm effectively shortens vehicle queue lengths at intersections, balances traffic throughput across approaches, and relieves congestion in peak travel directions, improving the performance of intersection signal scheduling.
Keywords: traffic signal scheduling, intersection, Deep Q Networks, deep reinforcement learning, intelligent transportation
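The "double-network improvement" mentioned in this abstract is, in the standard formulation, the double-DQN target: the online network selects the next action, the target network evaluates it, which damps overestimation. A toy sketch with fixed Q vectors (the numbers are illustrative):

```python
import numpy as np

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.9, done=False):
    """Double DQN target: online net picks argmax action, target net scores it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # action selection: online net
    return reward + gamma * q_target_next[a_star]  # action evaluation: target net

q_online_next = np.array([1.0, 3.0, 2.0])  # online net prefers action 1
q_target_next = np.array([0.5, 2.0, 4.0])  # target net values action 1 at 2.0
target = double_dqn_target(1.0, q_online_next, q_target_next)
print(target)  # → 1.0 + 0.9 * 2.0 = 2.8
```

A plain DQN target would instead take max(q_target_next) = 4.0 here, illustrating the overestimation the double network avoids.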
Mining Topical Influencers Based on the Multi-Relational Network in Micro-Blogging Sites [Cited by 4]
3
Authors: Ding Zhaoyun, Jia Yan, Zhou Bin, Han Yi. China Communications, SCIE CSCD, 2013, Issue 1, pp. 93-104 (12 pages)
In micro-blogging contexts such as Twitter, the number of content producers can easily reach tens of thousands, and many users can participate in discussion of any given topic. While many users introduce diversity, not all users are equally influential, which makes it challenging to identify the true influencers, who are generally rated as being interesting and authoritative on a given topic. In this study, the influence of users is measured by performing random walks over the multi-relational data in micro-blogging: retweet, reply, reintroduce, and read. Due to the uncertainty of the reintroduce and read operations, a new method is proposed to determine the transition probabilities of uncertain relational networks. Moreover, we propose a method for performing combined random walks over the multi-relational influence network, considering the transition probabilities of both intra- and inter-network moves. Experiments were conducted on a real Twitter dataset containing about 260,000 users and 2.7 million tweets, and the results show that our method is more effective than TwitterRank and other methods used to discover influencers.
Keywords: social network, topical influence, PageRank, multi-relational network, influencers, micro-blogging
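The combined random walk over several relation networks can be sketched as a PageRank-style power iteration over a weighted mixture of per-relation transition matrices. The relation weights and toy matrices below are illustrative assumptions, not the paper's learned inter-network probabilities:

```python
import numpy as np

def combined_walk(relations, weights, damping=0.85, iters=200):
    """Stationary distribution of a walk whose transition matrix mixes
    per-relation row-stochastic matrices with fixed weights."""
    P = sum(w * M for w, M in zip(weights, relations))
    P = P / P.sum(axis=1, keepdims=True)   # re-normalize rows
    n = P.shape[0]
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * rank @ P
    return rank

# Toy 3-user networks: who retweets / replies to whom (row-stochastic).
retweet = np.array([[0, 1, 0], [0.5, 0, 0.5], [1, 0, 0]], float)
reply   = np.array([[0, 0.5, 0.5], [1, 0, 0], [0, 1, 0]], float)
scores = combined_walk([retweet, reply], weights=[0.7, 0.3])
print(scores)  # probability vector: higher score = more influential user
```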
Application of virtual reality technology improves the functionality of brain networks in individuals experiencing pain [Cited by 3]
4
Author: Takahiko Nagamine. World Journal of Clinical Cases, SCIE, 2025, Issue 3, pp. 66-68 (3 pages)
Medical procedures are inherently invasive and carry the risk of inducing pain to the mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks. The salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
Keywords: virtual reality, pain, anxiety, salience network, default mode network
Robustness Optimization Algorithm with Multi-Granularity Integration for Scale-Free Networks Against Malicious Attacks [Cited by 1]
5
Authors: ZHANG Yiheng, LI Jinhai. Journal of Kunming University of Science and Technology (Natural Science Edition), PKU Core, 2025, Issue 1, pp. 54-71 (18 pages)
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at various granular layers in this structure, and finally realizes the information exchange between different granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%. This improvement is approximately 17.2% higher than the optimization effects achieved by eight currently existing complex network robustness optimization algorithms.
Keywords: complex network model, multi-granularity, scale-free networks, robustness, algorithm integration
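The constraint in this abstract (rewire a network while keeping every node's degree fixed, with no self-loops or multi-edges) is commonly met with the double edge swap primitive: (a,b),(c,d) → (a,d),(c,b). A minimal sketch, using a directed-tuple edge representation for simplicity; this is the standard building block, not the paper's MGIA operators:

```python
import random

def double_edge_swap(edges, tries=100, seed=0):
    """Rewire (a,b),(c,d) -> (a,d),(c,b); every node keeps its degree.
    Swaps creating self-loops or duplicate edges are rejected."""
    rng = random.Random(seed)
    edges = set(edges)  # work on a copy
    for _ in range(tries):
        (a, b), (c, d) = rng.sample(sorted(edges), 2)
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        if (a, d) in edges or (c, b) in edges:
            continue  # would create a multi-edge
        edges -= {(a, b), (c, d)}
        edges |= {(a, d), (c, b)}
    return edges

def degrees(es):
    d = {}
    for u, v in es:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

g = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)}
print(degrees(double_edge_swap(g)) == degrees(g))  # → True: degrees preserved
```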
Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network [Cited by 1]
6
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua, SCIE EI, 2025, Issue 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network, edge computing, task scheduling, computing offloading
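The dueling head used in the Dueling-DDQN above combines a state value V and per-action advantages A as Q = V + A − mean(A); subtracting the mean makes the V/A split identifiable. A toy forward pass (the numbers are illustrative, not from the paper):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    advantages = np.asarray(advantages, float)
    return value + advantages - advantages.mean()

q = dueling_q(value=2.0, advantages=[1.0, -1.0, 0.0])
print(q)         # → [3. 1. 2.]
print(q.mean())  # → 2.0  (the mean of Q recovers the state value V)
```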
Deep Neural Networks Combining Multi-Task Learning for Solving Delay Integro-Differential Equations [Cited by 1]
7
Authors: WANG Chen-yao, SHI Feng. Journal of Mathematics, 2025, Issue 1, pp. 13-38 (26 pages)
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is implemented to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: delay integro-differential equation, multi-task learning, parameter sharing structure, deep neural network, sequential training scheme
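For a constant delay τ, the delay-induced breaking points sit at t0 + kτ, and the "divide into tasks based on the delay" step amounts to splitting the time domain at those points. A minimal sketch of that bookkeeping (the interval values are an assumed example, not the paper's test problems):

```python
def delay_breakpoints(t0, t_end, tau):
    """Sub-intervals between consecutive breaking points t0 + k*tau of a
    constant-delay DDE/DIDE; each sub-interval becomes one training task."""
    points, t = [], t0
    while t < t_end:
        points.append(t)
        t += tau
    points.append(t_end)
    return list(zip(points[:-1], points[1:]))

tasks = delay_breakpoints(0.0, 2.0, tau=0.5)
print(tasks)  # → [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5), (1.5, 2.0)]
```

Training tasks sequentially in this order lets each solved interval provide the (smoother) history needed by the next one.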
Enhanced electrode-level diagnostics for lithium-ion battery degradation using physics-informed neural networks [Cited by 1]
8
Authors: Rui Xiong, Yinghao He, Yue Sun, Yanbo Jia, Weixiang Shen. Journal of Energy Chemistry, 2025, Issue 5, pp. 618-627 (10 pages)
For the diagnostics and health management of lithium-ion batteries, numerous models have been developed to understand their degradation characteristics. These models typically fall into two categories: data-driven models and physical models, each offering unique advantages but also facing limitations. Physics-informed neural networks (PINNs) provide a robust framework to integrate data-driven models with physical principles, ensuring consistency with underlying physics while enabling generalization across diverse operational conditions. This study introduces a PINN-based approach to reconstruct open circuit voltage (OCV) curves and estimate key ageing parameters at both the cell and electrode levels. These parameters include available capacity, electrode capacities, and lithium inventory capacity. The proposed method integrates OCV reconstruction models as functional components into convolutional neural networks (CNNs) and is validated using a public dataset. The results reveal that the estimated ageing parameters closely align with those obtained through offline OCV tests, with errors in reconstructed OCV curves remaining within 15 mV. This demonstrates the ability of the method to deliver fast and accurate degradation diagnostics at the electrode level, advancing the potential for precise and efficient battery health management.
Keywords: lithium-ion batteries, electrode level, ageing diagnosis, physics-informed neural network, convolutional neural networks
TMC-GCN: Encrypted Traffic Mapping Classification Method Based on Graph Convolutional Networks 被引量:1
9
Authors: Baoquan Liu, Xi Chen, Qingjun Yuan, Degang Li, Chunxiang Gu. Computers, Materials & Continua, 2025, Issue 2, pp. 3179-3201 (23 pages)
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. The classification method of encrypted traffic based on GNN can deal with encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called Flow Mapping Graph (FMG). FMG establishes sequential edges between vertexes by the arrival order of packets and establishes jump-order edges between vertexes by connecting packets in different bursts with the same direction. It not only reflects the time characteristics of the packet but also strengthens the relationship between the client or server packets. According to FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic. The encryption stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models in four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the accuracy rate of the TMC-GCN model is 96.13%, the recall rate is 95.04%, and the F1 rate is 94.54%.
Keywords: encrypted traffic classification, deep learning, graph neural networks, multi-layer perceptron, graph convolutional networks
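The FMG edge construction described above can be sketched directly from a per-packet direction sequence: sequential edges follow arrival order, and jump-order edges link bursts of the same direction. A minimal sketch under the assumption that a burst is a maximal run of same-direction packets (the paper's exact burst definition may differ):

```python
def build_fmg(directions):
    """Edges of a Flow-Mapping-Graph-style topology.
    'c' = client->server packet, 's' = server->client packet."""
    seq = [(i, i + 1) for i in range(len(directions) - 1)]  # arrival order
    # a burst = maximal run of packets with the same direction
    bursts, start = [], 0
    for i in range(1, len(directions) + 1):
        if i == len(directions) or directions[i] != directions[start]:
            bursts.append((start, i - 1, directions[start]))
            start = i
    jump = []
    for k, (_, last, d) in enumerate(bursts):
        for first2, _, d2 in bursts[k + 1:]:
            if d2 == d:  # next burst in the same direction
                jump.append((last, first2))
                break
    return seq, jump

seq, jump = build_fmg(['c', 'c', 's', 'c', 's', 's'])
print(seq)   # → [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print(jump)  # → [(1, 3), (2, 4)]: same-direction bursts linked across gaps
```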
Cooperative Privacy Provisioning for Energy Harvesting Based Cognitive Multi-Relay Networks [Cited by 2]
10
Authors: Dawei Wang, Wei Liang, Xiaoyu Hu, Daosen Zhai, Di Zhang. China Communications, SCIE CSCD, 2020, Issue 2, pp. 125-137 (13 pages)
In order to provide privacy provisioning for the secondary information, we propose an energy harvesting based secure transmission scheme for cognitive multi-relay networks. In the proposed scheme, two secondary relays harvest energy to power the secondary transmitter and assist the secondary secure transmission without interfering with the secondary transmission. Specifically, the proposed secure transmission policy is implemented in two phases. In the first phase, the secondary transmitter transmits the secrecy information and a jamming signal through the power-split method. After harvesting energy from a fraction of the received radio-frequency signals, one secondary relay adopts the amplify-and-forward relay protocol to assist the secondary secure transmission, and the other secondary relay forwards a newly designed jamming signal to protect the secondary privacy information and degrade the jamming interference at the secondary receiver. For the proposed scheme, we first analyze the average secrecy rate, the secondary secrecy outage probability, and the ergodic secrecy rate, and derive their closed-form expressions. Following the above results, we optimally allocate the transmission power such that the secrecy rate is maximized under the secrecy outage probability constraint. For the optimization problem, an AI-based simulated annealing algorithm is proposed to allocate the transmit power. Numerical results are presented to validate the analytical performance results and show the superiority of the proposed scheme in terms of the average secrecy rate.
Keywords: cognitive radio networks, energy harvesting, relay, security
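The secrecy-rate quantities analyzed in this abstract build on the standard physical-layer definition: the legitimate link's capacity minus the eavesdropper's, floored at zero. A minimal sketch with illustrative SNR values (not the paper's closed-form expressions):

```python
import math

def secrecy_rate(snr_legit, snr_eaves):
    """Achievable secrecy rate in bits/s/Hz:
    max(0, log2(1 + SNR_d) - log2(1 + SNR_e))."""
    return max(0.0, math.log2(1 + snr_legit) - math.log2(1 + snr_eaves))

r1 = secrecy_rate(15.0, 3.0)
r2 = secrecy_rate(1.0, 3.0)
print(r1)  # → log2(16) - log2(4) = 2.0
print(r2)  # → 0.0 (eavesdropper's channel is better; nothing is secret)
```

Cooperative jamming, as in the scheme above, works by lowering the effective SNR at the eavesdropper, which raises this difference.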
DIGNN-A: Real-Time Network Intrusion Detection with Integrated Neural Networks Based on Dynamic Graph
11
Authors: Jizhao Liu, Minghao Guo. Computers, Materials & Continua, SCIE EI, 2025, Issue 1, pp. 817-842 (26 pages)
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address dynamic graphs' structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and results are compared with previous studies in this field. The experimental results demonstrate that our proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
Keywords: intrusion detection, graph neural networks, attention mechanisms, line graphs, dynamic graph neural networks
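The line-graph trick used above (turning edge classification into node classification) has a compact definition: each edge of the original graph becomes a node, and two such nodes are adjacent when the edges share an endpoint. A minimal stdlib-only sketch on a toy graph:

```python
from itertools import combinations

def line_graph(edges):
    """Line graph of an undirected graph given as a list of (u, v) edges.
    Returns the new nodes (one per original edge) and adjacency as pairs
    of edge indices that share an endpoint."""
    nodes = list(edges)
    adj = [tuple(sorted((i, j)))
           for (i, e1), (j, e2) in combinations(enumerate(nodes), 2)
           if set(e1) & set(e2)]
    return nodes, adj

nodes, adj = line_graph([(0, 1), (1, 2), (2, 0), (2, 3)])
print(len(nodes))  # → 4: one line-graph node per original edge (flow)
print(adj)         # → [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```

Per-edge features (here, per-flow features) then attach naturally to the line graph's nodes, where ordinary node-classification GNN layers apply.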
Secure Channel Estimation Using Norm Estimation Model for 5G Next Generation Wireless Networks
12
Authors: Khalil Ullah, Song Jian, Muhammad Naeem Ul Hassan, Suliman Khan, Mohammad Babar, Arshad Ahmad, Shafiq Ahmad. Computers, Materials & Continua, SCIE EI, 2025, Issue 1, pp. 1151-1169 (19 pages)
The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple input multiple output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with NextG. This article proposes a secure channel estimation technique in MIMO topology using a norm-estimation model to provide comprehensive insights into protecting NextG network components against adversarial attacks. The technique aims to create long-lasting and secure NextG networks using this extended approach. The viability of MIMO applications and modern AI-driven methodologies to combat cybersecurity threats are explored in this research. Moreover, the proposed model demonstrates high performance in terms of reliability and accuracy, with a 20% reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques.
Keywords: next generation networks, massive MIMO, communication network, artificial intelligence, 5G, adversarial attacks, channel estimation, information security
DMGNN:A Dual Multi-Relational GNN Model for Enhanced Recommendation
13
Authors: Siyue Li, Tian Jin, Erfan Wang, Ranting Tao, Jiaxin Lu, Kai Xi. Computers, Materials & Continua, 2025, Issue 8, pp. 2331-2353 (23 pages)
In the era of exponential growth of digital information, recommender algorithms are vital for helping users navigate vast data to find relevant items. Traditional approaches such as collaborative filtering and content-based methods have limitations in capturing complex, multi-faceted relationships in large-scale, sparse datasets. Recent advances in Graph Neural Networks (GNNs) have significantly improved recommendation performance by modeling high-order connection patterns within user-item interaction networks. However, existing GNN-based models like LightGCN and NGCF focus primarily on single-type interactions and often overlook diverse semantic relationships, leading to reduced recommendation diversity and limited generalization. To address these challenges, this paper proposes a dual multi-relational graph neural network recommendation algorithm based on relational interactions. Our approach constructs two complementary graph structures: a User-Item Interaction Graph (UIIG), which explicitly models direct user behaviors such as clicks and purchases, and a Relational Association Graph (RAG), which uncovers latent associations based on user similarities and item attributes. The proposed Dual Multi-relational Graph Neural Network (DMGNN) features two parallel branches that perform multi-layer graph convolutional operations, followed by an adaptive fusion mechanism to effectively integrate information from both graphs. This design enhances the model's capacity to capture diverse relationship types and complex relational patterns. Extensive experiments conducted on benchmark datasets (MovieLens-1M, Amazon-Electronics, and Yelp) demonstrate that DMGNN outperforms state-of-the-art baselines, achieving improvements of up to 12.3% in Precision, 9.7% in Recall, and 11.5% in F1 score. Moreover, DMGNN significantly boosts recommendation diversity by 15.2%, balancing accuracy with exploration. These results highlight the effectiveness of leveraging hierarchical multi-relational information, offering a promising solution to the challenges of data sparsity and relation heterogeneity in recommendation systems. Our work advances the theoretical understanding of multi-relational graph modeling and presents practical insights for developing more personalized, diverse, and robust recommender systems.
Keywords: recommendation algorithm, graph neural network, multi-relational graph, relational interaction
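One common way to realize the "adaptive fusion" of two branch embeddings is a learned sigmoid gate that blends them per node. A toy numpy sketch; the gate parameters and embeddings below are illustrative placeholders, not DMGNN's learned weights:

```python
import numpy as np

def gated_fusion(h_uiig, h_rag, w, b=0.0):
    """Gate alpha = sigmoid([h_uiig; h_rag] @ w + b) blends the two branch
    embeddings: alpha -> 1 keeps only the interaction branch."""
    z = np.concatenate([h_uiig, h_rag], axis=-1) @ w + b
    alpha = 1.0 / (1.0 + np.exp(-z))  # gate in (0, 1), one value per node
    return alpha[..., None] * h_uiig + (1 - alpha[..., None]) * h_rag

h1 = np.array([[1.0, 0.0], [0.0, 1.0]])  # user-item interaction branch
h2 = np.array([[0.0, 1.0], [1.0, 0.0]])  # relational association branch
w = np.zeros(4)                           # zero gate weights -> alpha = 0.5
fused = gated_fusion(h1, h2, w)
print(fused)  # → [[0.5 0.5] [0.5 0.5]]: an even blend of both branches
```

In training, w and b would be learned, letting the model weight the behavioral graph more for active users and the association graph more for cold-start ones.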
Distributed algorithms for aggregative games with multiple uncertain Euler–Lagrange systems over switching networks [Cited by 1]
14
Authors: Zhaocong Liu, Jie Huang. Journal of Automation and Intelligence, 2025, Issue 1, pp. 2-9 (8 pages)
In this paper, we investigate the distributed Nash equilibrium (NE) seeking problem for aggregative games with multiple uncertain Euler–Lagrange (EL) systems over jointly connected and weight-balanced switching networks. The designed distributed controller consists of two parts: a dynamic average consensus part that asymptotically reproduces the unknown NE, and an adaptive reference-tracking module responsible for steering the EL systems' positions to track a desired trajectory. The generalized Barbalat's Lemma is used to overcome the discontinuity of the closed-loop system caused by the switching networks. The proposed algorithm is illustrated by a sensor network deployment problem.
Keywords: aggregative games, Euler-Lagrange systems, jointly connected networks, adaptive control
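The consensus building block behind the controller above can be illustrated in its simplest static form: agents repeatedly average their estimates with doubly-stochastic mixing matrices that switch every round, and jointly connected switching still drives everyone to the global average. The two toy topologies below are illustrative assumptions:

```python
import numpy as np

def consensus(values, mixing_matrices, rounds=60):
    """Average-consensus sketch with switching doubly-stochastic matrices;
    the matrices alternate each round, mimicking a switching network."""
    x = np.asarray(values, float)
    for k in range(rounds):
        x = mixing_matrices[k % len(mixing_matrices)] @ x
    return x

# Two switching topologies on 3 agents; neither is connected alone,
# but their union is (jointly connected).
W1 = np.array([[0.5, 0.5, 0.0], [0.5, 0.5, 0.0], [0.0, 0.0, 1.0]])
W2 = np.array([[1.0, 0.0, 0.0], [0.0, 0.5, 0.5], [0.0, 0.5, 0.5]])
est = consensus([3.0, 6.0, 9.0], [W1, W2])
print(est)  # each agent's estimate approaches the global average 6.0
```

The paper's *dynamic* average consensus tracks a time-varying aggregate rather than a fixed one, but the mixing mechanism over switching graphs is the same.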
Enhancing reliability in photonuclear cross-section fitting with Bayesian neural networks [Cited by 1]
15
Authors: Qian-Kun Sun, Yue Zhang, Zi-Rui Hao, Hong-Wei Wang, Gong-Tao Fan, Hang-Hua Xu, Long-Xiang Liu, Sheng Jin, Yu-Xuan Yang, Kai-Jie Chen, Zhen-Wei Wang. Nuclear Science and Techniques, 2025, Issue 3, pp. 146-156 (11 pages)
This study investigates photonuclear reaction (γ,n) cross-sections using Bayesian neural network (BNN) analysis. After determining the optimal network architecture, which features two hidden layers, each with 50 hidden nodes, training was conducted for 30,000 iterations to ensure comprehensive data capture. By analyzing the distribution of absolute errors, positively correlated with the cross-section for the isotope 159Tb, as well as the relative errors, unrelated to the cross-section, we confirmed that the network effectively captured the data features without overfitting. Comparison with the TENDL-2021 database demonstrated the BNN's reliability in fitting photonuclear cross-sections with lower average errors. The predictions for nuclei with single and double giant dipole resonance peak cross-sections, the accurate determination of the photoneutron reaction threshold in the low-energy region, and the precise description of trends in the high-energy cross-sections further demonstrate the network's generalization ability on the validation set. This can be attributed to the consistency of the training data. By using consistent training sets from different laboratories, Bayesian neural networks can predict nearby unknown cross-sections based on existing laboratory data, thereby estimating the potential differences between other laboratories' existing data and their own measurement results. Experimental measurements of photonuclear reactions on the newly constructed SLEGS beamline will contribute to clarifying the differences in cross-sections within the existing data.
Keywords: photoneutron reaction, Bayesian neural network, machine learning, gamma source, SLEGS
Perturbation response scanning of drug-target networks: Drug repurposing for multiple sclerosis [Cited by 1]
16
Authors: Yitan Lu, Ziyun Zhou, Qi Li, Bin Yang, Xing Xu, Yu Zhu, Mengjun Xie, Yuwan Qi, Fei Xiao, Wenying Yan, Zhongjie Liang, Qifei Cong, Guang Hu. Journal of Pharmaceutical Analysis, 2025, Issue 6, pp. 1277-1290 (14 pages)
Combined with the elastic network model (ENM), perturbation response scanning (PRS) has emerged as a robust technique for pinpointing allosteric interactions within proteins. Here, we proposed the PRS analysis of drug-target networks (DTNs), which could provide a promising avenue in network medicine. We demonstrated the utility of the method by introducing a deep learning and network perturbation-based framework for drug repurposing of multiple sclerosis (MS). First, the MS comorbidity network was constructed by performing a random walk with restart algorithm, using shared genes between MS and other diseases as seed nodes. Then, based on topological analysis and functional annotation, the neurotransmission module was identified as the "therapeutic module" of MS. Further, perturbation scores of drugs on the module were calculated by constructing the DTN and introducing the PRS analysis, giving a list of repurposable drugs for MS. Mechanism-of-action analysis at both the pathway and structural levels screened dihydroergocristine as a candidate drug for MS by targeting the serotonin 2B receptor (HTR2B). Finally, we established a cuprizone-induced chronic mouse model to evaluate the alteration of HTR2B in mouse brain regions and observed that HTR2B was significantly reduced in the cuprizone-induced mouse cortex. These findings proved that network perturbation modeling is a promising avenue for drug repurposing of MS. As a useful systematic method, our approach can also be used to discover new molecular mechanisms and provide effective candidate drugs for other complex diseases.
Keywords: network perturbations, mechanism of action, multiple sclerosis, HTR2B
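In its ENM form, PRS applies a unit force at each node and reads the response of every other node through the pseudo-inverse of the network Laplacian. A toy network sketch of that scan (squared, row-normalized responses; a simplification of the full 3N-dimensional protein version and of the paper's DTN variant):

```python
import numpy as np

def prs_map(adjacency):
    """Perturbation response scanning on a toy network: response of node j
    to a unit perturbation at node i, via the Laplacian pseudo-inverse."""
    A = np.asarray(adjacency, float)
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian (Kirchhoff matrix)
    G = np.linalg.pinv(L)                    # ENM covariance up to a constant
    R = G ** 2                               # squared displacement response
    return R / R.sum(axis=1, keepdims=True)  # row-normalized response profiles

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
P = prs_map(A)
print(P.shape)                   # → (4, 4): perturbed node × responding node
print(np.allclose(P.sum(1), 1))  # rows are normalized response profiles
```

Rows with strongly non-uniform profiles mark nodes whose perturbation propagates selectively, the signature PRS uses to rank allosteric (here, drug-perturbation) effects.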
Two-Phase Software Fault Localization Based on Relational Graph Convolutional Neural Networks [Cited by 1]
17
Authors: Xin Fan, Zhenlei Fu, Jian Shu, Zuxiong Shen, Yun Ge. Computers, Materials & Continua, 2025, Issue 2, pp. 2583-2607 (25 pages)
Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy. However, some studies show that inter-class method call type faults account for more than 20%, which means such methods still have certain limitations. To solve the above problems, this paper proposes a two-phase software fault localization based on relational graph convolutional neural networks (Two-RGCNFL). Firstly, in Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in MCDG are extracted by using the relational graph convolutional neural network, and the classifier is used to identify the faulty methods. Then, the GraphSMOTE algorithm is improved to alleviate the impact of class imbalance on classification accuracy. Aiming at the problem of parallel ranking of element suspicious values in traditional SBFL technology, in Phase 2, Doc2Vec is used to learn static features, while spectrum information serves as dynamic features. A RankNet model based on a siamese multi-layer perceptron is constructed to score and rank statements in the faulty method. This work conducts experiments on 5 real projects of the Defects4J benchmark. Experimental results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves the Top-1 accuracy by 262.86%, 29.59% and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, this work verifies the importance of inter-class dependencies through ablation experiments.
Keywords: software fault localization; graph neural network; RankNet; inter-class dependency; class imbalance
Read Online | Download PDF
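The parallel-ranking weakness this abstract attributes to traditional SBFL is easy to see with a classic suspiciousness metric. The sketch below uses the Ochiai formula — a standard SBFL measure, though not necessarily the baseline evaluated in the paper — on an invented three-statement coverage spectrum.

```python
import math

def ochiai(e_f, e_p, total_failed):
    """Ochiai suspiciousness: e_f / sqrt(total_failed * (e_f + e_p)).

    e_f / e_p: number of failing / passing tests that cover the element.
    """
    denom = math.sqrt(total_failed * (e_f + e_p))
    return e_f / denom if denom else 0.0

# Toy spectrum: statement -> (covered-by-failing, covered-by-passing) counts.
spectrum = {
    "s1": (2, 0),  # covered only by the two failing tests
    "s2": (2, 3),  # covered by every test
    "s3": (0, 3),  # covered only by passing tests
}
total_failed = 2

ranked = sorted(spectrum, key=lambda s: ochiai(*spectrum[s], total_failed),
                reverse=True)
print(ranked)  # ['s1', 's2', 's3']
```

Statements with identical spectra receive identical scores and are ranked in parallel, which is exactly the tie-breaking problem a learned statement-level ranking (as in Phase 2 of Two-RGCNFL) is meant to resolve.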
Personalized Generative AI Services Through Federated Learning in 6G Edge Networks (Cited by 1)
18
Authors: Li Zeshen, Chen Zihan, Hu Xinyi, Howard H. Yang. China Communications, 2025, No. 7, pp. 1-13 (13 pages)
Network architectures assisted by Generative Artificial Intelligence (GAI) are envisioned as foundational elements of the sixth-generation (6G) communication system. To deliver ubiquitous intelligent services and meet diverse service requirements, the 6G network architecture should offer personalized services to various mobile devices. Federated learning (FL) with personalized local training, as a privacy-preserving machine learning (ML) approach, can be applied to address these challenges. In this paper, we propose a meta-learning-based personalized FL (PFL) method that improves both communication and computation efficiency by utilizing over-the-air computation. Its “pretraining-and-fine-tuning” principle makes it particularly suitable for enabling edge nodes to access personalized GAI services while preserving local privacy. Experimental results demonstrate the effectiveness of the proposed algorithm and notably indicate enhanced communication efficiency without compromising accuracy.
Keywords: generative artificial intelligence; personalized federated learning; 6G networks
Read Online | Download PDF
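The “pretraining-and-fine-tuning” principle in this abstract can be sketched as a FedAvg-style pre-training loop followed by per-client fine-tuning. This is an illustrative toy on scalar linear models — the slopes, step sizes, and round counts are invented, and the paper's meta-learning objective and over-the-air aggregation are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three clients with heterogeneous data: each fits y = w * x for a different slope.
true_slopes = [1.0, 2.0, 3.0]
xs = [rng.normal(size=20) for _ in true_slopes]
data = [(x, s * x) for x, s in zip(xs, true_slopes)]

def local_grad(w, x, y):
    # Gradient of the mean-squared error of the model y = w * x.
    return 2.0 * np.mean((w * x - y) * x)

# Pre-training: FedAvg-style rounds yield one shared initialization.
w_global = 0.0
for _ in range(50):
    local_models = []
    for x, y in data:
        w = w_global
        for _ in range(5):            # a few local SGD steps per round
            w -= 0.1 * local_grad(w, x, y)
        local_models.append(w)
    w_global = float(np.mean(local_models))

# Fine-tuning: each client personalizes the shared model on its own data.
personalized = []
for x, y in data:
    w = w_global
    for _ in range(50):
        w -= 0.1 * local_grad(w, x, y)
    personalized.append(w)

print(w_global, personalized)
```

The shared model lands near the average of the client optima, while fine-tuning recovers each client's own slope — the kind of personalization the abstract targets.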
Nonlinear Interference-Aware Routing, Wavelength and Power Allocation in C+L+S Multi-Band Optical Networks (Cited by 1)
19
Authors: Zhang Xu, Xie Wang, Feng Chuan, Zeng Hankun, Zhou Shanshan, Zhang Fan, Gong Xiaoxue. China Communications, 2025, No. 4, pp. 129-142 (14 pages)
Multi-band optical networks are a promising technology for increasing network capacity. However, the strong interference and non-uniformity between wavelengths in multi-band optical networks have become a bottleneck restricting their transmission capacity. To overcome these challenges, it is particularly important to optimize optical power with respect to wavelength differences. Therefore, based on the generalized Gaussian noise model, we first formulate an optimization model for the problems of routing, modulation format, wavelength, and power allocation in C+L+S multi-band optical networks. The objective is to maximize the average link capacity of the network while ensuring that the Optical Signal-to-Noise Ratio (OSNR) requirement of each service request is satisfied. Next, we propose a Nonlinear Interference-aware (NLI-aware) routing, modulation format, wavelength, and power allocation algorithm. Finally, we conduct simulations under different test conditions. The simulation results indicate that our algorithm can effectively reduce the blocking probability by 23.5% and improve the average link capacity by 3.78% in C+L+S multi-band optical networks.
Keywords: multiband optical communications; multiband optical networks; power allocation; wavelength assignment
Read Online | Download PDF
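Of the joint problem in this abstract, the wavelength-assignment step under the wavelength-continuity constraint is the simplest to illustrate. The sketch below is a plain first-fit heuristic on an invented four-node path — it is not the paper's NLI-aware algorithm, and it ignores power, modulation format, and nonlinear interference.

```python
# Per-link sets of already-occupied wavelength indices (invented topology).
occupied = {
    ("A", "B"): {0, 1},
    ("B", "C"): {1, 2},
    ("C", "D"): set(),
}

def first_fit(route, occupied, num_wavelengths=8):
    """Return the lowest wavelength index free on every link of the route and
    reserve it, or None if the request is blocked (wavelength continuity)."""
    links = list(zip(route, route[1:]))
    for w in range(num_wavelengths):
        if all(w not in occupied[link] for link in links):
            for link in links:       # reserve on the whole path
                occupied[link].add(w)
            return w
    return None

print(first_fit(["A", "B", "C", "D"], occupied))  # 3: wavelengths 0-2 are taken somewhere
```

Blocking probability — the metric the paper reduces by 23.5% — is simply the fraction of requests for which such a search returns None.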
Application Research of Wireless Sensor Networks and the Internet of Things (Cited by 1)
20
Authors: Changjian Lv, Rui Wang, Man Zhao. Journal of Electronic Research and Application, 2025, No. 4, pp. 283-289 (7 pages)
In the context of the rapid iteration of information technology, the Internet of Things (IoT) has established itself as a pivotal hub connecting the digital and physical worlds. Wireless Sensor Networks (WSNs), deeply embedded in the perception-layer architecture of the IoT, play a crucial role as its “tactile nerve endings.” A vast number of micro sensor nodes are distributed across monitoring areas according to preset deployment strategies, continuously and accurately perceiving and collecting real-time data on environmental parameters such as temperature, humidity, light intensity, air pressure, and pollutant concentration. These data are transmitted to the IoT cloud platform through stable and reliable communication links, forming a massive and detailed pool of basic data resources. Cutting-edge big-data processing algorithms, machine learning models, and artificial intelligence analysis tools then perform in-depth mining and intelligent analysis of these multi-source heterogeneous data to generate high-value decision-making input. This precisely empowers multiple fields, including agriculture, health care, smart homes, environmental science, and industrial manufacturing, driving intelligent transformation and moving society toward a new stage of high-quality development. This paper analyzes the technical cores of the IoT and WSNs, systematically reviews the key technologies of WSNs and the evolution of their strategic significance in the IoT system, explores the innovative application scenarios and practical effects of the two in specific vertical fields, and looks ahead to technological evolution trends. It provides a detailed and practical reference for researchers, technical engineers, and industrial decision-makers.
Keywords: wireless sensor networks; Internet of Things; key technologies; application fields
Read Online | Download PDF