According to the disease module hypothesis, the cellular components associated with a disease segregate in the same neighborhood of the human interactome, the map of biologically relevant molecular interactions. Yet, given the incompleteness of the interactome and the limited knowledge of disease-associated genes, it is not obvious if the available data have sufficient coverage to map out modules associated with each disease. Here we derive mathematical conditions for the identifiability of disease modules and show that the network-based location of each disease module determines its pathobiological relationship to other diseases. For example, diseases with overlapping network modules show significant coexpression patterns, symptom similarity, and comorbidity, whereas diseases residing in separated network neighborhoods are phenotypically distinct. These tools represent an interactome-based platform to predict molecular commonalities between phenotypically related diseases, even if they do not share primary disease genes.
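As a concrete illustration of "overlapping" versus "separated" network modules, the sketch below computes a network-based separation score between two disease gene sets on an interactome graph, following the common definition s_AB = <d_AB> - (<d_AA> + <d_BB>)/2, where negative values suggest overlapping neighborhoods. This is a minimal reading using networkx, not the authors' code; it assumes each gene set has at least two genes and that all genes lie in one connected component of the interactome.

import networkx as nx

def mean_shortest_dist(G, sources, targets, exclude_self=False):
    # Average, over genes in `sources`, of the distance to the nearest gene in `targets`.
    dists = []
    for s in sources:
        cands = [t for t in targets if not (exclude_self and t == s)]
        dists.append(min(nx.shortest_path_length(G, s, t) for t in cands))
    return sum(dists) / len(dists)

def separation(G, genes_a, genes_b):
    # s_AB = <d_AB> - (<d_AA> + <d_BB>)/2 ; s_AB < 0 hints at overlapping modules.
    d_ab = (mean_shortest_dist(G, genes_a, genes_b)
            + mean_shortest_dist(G, genes_b, genes_a)) / 2
    d_aa = mean_shortest_dist(G, genes_a, genes_a, exclude_self=True)
    d_bb = mean_shortest_dist(G, genes_b, genes_b, exclude_self=True)
    return d_ab - (d_aa + d_bb) / 2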
Medical procedures are inherently invasive and carry the risk of inducing pain to both mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks: activity in the salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could reduce the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy for satellite-terrestrial integrated networks, built on a Software-Defined Network (SDN) framework, which clarifies the execution order of tasks according to their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and thereby reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
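For orientation, the dueling architecture named above splits the Q-function head into a state value V(s) and per-action advantages A(s,a), recombined as Q(s,a) = V(s) + A(s,a) - mean_a A(s,a). The PyTorch sketch below is illustrative only; the paper's state/action spaces, hidden sizes, and the coupling with prioritized replay and DDPG are not reproduced.

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    # Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # action advantages A(s,a)

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)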
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at the various granular layers of this structure, and finally realizes information exchange between the granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%, an improvement approximately 17.2% higher than the optimization effects achieved by eight existing complex network robustness optimization algorithms.
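The constraints the abstract names (fixed node degrees, preserved connectivity, no multiple edges) are exactly those maintained by a classic degree-preserving double-edge swap, sketched below with networkx as a baseline rewiring move; MGIA's actual refresh, crossover, and mutation operators are not reproduced, and n_swaps and seed are illustrative.

import random
import networkx as nx

def degree_preserving_swap(G, n_swaps=100, seed=0):
    # Rewire (u,v),(x,y) -> (u,x),(v,y): node degrees are unchanged; swaps
    # that would create self-loops, multiple edges, or disconnect the
    # (undirected) network are skipped or rolled back.
    rng = random.Random(seed)
    G = G.copy()
    for _ in range(n_swaps):
        (u, v), (x, y) = rng.sample(list(G.edges()), 2)
        if len({u, v, x, y}) < 4 or G.has_edge(u, x) or G.has_edge(v, y):
            continue  # would create a self-loop or a multiple edge
        G.remove_edges_from([(u, v), (x, y)])
        G.add_edges_from([(u, x), (v, y)])
        if not nx.is_connected(G):  # roll back swaps that break connectivity
            G.remove_edges_from([(u, x), (v, y)])
            G.add_edges_from([(u, v), (x, y)])
    return G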
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integrodifferential equations (DIDEs) with constant delays, primarily due to the low regularity of solutions at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of the method through several numerical experiments, test various parameter-sharing structures in MTL, and compare their testing results. Finally, the method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered even with sparse or noisy data.
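A toy sketch of the loss structure implied above: one residual term per delay sub-interval (task) plus penalty terms enforcing the known continuity properties at the breaking points. The weighting scheme and tensor shapes are assumptions for illustration, not the paper's formulation.

import torch

def mtl_loss(residuals, continuity_gaps, weights=None):
    # residuals: list of PDE/DIDE residual tensors, one per delay sub-interval.
    # continuity_gaps: list of tensors measuring mismatch at breaking points.
    terms = [r.pow(2).mean() for r in residuals]
    terms += [g.pow(2).mean() for g in continuity_gaps]
    weights = weights or [1.0] * len(terms)
    return sum(w * t for w, t in zip(weights, terms))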
The landscape of financial transactions has grown increasingly complex due to the expansion of global economic integration and advancements in information technology. This complexity poses greater challenges in detecting and managing financial fraud. This review explores the role of Graph Neural Networks (GNNs) in addressing these challenges by proposing a unified framework that categorizes existing GNN methodologies applied to financial fraud detection. Specifically, by examining a series of detailed research questions, this review delves into the suitability of GNNs for financial fraud detection, their deployment in real-world scenarios, and the design considerations that enhance their effectiveness. The review reveals that GNNs are exceptionally adept at capturing complex relational patterns and dynamics within financial networks, significantly outperforming traditional fraud detection methods. Unlike previous surveys that often overlook the specific potential of GNNs or address it only superficially, our review provides a comprehensive, structured analysis, distinctly focusing on the multifaceted applications and deployments of GNNs in financial fraud detection. It not only highlights the potential of GNNs to improve fraud detection mechanisms but also identifies current gaps and outlines future research directions to enhance their deployment in financial systems. Through a structured review of over 100 studies, this paper contributes to the understanding of GNN applications in financial fraud detection, offering insights into their adaptability and potential integration strategies.
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. Classification methods based on graph neural networks (GNNs) can handle encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology for GCN-based classification, called the Flow Mapping Graph (FMG). FMG establishes sequential edges between vertices according to the arrival order of packets and establishes jump-order edges between vertices by connecting packets in different bursts with the same direction. It not only reflects the temporal characteristics of packets but also strengthens the relationship between client or server packets. On top of FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic. The encrypted stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the TMC-GCN model achieves an accuracy of 96.13%, a recall of 95.04%, and an F1-score of 94.54%.
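One plausible reading of the FMG construction described above, as a networkx sketch: packets are vertices, sequential edges follow arrival order, and a jump-order edge links a packet to the most recent same-direction packet in an earlier burst (a burst being a maximal run of equal directions). Which same-direction pairs the paper actually connects is an assumption here, as is the +1/-1 direction encoding.

import networkx as nx

def build_fmg(directions):
    # Vertex i is the i-th packet of the flow.
    G = nx.Graph()
    G.add_nodes_from(range(len(directions)))
    burst = [0] * len(directions)  # burst id per packet
    for i in range(1, len(directions)):
        G.add_edge(i - 1, i, kind="sequential")  # arrival-order edge
        burst[i] = burst[i - 1] + (directions[i] != directions[i - 1])
    last_in_dir = {}  # most recent packet index seen per direction
    for i, d in enumerate(directions):
        j = last_in_dir.get(d)
        if j is not None and burst[j] != burst[i]:
            G.add_edge(j, i, kind="jump")  # same direction, different bursts
        last_in_dir[d] = i
    return G

# e.g. +1 = client->server, -1 = server->client
fmg = build_fmg([+1, +1, -1, -1, +1, -1])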
For the diagnostics and health management of lithium-ion batteries, numerous models have been developed to understand their degradation characteristics. These models typically fall into two categories: data-driven models and physical models, each offering unique advantages but also facing limitations. Physics-informed neural networks (PINNs) provide a robust framework to integrate data-driven models with physical principles, ensuring consistency with the underlying physics while enabling generalization across diverse operational conditions. This study introduces a PINN-based approach to reconstruct open circuit voltage (OCV) curves and estimate key ageing parameters at both the cell and electrode levels. These parameters include available capacity, electrode capacities, and lithium inventory capacity. The proposed method integrates OCV reconstruction models as functional components into convolutional neural networks (CNNs) and is validated using a public dataset. The results reveal that the estimated ageing parameters closely align with those obtained through offline OCV tests, with errors in the reconstructed OCV curves remaining within 15 mV. This demonstrates the ability of the method to deliver fast and accurate degradation diagnostics at the electrode level, advancing the potential for precise and efficient battery health management.
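For intuition, a standard electrode-level OCV reconstruction of the kind such models embed expresses the full-cell OCV as the difference of the half-cell potentials, with electrode stoichiometries set by the electrode capacities and the lithium inventory. The sketch below is one common parameterization, not the paper's model; the half-cell curves and all numbers are toy values.

import numpy as np

def reconstruct_ocv(q, u_pos, u_neg, c_pos, c_neg, q_li):
    # Full-cell OCV(q) = U_pos(y) - U_neg(x); stoichiometries x, y follow
    # from the electrode capacities (c_pos, c_neg) and lithium inventory q_li.
    x = q / c_neg            # negative-electrode stoichiometry
    y = (q_li - q) / c_pos   # positive-electrode stoichiometry
    return u_pos(y) - u_neg(x)

def u_pos(y): return 4.2 - 0.8 * y                    # toy positive OCP curve
def u_neg(x): return 0.2 + 0.4 * np.exp(-8 * x)       # toy negative OCP curve

q = np.linspace(0.0, 2.5, 50)  # charge throughput, Ah (toy range)
ocv = reconstruct_ocv(q, u_pos, u_neg, c_pos=3.0, c_neg=3.2, q_li=2.8)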
Accurate estimation of mineralogy from geophysical well logs is crucial for characterizing geological formations, particularly in hydrocarbon exploration, CO2 sequestration, and geothermal energy development. Current techniques, such as multimineral petrophysical analysis, offer detailed insight into mineralogical distribution. However, such analysis is inherently time-intensive and demands substantial geological expertise for accurate model evaluation. Furthermore, traditional machine learning techniques often struggle to predict mineralogy accurately and sometimes produce estimations that violate fundamental physical principles. To address this, we present a new approach using Physics-Integrated Neural Networks (PINNs), which combines data-driven learning with domain-specific physical constraints, embedding petrophysical relationships directly into the neural network architecture so that predictions adhere to physical laws. The methodology is applied to the Broom Creek Deep Saline aquifer, a CO2 sequestration site in the Williston Basin, to predict the volumes of the key mineral constituents (quartz, dolomite, feldspar, anhydrite, and illite) along with porosity. Compared to traditional artificial neural networks (ANNs), the PINN approach demonstrates higher accuracy and better generalizability, significantly enhancing predictive performance on unseen well datasets. The mean error averaged across the three blind wells is 0.123 for the ANN and 0.042 for the PINN, highlighting the superior accuracy of the PINN approach. This method reduces uncertainties in reservoir characterization by improving the reliability of mineralogy and porosity predictions, providing a more robust tool for decision-making in various subsurface geoscience applications.
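One physical constraint such a network can enforce architecturally (an assumption here; the abstract does not spell out its constraint set) is volume-fraction closure: the five mineral fractions plus porosity must be non-negative and sum to one, which a softmax output layer guarantees by construction. Layer sizes below are illustrative.

import torch
import torch.nn as nn

class MineralogyHead(nn.Module):
    # Maps well-log features to [quartz, dolomite, feldspar, anhydrite,
    # illite, porosity]; softmax yields non-negative fractions summing to 1.
    def __init__(self, n_logs, n_components=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_logs, hidden), nn.ReLU(),
            nn.Linear(hidden, n_components),
        )

    def forward(self, logs):
        return torch.softmax(self.net(logs), dim=-1)  # volume-fraction closure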
It is difficult to reduce energy consumption and improve detection accuracy simultaneously, or even to strike a good trade-off between them, when detecting and tracking moving targets, especially in Underwater Wireless Sensor Networks (UWSNs). To this end, this paper investigates the relationship between the Degree of Target Change (DoTC) and the detection period, as well as the impact of individual nodes, and proposes a Hierarchical Detection and Tracking Approach (HDTA). Firstly, the network detection period is determined according to DoTC, which reflects the variation of the target's motion. Secondly, within the network detection period, each detection node calculates its own node detection period based on the detection mutual information. Taking DoTC as the pheromone, an ant colony algorithm is proposed to adaptively adjust the network detection period. Simulation results show that, compared to traditional adaptive period detection schemes, the proposed HDTA with optimization at both the network and node levels improves detection accuracy by 25% and reduces network energy consumption by 10% simultaneously.
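As a toy reading of the network-level step only (the paper's ant-colony pheromone update is not specified in the abstract), a detection period can be adapted to DoTC by shortening it when the target changes quickly and relaxing it when the target is steady. The bounds and gain below are hypothetical.

def update_period(period, dotc, t_min=0.5, t_max=10.0, gain=0.3):
    # Blend the current period toward a target that shrinks as DoTC grows,
    # then clamp to [t_min, t_max]; all constants are illustrative.
    period = period * (1.0 - gain) + gain * (t_max / (1.0 + dotc))
    return max(t_min, min(t_max, period))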
In the context of the rapid iteration of information technology, the Internet of Things (IoT) has established itself as a pivotal hub connecting the digital world and the physical world. Wireless Sensor Networks (WSNs), deeply embedded in the perception layer architecture of the IoT, play a crucial role as “tactile nerve endings.” A vast number of micro sensor nodes are widely distributed in monitoring areas according to preset deployment strategies, continuously and accurately perceiving and collecting real-time data on environmental parameters such as temperature, humidity, light intensity, air pressure, and pollutant concentration. These data are transmitted to the IoT cloud platform through stable and reliable communication links, forming a massive and detailed basic data resource pool. Using cutting-edge big data processing algorithms, machine learning models, and artificial intelligence analysis tools, in-depth mining and intelligent analysis of these multi-source heterogeneous data are conducted to generate high-value-added decision-making bases. This precisely empowers multiple fields, including agriculture, medical and health care, smart homes, environmental science, and industrial manufacturing, driving intelligent transformation and catalyzing society to move towards a new stage of high-quality development. This paper comprehensively analyzes the technical cores of the IoT and WSNs, systematically sorts out the key technologies of WSNs and the evolution of their strategic significance in the IoT system, explores the innovative application scenarios and practical effects of the two in specific vertical fields, and looks forward to technological evolution trends. It provides a detailed and highly practical guiding reference for researchers, technical engineers, and industrial decision-makers.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mutations are influenced by random and uncontrollable factors, and the risk of the next widespread epidemic remains. Dual-target drugs that synergistically act on two targets exhibit strong therapeutic effects and advantages against mutations. In this study, a novel computational workflow was developed to design dual-target SARS-CoV-2 candidate inhibitors, with the Envelope protein and the Main protease selected as the two target proteins. The drug-like molecules of our self-constructed 3D scaffold database were used as high-throughput molecular docking probes for feature extraction of the two target protein pockets. A multi-layer perceptron (MLP) was employed to embed the binding affinities into a latent space as conditional vectors to control the conditional distribution. Utilizing a conditional generative neural network, cG-SchNet, with 3D Euclidean group (E3) symmetries, the conditional probability distributions of molecular 3D structures were acquired and a set of novel SARS-CoV-2 dual-target candidate inhibitors was generated. The 1D probability, 2D joint probability, and 2D cumulative probability distribution results indicate that the generated sets are significantly enriched relative to the training set in the high-binding-affinity region. Among the 201 generated molecules, 42 exhibited a summed binding affinity exceeding 17.0 kcal/mol, and 9 of them exceeded 19.0 kcal/mol, demonstrating structural diversity along with strong dual-target affinities, good absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties, and ease of synthesis. Dual-target drugs are rare and difficult to find, and our “high-throughput docking-multi-conditional generation” workflow offers a wide range of options for designing or optimizing potent dual-target SARS-CoV-2 inhibitors.
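A minimal sketch of the conditioning step described above: an MLP maps the pair of target binding affinities into a latent conditional vector that the generator can consume. The layer sizes and cond_dim are illustrative assumptions; cG-SchNet itself is not reproduced here.

import torch
import torch.nn as nn

class AffinityConditioner(nn.Module):
    # Embed (affinity to Envelope protein, affinity to Main protease)
    # into a conditioning vector for the generative model.
    def __init__(self, cond_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, 32), nn.SiLU(),
            nn.Linear(32, cond_dim),
        )

    def forward(self, affinities):   # shape (batch, 2), e.g. kcal/mol
        return self.mlp(affinities)  # latent conditional vector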
The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple-input multiple-output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with NextG. This article proposes a secure channel estimation technique in a MIMO topology using a norm-estimation model, providing comprehensive insights into protecting NextG network components against adversarial attacks. The technique aims to create long-lasting and secure NextG networks using this extended approach. The viability of MIMO applications and of modern AI-driven methodologies to combat cybersecurity threats is explored in this research. Moreover, the proposed model demonstrates high performance in terms of reliability and accuracy, with a 20% reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques.
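For orientation only (the abstract does not define its norm-estimation model), a common norm-regularized baseline for MIMO channel estimation is ridge least squares over pilot symbols: H_hat = Y X^H (X X^H + lam I)^(-1). Everything in the sketch below, from the antenna dimensions to the noise level, is illustrative.

import numpy as np

def ridge_channel_estimate(Y, X, lam=1e-2):
    # Estimate H (rx x tx) from pilots X (tx x n) and observations
    # Y = H @ X + N via norm-regularized least squares.
    tx = X.shape[0]
    return Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T + lam * np.eye(tx))

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))        # true channel
X = rng.normal(size=(2, 32)) + 1j * rng.normal(size=(2, 32))      # pilot symbols
N = 0.05 * (rng.normal(size=(4, 32)) + 1j * rng.normal(size=(4, 32)))
Y = H @ X + N
print(np.linalg.norm(ridge_channel_estimate(Y, X) - H))           # estimation error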
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model’s ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address the structural and topological changes of dynamic graphs at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and the results are compared with previous studies in this field. The experimental results demonstrate that the proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
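The edge-to-node transformation named above is exactly a line-graph construction, sketched here with networkx; the hosts, flow features, and labels below are made up for illustration.

import networkx as nx

# Original graph: hosts as nodes, one edge per flow carrying its features.
G = nx.Graph()
G.add_edge("10.0.0.1", "10.0.0.2", feat=[320, 12], label="benign")
G.add_edge("10.0.0.2", "10.0.0.3", feat=[9000, 210], label="attack")

# Line graph: every flow becomes a node; two flow-nodes are adjacent when
# their flows share a host, so edge classification becomes node
# classification on L.
L = nx.line_graph(G)
for u, v in L.nodes():                  # copy flow features onto flow-nodes
    L.nodes[(u, v)].update(G.edges[u, v])

print(list(L.nodes(data=True)))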
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study focuses primarily on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving an F1-score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, improvements over the baseline results. Moreover, the weighted average F1-score across all classes and techniques is 0.9886, indicating an overall enhancement. Conversely, methods like Distort lead to decreased accuracy, with an F1-score of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
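The Equalize augmentation named above corresponds to plain histogram equalization, shown in the minimal PIL sketch below; the file name is hypothetical, and the study's full augmentation pipeline is not reproduced.

from PIL import Image, ImageOps

def augment_equalize(path):
    # Histogram-equalize a thin-section image: spreads intensity values
    # across the full range, the augmentation this study found most helpful.
    img = Image.open(path).convert("RGB")
    return ImageOps.equalize(img)

augmented = augment_equalize("thin_section_001.png")  # hypothetical file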
Control signaling is mandatory for the operation and management of all types of communication networks, including Third Generation Partnership Project (3GPP) mobile broadband networks. However, control signaling consumes important and scarce network resources such as bandwidth and processing power, and there have been several reports of it escalating into signaling storms that halt network operations and cause the respective telecom companies large financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. We present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is a tabular comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects, such as architecture, additional signaling, and fidelity. This paper presents an update and an extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
This study investigates photonuclear reaction (γ,n) cross-sections using Bayesian neural network (BNN) analysis. After determining the optimal network architecture, which features two hidden layers, each with 50 hidden nodes, training was conducted for 30,000 iterations to ensure comprehensive data capture. By analyzing the distribution of the absolute errors, which are positively correlated with the cross-section, for the isotope 159Tb, as well as the relative errors, which are unrelated to the cross-section, we confirmed that the network effectively captured the data features without overfitting. Comparison with the TENDL-2021 database demonstrated the BNN's reliability in fitting photonuclear cross-sections with lower average errors. The predictions for nuclei with single and double giant dipole resonance peaks in their cross-sections, the accurate determination of the photoneutron reaction threshold in the low-energy region, and the precise description of trends in the high-energy cross-sections further demonstrate the network's generalization ability on the validation set. This can be attributed to the consistency of the training data. By using consistent training sets from different laboratories, Bayesian neural networks can predict nearby unknown cross-sections based on existing laboratory data, thereby estimating the potential differences between other laboratories' existing data and their own measurement results. Experimental measurements of photonuclear reactions on the newly constructed SLEGS beamline will contribute to clarifying the differences in cross-sections within the existing data.
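A sketch of the stated 2x50 architecture follows. The inputs (Z, A, photon energy) and the use of Monte Carlo dropout as a cheap stand-in for Bayesian uncertainty are assumptions; the paper's actual priors and posterior-sampling scheme are not given in the abstract.

import torch
import torch.nn as nn

class CrossSectionNet(nn.Module):
    # Two hidden layers of 50 nodes, as in the study; dropout is kept
    # active at inference (MC dropout) to produce a predictive spread.
    def __init__(self, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 50), nn.Tanh(), nn.Dropout(p),
            nn.Linear(50, 50), nn.Tanh(), nn.Dropout(p),
            nn.Linear(50, 1),
        )

    def forward(self, zae):   # columns: Z, A, E_gamma (MeV)
        return self.net(zae)

model = CrossSectionNet().train()             # keep dropout active
x = torch.tensor([[65.0, 159.0, 14.0]])       # e.g. 159Tb near the GDR region
samples = torch.stack([model(x) for _ in range(100)])
mean, std = samples.mean(0), samples.std(0)   # predictive mean and spread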
Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy. However, some studies show that inter-class method-call faults account for more than 20% of faults, which means such methods still have certain limitations. To solve these problems, this paper proposes a two-phase software fault localization approach based on relational graph convolutional neural networks (Two-RGCNFL). In Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in the MCDG are extracted by a relational graph convolutional neural network, and a classifier is used to identify the faulty methods. The GraphSMOTE algorithm is then improved to alleviate the impact of class imbalance on classification accuracy. To address the parallel ranking of suspiciousness values in traditional SBFL, in Phase 2, Doc2Vec is used to learn static features, while spectrum information serves as dynamic features. A RankNet model based on a Siamese multi-layer perceptron is constructed to score and rank statements in the faulty method. This work conducts experiments on 5 real projects from the Defects4J benchmark. Experimental results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves the Top-1 accuracy by 262.86%, 29.59%, and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, this work verifies the importance of inter-class dependencies through ablation experiments.
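A hedged sketch of the Phase 2 ranking idea: one shared (Siamese) MLP scores each statement's features, and the standard RankNet pairwise loss pushes the truly faultier statement of each pair higher. The feature layout and layer sizes are assumptions; the paper's exact model is not reproduced.

import torch
import torch.nn as nn

class SiameseRankNet(nn.Module):
    # One shared scoring MLP; training minimizes the RankNet pairwise loss
    # over (more-suspicious, less-suspicious) statement feature pairs.
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_pos, x_neg):
        # P(pos ranked above neg) = sigmoid(s_pos - s_neg); the loss below
        # equals -log sigmoid(diff), since softplus(-d) = log(1 + exp(-d)).
        diff = self.score(x_pos) - self.score(x_neg)
        return nn.functional.softplus(-diff).mean()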