Medical procedures are inherently invasive and carry the risk of inducing pain in both mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks: the salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at the various granular layers of this structure, and finally realizes information exchange between the granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%. This improvement is approximately 17.2% higher than the optimization effects achieved by eight existing complex network robustness optimization algorithms.
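MGIA's operators are not spelled out here, but two building blocks of this kind of degree-preserving robustness optimization can be sketched: the targeted-attack robustness measure and a connectivity-preserving, degree-preserving edge swap. The greedy loop below is a hypothetical stand-in for the evolutionary machinery, written with networkx; it is a minimal sketch, not the MGIA.

```python
import networkx as nx

def attack_robustness(G):
    """Schneider-style R: mean giant-component fraction while nodes are
    removed in descending-degree order (higher R = more attack-robust)."""
    H, n, total = G.copy(), G.number_of_nodes(), 0.0
    for _ in range(n - 1):
        v = max(H.degree, key=lambda kv: kv[1])[0]   # attack highest-degree node
        H.remove_node(v)
        total += len(max(nx.connected_components(H), key=len)) / n
    return total / n

G = nx.barabasi_albert_graph(100, 2, seed=1)         # scale-free test network
base_R = attack_robustness(G)
best, best_R = G.copy(), base_R
for _ in range(200):                                 # greedy hill climbing
    trial = best.copy()
    # swap two edges: degrees stay fixed, connectivity kept, no multi-edges
    nx.connected_double_edge_swap(trial, nswap=1)
    R = attack_robustness(trial)
    if R > best_R:
        best, best_R = trial, R
print(f"R: {base_R:.3f} -> {best_R:.3f}")
```

The swap operation is what keeps every node's degree unchanged while the attack tolerance improves, which is the same constraint set the MGIA enforces through its refresh, crossover, and mutation operators.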
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
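The paper's enhanced replay mechanism is not detailed in the abstract; below is a minimal proportional prioritized-experience-replay buffer of the standard form that Dueling-DDQN variants typically build on. Buffer capacity, alpha, and beta are illustrative assumptions, not the paper's settings.

```python
import numpy as np

class PERBuffer:
    """Proportional prioritized experience replay: sample probability ~ |TD error|^alpha."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios, self.pos = [], np.zeros(capacity), 0

    def push(self, transition):
        max_p = self.prios.max() if self.data else 1.0   # new samples get max priority
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.prios[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.prios[:len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)   # importance-sampling correction
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update(self, idx, td_errors, eps=1e-6):
        self.prios[idx] = np.abs(td_errors) + eps        # priority proportional to |TD error|
```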
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integrodifferential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that incorporates multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
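As a concrete reading of the task split, the sketch below (PyTorch, illustrative layer sizes) gives each delay interval [kτ, (k+1)τ] its own output head on a shared trunk, with an auxiliary output for the integral term and a penalty tying neighboring tasks together at the breaking points; the actual loss terms and architecture in the paper may differ.

```python
import torch
import torch.nn as nn

class MTLSolver(nn.Module):
    """One head per delay interval; each head emits (u_k, I_k), where I_k is
    the auxiliary output representing the integral term on that interval."""
    def __init__(self, n_tasks, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(hidden, 2) for _ in range(n_tasks))

    def forward(self, t, k):
        return self.heads[k](self.trunk(t))          # columns: [u_k, I_k]

def breaking_point_loss(model, tau, k):
    """Continuity of u across the delay-induced breaking point t = k*tau."""
    t = torch.tensor([[k * tau]])
    return ((model(t, k - 1)[:, 0] - model(t, k)[:, 0]) ** 2).mean()

model = MTLSolver(n_tasks=3)
loss_bp = sum(breaking_point_loss(model, tau=1.0, k=k) for k in range(1, 3))
```

In a sequential training scheme of the kind described, the heads would be trained interval by interval, with each converged head supplying the history (reference solution) for the next.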
For the diagnostics and health management of lithium-ion batteries, numerous models have been developed to understand their degradation characteristics. These models typically fall into two categories: data-driven models and physical models, each offering unique advantages but also facing limitations. Physics-informed neural networks (PINNs) provide a robust framework to integrate data-driven models with physical principles, ensuring consistency with underlying physics while enabling generalization across diverse operational conditions. This study introduces a PINN-based approach to reconstruct open circuit voltage (OCV) curves and estimate key ageing parameters at both the cell and electrode levels. These parameters include available capacity, electrode capacities, and lithium inventory capacity. The proposed method integrates OCV reconstruction models as functional components into convolutional neural networks (CNNs) and is validated using a public dataset. The results reveal that the estimated ageing parameters closely align with those obtained through offline OCV tests, with errors in reconstructed OCV curves remaining within 15 mV. This demonstrates the ability of the method to deliver fast and accurate degradation diagnostics at the electrode level, advancing the potential for precise and efficient battery health management.
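The electrode-level idea can be written down compactly: the full-cell OCV is the difference of the two half-cell potential curves, shifted and scaled by the ageing parameters (electrode capacities and the stoichiometric offsets fixed by the lithium inventory). The parameterization below is one common convention and is an assumption here; in the paper this reconstruction is embedded as a functional layer inside a CNN.

```python
import numpy as np

def cell_ocv(q, C_pos, C_neg, y0, x0, U_pos, U_neg):
    """q: extracted charge [Ah]; C_pos/C_neg: electrode capacities [Ah];
    y0/x0: initial stoichiometries (set by the lithium inventory);
    U_pos/U_neg: half-cell open-circuit-potential curves (callables)."""
    y = y0 + q / C_pos        # positive electrode lithiates on discharge
    x = x0 - q / C_neg        # negative electrode delithiates on discharge
    return U_pos(y) - U_neg(x)

# Stand-in half-cell curves (real ones come from coin-cell measurements):
U_pos = lambda y: 4.2 - 0.7 * y
U_neg = lambda x: 0.2 + 0.1 * (1 - x)
q = np.linspace(0, 2.0, 50)   # Ah
v = cell_ocv(q, C_pos=2.4, C_neg=2.6, y0=0.05, x0=0.85, U_pos=U_pos, U_neg=U_neg)
```

Fitting (C_pos, C_neg, y0, x0) so that this curve matches a measured OCV is exactly the electrode-level diagnosis the study automates with its PINN.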
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. GNN-based methods can classify encrypted traffic well; however, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology for GCN-based classification, called the Flow Mapping Graph (FMG). FMG establishes sequential edges between vertices by the arrival order of packets and establishes jump-order edges between vertices by connecting packets in different bursts with the same direction. It not only reflects the temporal characteristics of packets but also strengthens the relationship between client or server packets. Based on FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structural information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic: the encrypted stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the TMC-GCN model achieves an accuracy of 96.13%, a recall of 95.04%, and an F1 score of 94.54%.
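The FMG wiring described above can be reconstructed roughly as follows; which packets inside two same-direction bursts are joined by a jump-order edge is our assumption (last packet of one burst to first of the next), as the abstract does not pin this down.

```python
import networkx as nx

def build_fmg(directions):
    """directions: per-packet direction in arrival order (+1 client->server, -1 back)."""
    G = nx.DiGraph()
    G.add_nodes_from(range(len(directions)))
    for i in range(len(directions) - 1):             # sequential edges: arrival order
        G.add_edge(i, i + 1, kind="seq")
    bursts, start = [], 0                            # maximal same-direction runs
    for i in range(1, len(directions) + 1):
        if i == len(directions) or directions[i] != directions[start]:
            bursts.append((directions[start], list(range(start, i))))
            start = i
    for bi, (d, pkts) in enumerate(bursts):          # jump-order edges: next burst
        for d2, pkts2 in bursts[bi + 1:]:            # with the same direction
            if d2 == d:
                G.add_edge(pkts[-1], pkts2[0], kind="jump")
                break
    return G

fmg = build_fmg([+1, +1, -1, -1, -1, +1, -1])        # toy flow
```

The jump-order edges are what let a GCN propagate information between consecutive client (or server) bursts directly, rather than only through the interleaved packets of the other side.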
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address dynamic graphs' structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and results are compared with previous studies in this field. The experimental results demonstrate that our proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
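The line-graph step named above is easy to see in isolation: each edge of the traffic graph becomes a node of L(G), so per-edge (flow) labels can be handled by an ordinary node classifier. The toy features below are placeholders.

```python
import networkx as nx

G = nx.Graph()                                   # traffic graph: hosts + flows
G.add_edges_from([("h1", "h2"), ("h2", "h3"), ("h1", "h3")])
for e in G.edges:
    G.edges[e]["feat"] = [0.0]                   # per-flow features would go here

L = nx.line_graph(G)                             # node (u, v) of L == edge (u, v) of G
for e in L.nodes:
    L.nodes[e]["feat"] = G.edges[e]["feat"]      # edge features become node features
# Edge classification on G is now node classification on L, and two flows are
# adjacent in L exactly when they share an endpoint host in G.
```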
The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple input multiple output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with NextG. This article proposes a secure channel estimation technique in MIMO topology using a norm-estimation model to provide comprehensive insights into protecting NextG network components against adversarial attacks. The technique aims to create long-lasting and secure NextG networks using this extended approach. The viability of MIMO applications and modern AI-driven methodologies to combat cybersecurity threats are explored in this research. Moreover, the proposed model demonstrates high performance in terms of reliability and accuracy, with a 20% reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques.
This study investigates photonuclear reaction (γ,n) cross-sections using Bayesian neural network (BNN) analysis. After determining the optimal network architecture, which features two hidden layers, each with 50 hidden nodes, training was conducted for 30,000 iterations to ensure comprehensive data capture. By analyzing the distribution of absolute errors, which are positively correlated with the cross-section for the isotope 159Tb, as well as the relative errors, which are unrelated to the cross-section, we confirmed that the network effectively captured the data features without overfitting. Comparison with the TENDL-2021 database demonstrated the BNN's reliability in fitting photonuclear cross-sections with lower average errors. The predictions for nuclei with single and double giant dipole resonance peaks in their cross-sections, the accurate determination of the photoneutron reaction threshold in the low-energy region, and the precise description of trends in the high-energy cross-sections further demonstrate the network's generalization ability on the validation set. This can be attributed to the consistency of the training data. By using consistent training sets from different laboratories, Bayesian neural networks can predict nearby unknown cross-sections based on existing laboratory data, thereby estimating the potential differences between other laboratories' existing data and their own measurement results. Experimental measurements of photonuclear reactions on the newly constructed SLEGS beamline will contribute to clarifying the differences in cross-sections within the existing data.
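For orientation only, the architecture named above (two hidden layers of 50 nodes) is sketched here in PyTorch, with Monte-Carlo dropout standing in for the Bayesian treatment of the weights; a true BNN places posterior distributions over the weights themselves, so this stand-in and the input features are assumptions, not the study's method.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(2, 50), nn.Tanh(), nn.Dropout(0.1),   # inputs: e.g., nuclide descriptor, E_gamma
    nn.Linear(50, 50), nn.Tanh(), nn.Dropout(0.1),
    nn.Linear(50, 1),                               # output: cross-section
)

def predict(x, n_samples=100):
    net.train()                                     # keep dropout active at inference
    ys = torch.stack([net(x) for _ in range(n_samples)])
    return ys.mean(0), ys.std(0)                    # predictive mean and uncertainty

x = torch.tensor([[159.0, 15.0]])                   # (mass number, photon energy in MeV), illustrative
mean, std = predict(x)
```

The predictive spread is the point of the Bayesian setup: it is what lets the network flag where extrapolated cross-sections should not be trusted.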
Combined with the elastic network model (ENM), perturbation response scanning (PRS) has emerged as a robust technique for pinpointing allosteric interactions within proteins. Here, we proposed PRS analysis of drug-target networks (DTNs), which could provide a promising avenue in network medicine. We demonstrated the utility of the method by introducing a deep learning and network perturbation-based framework for drug repurposing in multiple sclerosis (MS). First, the MS comorbidity network was constructed by performing a random walk with restart algorithm, using genes shared between MS and other diseases as seed nodes. Then, based on topological analysis and functional annotation, the neurotransmission module was identified as the "therapeutic module" of MS. Further, perturbation scores of drugs on the module were calculated by constructing the DTN and introducing the PRS analysis, giving a list of repurposable drugs for MS. Mechanism-of-action analysis at both the pathway and structural levels screened dihydroergocristine as a candidate drug for MS, targeting the serotonin 2B receptor (HTR2B). Finally, we established a cuprizone-induced chronic mouse model to evaluate the alteration of HTR2B in mouse brain regions and observed that HTR2B was significantly reduced in the cuprizone-induced mouse cortex. These findings proved that network perturbation modeling is a promising avenue for drug repurposing in MS. As a useful systematic method, our approach can also be used to discover new molecular mechanisms and provide effective candidate drugs for other complex diseases.
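The PRS core is compact enough to sketch: with an elastic network model, the linear response to a force is read off the pseudo-inverse of the Hessian, and scanning perturbations over nodes fills a response map. The random-force averaging below is standard ANM/PRS practice, not the paper's exact code, and how the authors transfer this from proteins to drug-target networks is beyond this sketch.

```python
import numpy as np

def prs_matrix(hessian, n_forces=30, seed=0):
    """hessian: (3N, 3N) ENM Hessian. Returns the (N, N) PRS response map."""
    rng = np.random.default_rng(seed)
    C = np.linalg.pinv(hessian)                 # covariance ~ pseudo-inverse of Hessian
    N = hessian.shape[0] // 3
    P = np.zeros((N, N))
    for j in range(N):                          # scan: perturb each node j in turn
        for _ in range(n_forces):
            f = np.zeros(3 * N)
            f[3*j:3*j+3] = rng.standard_normal(3)   # random force on node j
            dr = C @ f                               # linear response of all nodes
            P[:, j] += np.linalg.norm(dr.reshape(N, 3), axis=1)
    P /= n_forces
    return P                                    # P[i, j]: response of node i to a push at j

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 9))
H = A @ A.T                                     # toy 3-node "Hessian" (SPD stand-in)
P = prs_matrix(H)
```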
Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy. However, some studies show that inter-class method-call faults account for more than 20% of faults, which means such methods still have certain limitations. To solve the above problems, this paper proposes a two-phase software fault localization approach based on relational graph convolutional neural networks (Two-RGCNFL). In Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in the MCDG are extracted by a relational graph convolutional neural network, and a classifier is used to identify the faulty methods. The GraphSMOTE algorithm is then improved to alleviate the impact of class imbalance on classification accuracy. To address the parallel ranking of element suspiciousness values in traditional SBFL, Phase 2 uses Doc2Vec to learn static features, while spectrum information serves as dynamic features. A RankNet model based on a siamese multi-layer perceptron is constructed to score and rank statements in the faulty method. This work conducts experiments on 5 real projects from the Defects4J benchmark. Experimental results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves Top-1 accuracy by 262.86%, 29.59% and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, this work verifies the importance of inter-class dependencies through ablation experiments.
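Phase 2's ranker follows the standard RankNet recipe, which is easy to state: a shared (siamese) scorer maps each statement's feature vector to a score, and a pairwise logistic loss pushes the more suspicious statement's score higher. The feature width and training details below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

scorer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def ranknet_loss(x_i, x_j, label):
    """label = 1.0 if statement i should be ranked above statement j, else 0.0."""
    s_diff = scorer(x_i) - scorer(x_j)           # same weights on both branches
    return F.binary_cross_entropy_with_logits(
        s_diff, torch.full_like(s_diff, label))

x_i, x_j = torch.randn(8, 128), torch.randn(8, 128)  # 128-d = static + spectrum features
loss = ranknet_loss(x_i, x_j, 1.0)
```

Because scores are learned pairwise rather than assigned by a fixed suspiciousness formula, tied elements receive distinct scores, which is precisely what breaks the parallel-ranking problem of classic SBFL.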
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by Carrier Grade networks, offering enhanced network management capabilities compared with traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical challenges is ensuring efficient detection of and recovery from link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mutations are influenced by random and uncontrollable factors, and the risk of the next widespread epidemic remains. Dual-target drugs that act synergistically on two targets exhibit strong therapeutic effects and advantages against mutations. In this study, a novel computational workflow was developed to design dual-target SARS-CoV-2 candidate inhibitors, with the Envelope protein and Main protease selected as the two target proteins. The drug-like molecules of our self-constructed 3D scaffold database were used as high-throughput molecular docking probes for feature extraction of the two target protein pockets. A multi-layer perceptron (MLP) was employed to embed the binding affinities into a latent space as conditional vectors to control the conditional distribution. Utilizing a conditional generative neural network, cG-SchNet, with 3D Euclidean group (E3) symmetries, the conditional probability distributions of molecular 3D structures were acquired and a set of novel SARS-CoV-2 dual-target candidate inhibitors was generated. The 1D probability, 2D joint probability, and 2D cumulative probability distribution results indicate that the generated sets are significantly enriched in the high binding affinity region compared to the training set. Among the 201 generated molecules, 42 exhibited a summed binding affinity exceeding 17.0 kcal/mol, and 9 of them exceeded 19.0 kcal/mol, demonstrating structural diversity along with strong dual-target affinities; good absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties; and ease of synthesis. Dual-target drugs are rare and difficult to find, and our "high-throughput docking-multi-conditional generation" workflow offers a wide range of options for designing or optimizing potent dual-target SARS-CoV-2 inhibitors.
Real-time prediction and precise control of sinter quality are pivotal for energy saving, cost reduction, quality improvement, and efficiency enhancement in the ironmaking process. To advance the accuracy and comprehensiveness of sinter quality prediction, an intelligent flare monitoring system for sintering machine tails was proposed that combines hybrid neural networks integrating a convolutional neural network with long short-term memory (CNN-LSTM) networks. The system utilized a high-temperature thermal imager for image acquisition at the sintering machine tail and employed a zone-triggered method to accurately capture dynamic feature images under challenging conditions of high temperature, high dust, and occlusion. The feature images were then segmented through a triple-iteration multi-thresholding approach based on the maximum between-class variance method to minimize detail loss during segmentation. Leveraging the advantages of CNN and LSTM networks in capturing spatial and temporal information, a comprehensive model for sinter quality prediction was constructed, with inputs including the proportion of the combustion layer, porosity rate, temperature distribution, and image features obtained from the convolutional neural network, and outputs comprising quality indicators such as the underburning index, uniformity index, and FeO content of the sinter. The accuracy is notably increased, achieving a 95.8% hit rate within an error margin of ±1.0. After the system was applied, the average qualified rate of FeO content increased from 87.24% to 89.99%, an improvement of 2.75%. The average monthly solid fuel consumption was reduced from 49.75 to 46.44 kg/t, a 6.65% reduction, underscoring significant energy saving and cost reduction effects.
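As a schematic of the hybrid model only (layer sizes and the three-output head are assumptions), a small CNN encodes each thermal frame, an LSTM aggregates the frame sequence, and a linear head regresses the quality indicators:

```python
import torch
import torch.nn as nn

class SinterQualityNet(nn.Module):
    def __init__(self, n_outputs=3):                    # e.g., underburning, uniformity, FeO
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())      # one 32-d vector per frame
        self.lstm = nn.LSTM(32, 64, batch_first=True)   # temporal aggregation
        self.head = nn.Linear(64, n_outputs)

    def forward(self, frames):                          # frames: (B, T, 1, H, W)
        B, T = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])

y = SinterQualityNet()(torch.randn(2, 8, 1, 64, 64))    # -> (2, 3)
```

In the deployed system the per-frame CNN features would be concatenated with the hand-crafted inputs (combustion-layer proportion, porosity rate, temperature distribution) before the recurrent stage.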
Discrete fracture networks (DFNs) commonly existing in natural rock masses play an important role in the geological complexity that can influence rock fracturing behaviour during fluid injection. This paper simulated the hydraulic fracturing process in lab-scale coal samples with DFNs, and the induced seismic activities, by the discrete element method (DEM). The effects of DFNs on hydraulic fracturing, induced seismicity, and elastic property changes are summarized. Denser DFNs comprehensively decrease the peak injection pressure and injection duration. The proportion of strong seismic events increases first and then decreases with increasing DFN density. In addition, the relative modulus of the rock mass is derived innovatively from the breakdown pressure, breakdown fracture length, and the related initiation time. Increasing DFN densities at large (35–60 degrees) and small (0–30 degrees) fracture dip angles show opposite evolution trends in relative modulus. The transitional point (dip angle) between the opposite trends is also proportionally affected by the friction angle of the rock mass. The modelling results have much practical value for inferring the density and geometry of pre-existing fractures and the elastic properties of the rock mass in the field, simply based on hydraulic fracturing and induced seismicity monitoring data.
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm the memory of switches with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs, meeting recovery time requirements and optimizing switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without requiring controller communication and utilizes the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR also employs flow entry aggregation techniques to reduce switch memory usage: instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address, reducing the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated through the network emulator Mininet 2.3.1 with Ryu 3.1 as the SDN controller. For different numbers of active flows, hosts per edge switch, and network sizes, the proposed model outperformed various failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, the last representing the number of flow entry updates needed to recover from a failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared to traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
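The proactive-backup idea reduces to a few lines on a toy topology: take the k shortest paths between two switches, install the first as primary, and keep the rest as precomputed local fallbacks (in OpenFlow these would back the Fast Failover group buckets). A minimal sketch with networkx:

```python
import itertools
import networkx as nx

topo = nx.Graph()
topo.add_edges_from([("s1", "s2"), ("s2", "s4"), ("s1", "s3"),
                     ("s3", "s4"), ("s2", "s3")])

def k_shortest_paths(G, src, dst, k=3):
    """First k loop-free paths, shortest first (Yen-style enumeration)."""
    return list(itertools.islice(nx.shortest_simple_paths(G, src, dst), k))

primary, *backups = k_shortest_paths(topo, "s1", "s4")
print("primary:", primary)   # e.g., ['s1', 's2', 's4']
print("backups:", backups)   # precomputed alternatives, usable without the controller
```

Matching on the destination switch rather than the destination host is orthogonal to this path computation: it collapses the many per-host entries that would otherwise guard each backup path into one aggregated entry per switch.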
Control signaling is mandatory for the operation and management of all types of communication networks, including the Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling turning into signaling storms, halting network operations and causing the respective telecom companies big financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. In this paper, we present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is a tabular comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects such as architecture, additional signaling, and fidelity. This paper presents an update and an extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
BACKGROUND: Mitochondrial genes are involved in tumor metabolism in ovarian cancer (OC) and affect immune cell infiltration and treatment responses. AIM: To predict prognosis and immunotherapy response in patients diagnosed with OC using mitochondrial genes and neural networks. METHODS: Prognosis, immunotherapy efficacy, and next-generation sequencing data of patients with OC were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus. Mitochondrial genes were sourced from the MitoCarta3.0 database. The discovery cohort for model construction was created from 70% of the patients, whereas the remaining 30% constituted the validation cohort. Using the expression of mitochondrial genes as the predictor variable and based on a neural network algorithm, the overall survival time and immunotherapy efficacy (complete or partial response) of patients were predicted. RESULTS: In total, 375 patients with OC were included to construct the prognostic model, and 26 patients were included to construct the immunotherapy efficacy model. The average area under the receiver operating characteristic curve of the prognostic model was 0.7268 [95% confidence interval (CI): 0.7258-0.7278] in the discovery cohort and 0.6475 (95% CI: 0.6466-0.6484) in the validation cohort. The average area under the receiver operating characteristic curve of the immunotherapy efficacy model was 0.9444 (95% CI: 0.8333-1.0000) in the discovery cohort and 0.9167 (95% CI: 0.6667-1.0000) in the validation cohort. CONCLUSION: The application of mitochondrial genes and neural networks has the potential to predict prognosis and immunotherapy response in patients with OC, providing valuable insights into personalized treatment strategies.
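A hedged outline of the modelling step, with random stand-in data: a small neural network maps mitochondrial-gene expression to a binary outcome under the 70/30 split reported above and is judged by the area under the ROC curve. The layer sizes and feature count are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X = np.random.rand(375, 200)          # 375 patients x mitochondrial-gene expression
y = np.random.randint(0, 2, 375)      # outcome label (stand-in)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```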
It is difficult to improve energy consumption and detection accuracy simultaneously, or even to obtain a good trade-off between them, when detecting and tracking moving targets, especially for Underwater Wireless Sensor Networks (UWSNs). To this end, this paper investigates the relationship between the Degree of Target Change (DoTC) and the detection period, as well as the impact of individual nodes. A Hierarchical Detection and Tracking Approach (HDTA) is proposed. Firstly, the network detection period is determined according to DoTC, which reflects the variation of target motion. Secondly, during the network detection period, each detection node calculates its own node detection period based on the detection mutual information. Taking DoTC as the pheromone, an ant colony algorithm is proposed to adaptively adjust the network detection period. The simulation results show that the proposed HDTA, with optimization at both the network and node levels, simultaneously improves detection accuracy by 25% and network energy consumption by 10%, compared to traditional adaptive period detection schemes.
文摘Medical procedures are inherently invasive and carry the risk of inducing pain to the mind and body.Recently,efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality(VR)technology.VR has been demonstrated to be an effective treatment for pain associated with medical procedures,as well as for chronic pain conditions for which no effective treatment has been established.The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated.However,the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks.The salience network is diminished,while the default mode network is enhanced.Additionally,the medial prefrontal cortex may establish a stronger connection with the default mode network,which could result in a reduction of pain and anxiety.Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
基金National Natural Science Foundation of China(11971211,12171388).
文摘Complex network models are frequently employed for simulating and studyingdiverse real-world complex systems.Among these models,scale-free networks typically exhibit greater fragility to malicious attacks.Consequently,enhancing the robustness of scale-free networks has become a pressing issue.To address this problem,this paper proposes a Multi-Granularity Integration Algorithm(MGIA),which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged,ensuring network connectivity and avoiding the generation of multiple edges.The algorithm generates a multi-granularity structure from the initial network to be optimized,then uses different optimization strategies to optimize the networks at various granular layers in this structure,and finally realizes the information exchange between different granular layers,thereby further enhancing the optimization effect.We propose new network refresh,crossover,and mutation operators to ensure that the optimized network satisfies the given constraints.Meanwhile,we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm.In the experiments,the MGIA enhances the robustness of the scale-free network by 67.6%.This improvement is approximately 17.2%higher than the optimization effects achieved by eight currently existing complex network robustness optimization algorithms.
文摘Satellite edge computing has garnered significant attention from researchers;however,processing a large volume of tasks within multi-node satellite networks still poses considerable challenges.The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers,making it necessary to implement effective task offloading scheduling to enhance user experience.In this paper,we propose a priority-based task scheduling strategy based on a Software-Defined Network(SDN)framework for satellite-terrestrial integrated networks,which clarifies the execution order of tasks based on their priority.Subsequently,we apply a Dueling-Double Deep Q-Network(DDQN)algorithm enhanced with prioritized experience replay to derive a computation offloading strategy,improving the experience replay mechanism within the Dueling-DDQN framework.Next,we utilize the Deep Deterministic Policy Gradient(DDPG)algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks.Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches,effectively reducing task processing latency and thus improving user experience and system efficiency.
文摘Deep neural networks(DNNs)are effective in solving both forward and inverse problems for nonlinear partial differential equations(PDEs).However,conventional DNNs are not effective in handling problems such as delay differential equations(DDEs)and delay integrodifferential equations(DIDEs)with constant delays,primarily due to their low regularity at delayinduced breaking points.In this paper,a DNN method that combines multi-task learning(MTL)which is proposed to solve both the forward and inverse problems of DIDEs.The core idea of this approach is to divide the original equation into multiple tasks based on the delay,using auxiliary outputs to represent the integral terms,followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function.Furthermore,given the increased training dificulty associated with multiple tasks and outputs,we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks.This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs,as demonstrated by comparisons with traditional DNN methods.We validate the effectiveness of this method through several numerical experiments,test various parameter sharing structures in MTL and compare the testing results of these structures.Finally,this method is implemented to solve the inverse problem of nonlinear DIDE and the results show that the unknown parameters of DIDE can be discovered with sparse or noisy data.
基金supported by the Beijing Natural Science Foundation(Grant No.L223013)。
文摘For the diagnostics and health management of lithium-ion batteries,numerous models have been developed to understand their degradation characteristics.These models typically fall into two categories:data-driven models and physical models,each offering unique advantages but also facing limitations.Physics-informed neural networks(PINNs)provide a robust framework to integrate data-driven models with physical principles,ensuring consistency with underlying physics while enabling generalization across diverse operational conditions.This study introduces a PINN-based approach to reconstruct open circuit voltage(OCV)curves and estimate key ageing parameters at both the cell and electrode levels.These parameters include available capacity,electrode capacities,and lithium inventory capacity.The proposed method integrates OCV reconstruction models as functional components into convolutional neural networks(CNNs)and is validated using a public dataset.The results reveal that the estimated ageing parameters closely align with those obtained through offline OCV tests,with errors in reconstructed OCV curves remaining within 15 mV.This demonstrates the ability of the method to deliver fast and accurate degradation diagnostics at the electrode level,advancing the potential for precise and efficient battery health management.
基金supported by the National Key Research and Development Program of China No.2023YFA1009500.
文摘With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. The classification method of encrypted traffic based on GNN can deal with encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called Flow Mapping Graph (FMG). FMG establishes sequential edges between vertexes by the arrival order of packets and establishes jump-order edges between vertexes by connecting packets in different bursts with the same direction. It not only reflects the time characteristics of the packet but also strengthens the relationship between the client or server packets. According to FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic. The encryption stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models in four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the accuracy rate of the TMC-GCN model is 96.13%, the recall rate is 95.04%, and the F1 rate is 94.54%.
文摘The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats.Intrusion detection systems are crucial to network security,playing a pivotal role in safeguarding networks from potential threats.However,in the context of an evolving landscape of sophisticated and elusive attacks,existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts.To address these issues,this paper proposes a real-time network intrusion detection method based on graph neural networks.The proposedmethod leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data.Additionally,a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model’s ability to capture the intricate relationships within the graph structure comprehensively.Furthermore,it uses an integrated graph neural network to address dynamic graphs’structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data.The edge classification problem is effectively transformed into node classification by employing a line graph data representation,which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations.The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets,UNSW-NB15 and NF-ToN-IoT-v2,and results are compared with previous studies in this field.The experimental results demonstrate that our proposed method achieves 99.3%and 99.96%accuracy on the two datasets,respectively,and outperforms the benchmark model in several evaluation metrics.
基金funding from King Saud University through Researchers Supporting Project number(RSP2024R387),King Saud University,Riyadh,Saudi Arabia.
文摘The emergence of next generation networks(NextG),including 5G and beyond,is reshaping the technological landscape of cellular and mobile networks.These networks are sufficiently scaled to interconnect billions of users and devices.Researchers in academia and industry are focusing on technological advancements to achieve highspeed transmission,cell planning,and latency reduction to facilitate emerging applications such as virtual reality,the metaverse,smart cities,smart health,and autonomous vehicles.NextG continuously improves its network functionality to support these applications.Multiple input multiple output(MIMO)technology offers spectral efficiency,dependability,and overall performance in conjunctionwithNextG.This article proposes a secure channel estimation technique in MIMO topology using a norm-estimation model to provide comprehensive insights into protecting NextG network components against adversarial attacks.The technique aims to create long-lasting and secure NextG networks using this extended approach.The viability of MIMO applications and modern AI-driven methodologies to combat cybersecurity threats are explored in this research.Moreover,the proposed model demonstrates high performance in terms of reliability and accuracy,with a 20%reduction in the MalOut-RealOut-Diff metric compared to existing state-of-the-art techniques.
基金supported by National key research and development program(No.2022YFA1602404)the National Natural Science Foundation of China(Nos.12388102,12275338,12005280)the Key Laboratory of Nuclear Data foundation(No.JCKY2022201C152)。
文摘This study investigates photonuclear reaction(γ,n)cross-sections using Bayesian neural network(BNN)analysis.After determining the optimal network architecture,which features two hidden layers,each with 50 hidden nodes,training was conducted for 30,000 iterations to ensure comprehensive data capture.By analyzing the distribution of absolute errors positively correlated with the cross-section for the isotope 159Tb,as well as the relative errors unrelated to the cross-section,we confirmed that the network effectively captured the data features without overfitting.Comparison with the TENDL-2021 Database demonstrated the BNN's reliability in fitting photonuclear cross-sections with lower average errors.The predictions for nuclei with single and double giant dipole resonance peak cross-sections,the accurate determination of the photoneutron reaction threshold in the low-energy region,and the precise description of trends in the high-energy cross-sections further demonstrate the network's generalization ability on the validation set.This can be attributed to the consistency of the training data.By using consistent training sets from different laboratories,Bayesian neural networks can predict nearby unknown cross-sections based on existing laboratory data,thereby estimating the potential differences between other laboratories'existing data and their own measurement results.Experimental measurements of photonuclear reactions on the newly constructed SLEGS beamline will contribute to clarifying the differences in cross-sections within the existing data.
基金supported by the National Natural Science Foundation of China(Grant Nos.:32271292,31872723,32200778,and 22377089)the Jiangsu Students Innovation and Entrepre-neurship Training Program,China(Program No.:202210285081Z)+6 种基金the Project of MOE Key Laboratory of Geriatric Diseases and Immunology,China(Project No.:JYN202404)Proj-ect Funded by the Priority Academic Program Development(PAPD)of Jiangsu Higher Education Institutions,Natural Science Foundation of Jiangsu Province,China(Project No.:BK20220494)Suzhou Medical and Health Technology Innovation Project,China(Grant No.:SKY2022107)the Clinical Research Center of Neuro-logical Disease in The Second Affiliated Hospital of Soochow University,China(Grant No.:ND2022A04)State Key Laboratory of Drug Research(Grant No.:SKLDR-2023-KF-05)Jiangsu Shuang-chuang Program for Doctor,Young Science Talents Promotion Project of Jiangsu Science and Technology Association(Program No.:TJ-2023-019)Young Science Talents Promotion Project of Suzhou Science and Technology Association,Suzhou International Joint Laboratory for Diagnosis and Treatment of Brain Diseases,and startup funding(Grant Nos.:NH21500221,NH21500122,and NH21500123)to Qifei Cong.
文摘Combined with elastic network model(ENM),the perturbation response scanning(PRS)has emerged as a robust technique for pinpointing allosteric interactions within proteins.Here,we proposed the PRS analysis of drug-target networks(DTNs),which could provide a promising avenue in network medicine.We demonstrated the utility of the method by introducing a deep learning and network perturbation-based framework,for drug repurposing of multiple sclerosis(MS).First,the MS comorbidity network was constructed by performing a random walk with restart algorithm based on shared genes between MS and other diseases as seed nodes.Then,based on topological analysis and functional annotation,the neurotransmission module was identified as the“therapeutic module”of MS.Further,perturbation scores of drugs on the module were calculated by constructing the DTN and introducing the PRS analysis,giving a list of repurposable drugs for MS.Mechanism of action analysis both at pathway and structural levels screened dihydroergocristine as a candidate drug of MS by targeting a serotonin receptor of se-rotonin 2B receptor(HTR2B).Finally,we established a cuprizone-induced chronic mouse model to evaluate the alteration of HTR2B in mouse brain regions and observed that HTR2B was significantly reduced in the cuprizone-induced mouse cortex.These findings proved that the network perturbation modeling is a promising avenue for drug repurposing of MS.As a useful systematic method,our approach can also be used to discover the new molecular mechanism and provide effective candidate drugs for other complex diseases.
基金funded by the Youth Fund of the National Natural Science Foundation of China(Grant No.42261070).
文摘Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy. However, some studies show that inter-class method call type faults account for more than 20%, which means such methods still have certain limitations. To solve the above problems, this paper proposes a two-phase software fault localization based on relational graph convolutional neural networks (Two-RGCNFL). Firstly, in Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in MCDG are extracted by using the relational graph convolutional neural network, and the classifier is used to identify the faulty methods. Then, the GraphSMOTE algorithm is improved to alleviate the impact of class imbalance on classification accuracy. Aiming at the problem of parallel ranking of element suspicious values in traditional SBFL technology, in Phase 2, Doc2Vec is used to learn static features, while spectrum information serves as dynamic features. A RankNet model based on siamese multi-layer perceptron is constructed to score and rank statements in the faulty method. This work conducts experiments on 5 real projects of Defects4J benchmark. Experimental results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves the Top-1 accuracy by 262.86%, 29.59% and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, this work verifies the importance of inter-class dependencies through ablation experiments.
文摘Software-defined networking(SDN)is an innovative paradigm that separates the control and data planes,introducing centralized network control.SDN is increasingly being adopted by Carrier Grade networks,offering enhanced networkmanagement capabilities than those of traditional networks.However,because SDN is designed to ensure high-level service availability,it faces additional challenges.One of themost critical challenges is ensuring efficient detection and recovery from link failures in the data plane.Such failures can significantly impact network performance and lead to service outages,making resiliency a key concern for the effective adoption of SDN.Since the recovery process is intrinsically dependent on timely failure detection,this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN.The survey provides a critical comparison of existing failure detection techniques,highlighting their advantages and disadvantages.Additionally,it examines the current failure recovery methods,categorized as either restoration-based or protection-based,and offers a comprehensive comparison of their strengths and limitations.Lastly,future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
基金supported by Interdisciplinary Innova-tion Project of“Bioarchaeology Laboratory”of Jilin University,China,and“MedicineþX”Interdisciplinary Innovation Team of Norman Bethune Health Science Center of Jilin University,China(Grant No.:2022JBGS05).
文摘Severe acute respiratory syndrome coronavirus 2(SARS-CoV-2)mutations are influenced by random and uncontrollable factors,and the risk of the next widespread epidemic remains.Dual-target drugs that synergistically act on two targets exhibit strong therapeutic effects and advantages against mutations.In this study,a novel computational workflow was developed to design dual-target SARS-CoV-2 candidate inhibitors with the Envelope protein and Main protease selected as the two target proteins.The drug-like molecules of our self-constructed 3D scaffold database were used as high-throughput molecular docking probes for feature extraction of two target protein pockets.A multi-layer perceptron(MLP)was employed to embed the binding affinities into a latent space as conditional vectors to control conditional distribution.Utilizing a conditional generative neural network,cG-SchNet,with 3D Euclidean group(E3)symmetries,the conditional probability distributions of molecular 3D structures were acquired and a set of novel SARS-CoV-2 dual-target candidate inhibitors were generated.The 1D probability,2D joint probability,and 2D cumulative probability distribution results indicate that the generated sets are significantly enhanced compared to the training set in the high binding affinity area.Among the 201 generated molecules,42 molecules exhibited a sum binding affinity exceeding 17.0 kcal/mol while 9 of them having a sum binding affinity exceeding 19.0 kcal/mol,demonstrating structure diversity along with strong dual-target affinities,good absorption,distribution,metabolism,excretion,and toxicity(ADMET)properties,and ease of synthesis.Dual-target drugs are rare and difficult to find,and our“high-throughput docking-multi-conditional generation”workflow offers a wide range of options for designing or optimizing potent dual-target SARS-CoV-2 inhibitors.
基金founded by the Open Project Program of Anhui Province Key Laboratory of Metallurgical Engineering and Resources Recycling(Anhui University of Technology)(No.SKF21-06)Research Fund for Young Teachers of Anhui University of Technology in 2020(No.QZ202001).
文摘Real-time prediction and precise control of sinter quality are pivotal for energy saving,cost reduction,quality improvement and efficiency enhancement in the ironmaking process.To advance,the accuracy and comprehensiveness of sinter quality prediction,an intelligent flare monitoring system for sintering machine tails that combines hybrid neural networks integrating convolutional neural network with long short-term memory(CNN-LSTM)networks was proposed.The system utilized a high-temperature thermal imager for image acquisition at the sintering machine tail and employed a zone-triggered method to accurately capture dynamic feature images under challenging conditions of high-temperature,high dust,and occlusion.The feature images were then segmented through a triple-iteration multi-thresholding approach based on the maximum between-class variance method to minimize detail loss during the segmentation process.Leveraging the advantages of CNN and LSTM networks in capturing temporal and spatial information,a comprehensive model for sinter quality prediction was constructed,with inputs including the proportion of combustion layer,porosity rate,temperature distribution,and image features obtained from the convolutional neural network,and outputs comprising quality indicators such as underburning index,uniformity index,and FeO content of the sinter.The accuracy is notably increased,achieving a 95.8%hit rate within an error margin of±1.0.After the system is applied,the average qualified rate of FeO content increases from 87.24%to 89.99%,representing an improvement of 2.75%.The average monthly solid fuel consumption is reduced from 49.75 to 46.44 kg/t,leading to a 6.65%reduction and underscoring significant energy saving and cost reduction effects.
Funding: Supported by the Australian Research Council Linkage Program (LP200301404), the State Key Laboratory of Geohazard Prevention and Geoenvironment Protection (Chengdu University of Technology, SKLGP2021K002), and the National Natural Science Foundation of China (52374101, 32111530138).
Abstract: Discrete fracture networks (DFNs), which commonly exist in natural rock masses, play an important role in geological complexity and can influence rock fracturing behaviour during fluid injection. This paper simulates the hydraulic fracturing process in lab-scale coal samples containing DFNs, together with the induced seismic activity, using the discrete element method (DEM). The effects of DFNs on hydraulic fracturing, induced seismicity, and elastic property changes are summarized. Denser DFNs consistently decrease both the peak injection pressure and the injection duration. The proportion of strong seismic events first increases and then decreases with increasing DFN density. In addition, the relative modulus of the rock mass is derived innovatively from the breakdown pressure, breakdown fracture length, and the related initiation time. Increasing DFN density produces opposite evolution trends in relative modulus for large (35-60 degrees) and small (0-30 degrees) fracture dip angles; the transitional dip angle between these opposite trends is proportionally affected by the friction angle of the rock mass. The modelling results offer practical value for inferring the density and geometry of pre-existing fractures and the elastic properties of a rock mass in the field, based simply on hydraulic fracturing and induced seismicity monitoring data.
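As a rough illustration of the DFN inputs such a simulation consumes, the sketch below generates a simple 2D fracture set with a controllable density and dip-angle range, mirroring the small (0-30 degrees) versus large (35-60 degrees) dip groups discussed above. All parameters, names, and length scales are hypothetical; this is not the paper's DEM model.

```python
import random

def generate_dfn(n_fractures: int, dip_range=(0.0, 30.0), domain=1.0, seed=0):
    """Generate a toy 2D discrete fracture network: random centres, dips, lengths.

    n_fractures controls density; dip_range selects the dip-angle group.
    """
    rng = random.Random(seed)
    return [
        {
            "centre": (rng.uniform(0, domain), rng.uniform(0, domain)),
            "dip_deg": rng.uniform(*dip_range),
            "length": rng.uniform(0.05, 0.2) * domain,
        }
        for _ in range(n_fractures)
    ]

dense_low_dip = generate_dfn(200, dip_range=(0.0, 30.0))    # denser, small dips
sparse_high_dip = generate_dfn(50, dip_range=(35.0, 60.0))  # sparser, large dips
print(len(dense_low_dip), len(sparse_high_dip))
```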
Abstract: Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm switch memory with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements and optimizes switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery while minimizing memory usage. DFR employs the Fast Failover Group of the OpenFlow standard for local recovery without controller communication and uses the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR also employs flow entry aggregation to reduce switch memory usage: instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address, reducing the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, enabling flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated with the Mininet 2.3.1 network emulator and Ryu 3.1 as the SDN controller. Across different numbers of active flows, hosts per edge switch, and network sizes, the proposed model outperformed several failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN-tagging) in terms of recovery time, switch memory consumption, and controller overhead, measured as the number of flow entry updates needed to recover from the failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared to traditional protection methods and minimizes controller load by eliminating controller intervention during failure recovery. These results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
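The backup-path precomputation step can be illustrated with a short Python sketch using networkx's loop-free k-shortest-paths generator: the first path would serve as the primary route and the rest would be pre-installed as fast-failover backups. The toy topology and switch names are placeholders, not the paper's testbed, and the actual DFR rule installation is not shown.

```python
from itertools import islice

import networkx as nx

def k_shortest_backup_paths(g: nx.Graph, src, dst, k: int = 3):
    """Return up to k loop-free shortest paths between two switches."""
    return list(islice(nx.shortest_simple_paths(g, src, dst), k))

# Toy 4-switch topology with a redundant cross-link
g = nx.Graph([("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4"), ("s2", "s3")])
for path in k_shortest_backup_paths(g, "s1", "s4"):
    print(path)  # first path = primary; the rest act as pre-installed backups
```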
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at Qassim University (QU-APC-2024-9/1).
Abstract: Control signaling is mandatory for the operation and management of all types of communication networks, including Third Generation Partnership Project (3GPP) mobile broadband networks. However, control signaling consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling escalating into signaling storms that halted network operations and caused the affected telecom companies substantial financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. We present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution is a tabulated comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects, such as architecture, additional signaling, and fidelity. This paper presents an update and extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
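To give a flavor of the kind of analytical quantification involved, the sketch below computes a textbook utilization figure for a single control node under aggregate signaling load. This M/M/1-style check is offered purely as an illustration of how per-device signaling rates can add up to a storm; it is not one of the paper's models, and all numbers are hypothetical.

```python
def signaling_utilization(n_devices: int, rate_per_device: float,
                          service_rate: float) -> float:
    """Utilization rho = (N * lambda) / mu at one control node (illustrative)."""
    return n_devices * rate_per_device / service_rate

# 1e6 devices each generating 0.5 signaling msgs/s against a node serving 4e5 msgs/s
rho = signaling_utilization(1_000_000, 0.5, 400_000)
print(f"utilization = {rho:.2f} -> {'storm risk' if rho >= 1 else 'stable'}")
```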
Funding: Supported by the National Key Technology Research and Development Program of China (No. 2022YFC2704400 and No. 2022YFC2704405).
Abstract: BACKGROUND: Mitochondrial genes are involved in tumor metabolism in ovarian cancer (OC) and affect immune cell infiltration and treatment responses. AIM: To predict prognosis and immunotherapy response in patients diagnosed with OC using mitochondrial genes and neural networks. METHODS: Prognosis, immunotherapy efficacy, and next-generation sequencing data of patients with OC were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus. Mitochondrial genes were sourced from the MitoCarta3.0 database. The discovery cohort for model construction comprised 70% of the patients, and the remaining 30% constituted the validation cohort. Using the expression of mitochondrial genes as the predictor variable and a neural network algorithm, the overall survival time and immunotherapy efficacy (complete or partial response) of patients were predicted. RESULTS: In total, 375 patients with OC were included to construct the prognostic model, and 26 patients were included to construct the immunotherapy efficacy model. The average area under the receiver operating characteristic curve (AUC) of the prognostic model was 0.7268 [95% confidence interval (CI): 0.7258-0.7278] in the discovery cohort and 0.6475 (95%CI: 0.6466-0.6484) in the validation cohort. The average AUC of the immunotherapy efficacy model was 0.9444 (95%CI: 0.8333-1.0000) in the discovery cohort and 0.9167 (95%CI: 0.6667-1.0000) in the validation cohort. CONCLUSION: The application of mitochondrial genes and neural networks can predict prognosis and immunotherapy response in patients with OC, providing valuable insights into personalized treatment strategies.
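The modeling setup (gene expression in, neural network classifier, 70/30 discovery/validation split, AUC evaluation) can be sketched in a few lines of scikit-learn. The data here are synthetic stand-ins with made-up dimensions; the network size and hyperparameters are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: 375 patients x 200 mitochondrial-gene expression values
rng = np.random.default_rng(0)
X = rng.normal(size=(375, 200))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=375) > 0).astype(int)

# 70% discovery / 30% validation split, mirroring the cohort design
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))
```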
Abstract: It is difficult to improve energy consumption and detection accuracy simultaneously, or even to achieve a trade-off between them, when detecting and tracking moving targets, especially in Underwater Wireless Sensor Networks (UWSNs). To this end, this paper investigates the relationship between the Degree of Target Change (DoTC) and the detection period, as well as the impact of individual nodes, and proposes a Hierarchical Detection and Tracking Approach (HDTA). First, the network detection period is determined according to the DoTC, which reflects the variation of target motion. Second, within the network detection period, each detection node calculates its own node detection period based on detection mutual information. Taking the DoTC as pheromone, an ant colony algorithm is proposed to adaptively adjust the network detection period. Simulation results show that, compared to traditional adaptive period detection schemes, the proposed HDTA with its network-level and node-level optimizations simultaneously improves detection accuracy by 25% and reduces network energy consumption by 10%.
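The core intuition of the first step, that a faster-changing target warrants a shorter detection period, can be sketched as a simple mapping from DoTC to period. The inverse-linear form, bounds, and function name below are illustrative assumptions, not the HDTA formula or its ant-colony adjustment.

```python
def network_detection_period(dotc: float, t_min: float = 0.5,
                             t_max: float = 5.0) -> float:
    """Map Degree of Target Change (DoTC, clamped to 0..1) to a period in seconds.

    High DoTC (rapidly changing target) yields a short period; low DoTC a long one.
    """
    dotc = min(max(dotc, 0.0), 1.0)
    return t_max - dotc * (t_max - t_min)

for dotc in (0.1, 0.5, 0.9):
    print(dotc, "->", round(network_detection_period(dotc), 2), "s")
```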