The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address dynamic graphs' structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and results are compared with previous studies in this field. The experimental results demonstrate that our proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
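The line-graph idea used above can be illustrated with a minimal pure-Python sketch (not the authors' implementation; the toy flows, hosts, and feature vectors are invented for demonstration): each traffic flow (an edge between hosts) becomes a node of the line graph, and two such nodes are linked when the original flows share a host, so edge classification becomes ordinary node classification.

```python
from itertools import combinations

# Toy traffic flows: (src_host, dst_host, feature_vector). All values
# are hypothetical, purely for illustration.
flows = [
    ("10.0.0.1", "10.0.0.2", [0.3, 1.2]),
    ("10.0.0.2", "10.0.0.3", [9.7, 0.1]),
    ("10.0.0.4", "10.0.0.5", [0.5, 0.9]),
]

# Line-graph construction: each flow (edge) becomes a node carrying the
# edge's feature vector; two line-graph nodes are adjacent when the
# original flows share a host.
line_nodes = {i: feat for i, (_, _, feat) in enumerate(flows)}
line_edges = [
    (i, j)
    for (i, a), (j, b) in combinations(enumerate(flows), 2)
    if {a[0], a[1]} & {b[0], b[1]}
]
print(line_edges)  # flows 0 and 1 share host 10.0.0.2
```

A node-level GNN classifier can then be run directly on `line_nodes`/`line_edges` to label each original flow.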
For the diagnostics and health management of lithium-ion batteries, numerous models have been developed to understand their degradation characteristics. These models typically fall into two categories: data-driven models and physical models, each offering unique advantages but also facing limitations. Physics-informed neural networks (PINNs) provide a robust framework to integrate data-driven models with physical principles, ensuring consistency with underlying physics while enabling generalization across diverse operational conditions. This study introduces a PINN-based approach to reconstruct open circuit voltage (OCV) curves and estimate key ageing parameters at both the cell and electrode levels. These parameters include available capacity, electrode capacities, and lithium inventory capacity. The proposed method integrates OCV reconstruction models as functional components into convolutional neural networks (CNNs) and is validated using a public dataset. The results reveal that the estimated ageing parameters closely align with those obtained through offline OCV tests, with errors in reconstructed OCV curves remaining within 15 mV. This demonstrates the ability of the method to deliver fast and accurate degradation diagnostics at the electrode level, advancing the potential for precise and efficient battery health management.
Accurate estimation of mineralogy from geophysical well logs is crucial for characterizing geological formations, particularly in hydrocarbon exploration, CO₂ sequestration, and geothermal energy development. Current techniques, such as multimineral petrophysical analysis, offer detailed insight into mineralogical distribution. However, such analysis is inherently time-intensive and demands substantial geological expertise for accurate model evaluation. Furthermore, traditional machine learning techniques often struggle to predict mineralogy accurately and sometimes produce estimations that violate fundamental physical principles. To address this, we present a new approach using Physics-Integrated Neural Networks (PINNs) that combines data-driven learning with domain-specific physical constraints, embedding petrophysical relationships directly into the neural network architecture. This approach enforces that predictions adhere to physical laws. The methodology is applied to the Broom Creek Deep Saline aquifer, a CO₂ sequestration site in the Williston Basin, to predict the volumes of key mineral constituents (quartz, dolomite, feldspar, anhydrite, illite) along with porosity. Compared to traditional artificial neural networks (ANN), the PINN approach demonstrates higher accuracy and better generalizability, significantly enhancing predictive performance on unseen well datasets. The average mean error across the three blind wells is 0.123 for ANN and 0.042 for PINN, highlighting the superior accuracy of the PINN approach. This method reduces uncertainties in reservoir characterization by improving the reliability of mineralogy and porosity predictions, providing a more robust tool for decision-making in various subsurface geoscience applications.
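One common way to make such volumetric predictions respect a physical closure constraint (mineral fractions plus porosity summing to one) is a softmax output layer. The sketch below is illustrative only, not the authors' architecture; the logit values are invented.

```python
import math

def softmax(z):
    """Numerically stable softmax: maps raw scores to positive
    fractions that sum to exactly one."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical raw network outputs (logits) for quartz, dolomite,
# feldspar, anhydrite, illite, and porosity at one depth sample.
logits = [2.1, 0.3, -0.5, -1.2, 0.0, 1.4]
fractions = softmax(logits)

# The closure constraint holds by construction: the fractions sum to 1
# and each lies in (0, 1), so no prediction can violate it.
print(round(sum(fractions), 6))  # → 1.0
```

Wiring the constraint into the output layer, rather than penalizing violations in the loss, guarantees physically admissible predictions even on unseen wells.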
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mutations are influenced by random and uncontrollable factors, and the risk of the next widespread epidemic remains. Dual-target drugs that synergistically act on two targets exhibit strong therapeutic effects and advantages against mutations. In this study, a novel computational workflow was developed to design dual-target SARS-CoV-2 candidate inhibitors, with the Envelope protein and Main protease selected as the two target proteins. The drug-like molecules of our self-constructed 3D scaffold database were used as high-throughput molecular docking probes for feature extraction of the two target protein pockets. A multi-layer perceptron (MLP) was employed to embed the binding affinities into a latent space as conditional vectors to control the conditional distribution. Utilizing a conditional generative neural network, cG-SchNet, with 3D Euclidean group (E3) symmetries, the conditional probability distributions of molecular 3D structures were acquired and a set of novel SARS-CoV-2 dual-target candidate inhibitors was generated. The 1D probability, 2D joint probability, and 2D cumulative probability distribution results indicate that the generated sets are significantly enriched relative to the training set in the high-binding-affinity region. Among the 201 generated molecules, 42 exhibited a summed binding affinity exceeding 17.0 kcal/mol, with 9 of them exceeding 19.0 kcal/mol, demonstrating structural diversity along with strong dual-target affinities; good absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties; and ease of synthesis. Dual-target drugs are rare and difficult to find, and our "high-throughput docking-multi-conditional generation" workflow offers a wide range of options for designing or optimizing potent dual-target SARS-CoV-2 inhibitors.
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving an F1-score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, improvements over the baseline results on the original images. Moreover, the weighted average F1-score across all classes and techniques is 0.9886, indicating an overall enhancement. Conversely, methods like Distort lead to decreased accuracy and F1-score, with an F1-score of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings of this study can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
In this paper, we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems. Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations (SDEs), based on which the loss function is built. The stochastic gradient descent method is applied in the neural network training. Numerical experiments demonstrate the effectiveness of our method.
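The Euler-Maruyama scheme underlying such an approach advances an SDE dX = f(X)dt + g(X)dW in steps X_{k+1} = X_k + f(X_k)Δt + g(X_k)√Δt ξ_k with ξ_k ~ N(0, 1). A minimal sketch for a two-species stochastic Lotka-Volterra system follows; the parameter values, noise model, and step size are invented for demonstration, not taken from the paper.

```python
import random

def euler_maruyama_lv(x0, y0, a, b, c, d, sigma, dt, n_steps, seed=0):
    """Simulate a stochastic Lotka-Volterra system
        dx = (a*x - b*x*y) dt + sigma*x dW1
        dy = (-c*y + d*x*y) dt + sigma*y dW2
    with the Euler-Maruyama scheme, returning the sampled path."""
    rng = random.Random(seed)
    x, y = x0, y0
    path = [(x, y)]
    sqdt = dt ** 0.5
    for _ in range(n_steps):
        dw1 = rng.gauss(0.0, 1.0) * sqdt  # Brownian increments
        dw2 = rng.gauss(0.0, 1.0) * sqdt
        x = x + (a * x - b * x * y) * dt + sigma * x * dw1
        y = y + (-c * y + d * x * y) * dt + sigma * y * dw2
        path.append((x, y))
    return path

# Invented parameters: prey growth 1.0, predation 0.5, predator death 0.8,
# conversion 0.4, noise intensity 0.05.
traj = euler_maruyama_lv(2.0, 1.0, 1.0, 0.5, 0.8, 0.4, 0.05, 0.01, 1000)
```

Sample means and covariances over many such simulated trajectories yield the moment approximations from which a loss against observed data can be built.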
Background: Convolutional neural networks (CNN) have achieved remarkable success in medical image analysis. However, unlike some general-domain tasks where model accuracy is paramount, medical applications demand both accuracy and explainability due to the high stakes affecting patients' lives. Based on model explanations, clinicians can evaluate the diagnostic decisions suggested by CNN. Nevertheless, prior explainable artificial intelligence methods treat medical image tasks akin to general vision tasks, following end-to-end paradigms to generate explanations and frequently overlooking crucial clinical domain knowledge. Methods: We propose a plug-and-play module that explicitly integrates anatomic boundary information into the explanation process for CNN-based thoracopathy classifiers. To generate the anatomic boundary of the lung parenchyma, we utilize a lung segmentation model developed on external public datasets and deploy it on the unseen target dataset to constrain model explanations within the lung parenchyma for the clinical task of thoracopathy classification. Results: Assessed by the intersection over union and Dice similarity coefficient between model-extracted explanations and expert-annotated lesion areas, our method consistently outperformed the baseline devoid of clinical domain knowledge in 71 out of 72 scenarios, encompassing 3 CNN architectures (VGG-11, ResNet-18, and AlexNet), 2 classification settings (binary and multi-label), 3 explanation methods (Saliency Map, Grad-CAM, and Integrated Gradients), and 4 co-occurring thoracic diseases (Atelectasis, Fracture, Mass, and Pneumothorax). Conclusions: We underscore the effectiveness of leveraging radiology knowledge in improving model explanations for CNN and envisage that it could inspire future efforts to integrate clinical domain knowledge into medical image analysis.
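The two overlap metrics used above have simple closed forms: IoU = |A∩B| / |A∪B| and Dice = 2|A∩B| / (|A|+|B|). A small sketch on toy binary masks (the masks are invented; real explanation maps would first be thresholded to binary):

```python
def iou_and_dice(mask_a, mask_b):
    """Intersection-over-union and Dice similarity coefficient of two
    binary masks given as flat lists of 0/1 values."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    sa, sb = sum(mask_a), sum(mask_b)
    union = sa + sb - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (sa + sb) if (sa + sb) else 1.0
    return iou, dice

# Toy 1D masks standing in for a thresholded explanation map and an
# expert-annotated lesion region (overlap in positions 2 and 3).
explanation = [0, 1, 1, 1, 0, 0]
lesion      = [0, 0, 1, 1, 1, 0]
print(iou_and_dice(explanation, lesion))  # IoU 0.5, Dice ≈ 0.667
```

Dice is always at least as large as IoU for the same pair of masks, which is why both are commonly reported together.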
Patients in intensive care units (ICUs) require rapid critical decision making. Modern ICUs are data rich, with information streaming from diverse sources. Machine learning (ML) and neural networks (NN) can leverage the rich data for prognostication and clinical care. They can handle complex nonlinear relationships in medical data and have advantages over traditional predictive methods. A number of models are used: (1) feedforward networks; and (2) recurrent NN and convolutional NN to predict key outcomes such as mortality, length of stay in the ICU, and the likelihood of complications. Current NN models exist in silos; their integration into clinical workflow requires greater transparency on the data that are analyzed. Most models that are accurate enough for use in clinical care operate as 'black boxes' in which the logic behind their decision making is opaque. Advances have occurred to see through the opacity and peer into the processing of the black box. In the near future ML is positioned to help in clinical decision making far beyond what is currently possible. Transparency is the first step toward validation, which is followed by clinical trust and adoption. In summary, NNs have the transformative ability to enhance predictive accuracy and improve patient management in ICUs. The concept should soon be turning into reality.
The Internet of Things (IoT) ecosystem faces growing security challenges: it is projected to reach 76.88 billion devices by 2025 and a $1.4 trillion market value by 2027, operating in distributed networks with resource limitations and diverse system architectures. Conventional intrusion detection systems (IDS) face scalability problems and trust-related issues, while blockchain-based solutions are limited by low transaction throughput (Bitcoin: 7 TPS (transactions per second), Ethereum: 15-30 TPS) and high latency. This research introduces MBID, a Multi-Tier Blockchain Intrusion Detection System with AI-enhanced detection, which addresses these problems in very large IoT networks. The MBID system uses a four-tier architecture comprising device, edge, fog, and cloud layers with blockchain implementations and Physics-Informed Neural Networks (PINNs) for edge-based anomaly detection, together with a dual consensus mechanism that uses Honesty-based Distributed Proof-of-Authority (HDPoA) and Delegated Proof of Stake (DPoS). The system achieves scalability and efficiency through the combination of dynamic sharding and InterPlanetary File System (IPFS) integration. Experimental evaluations demonstrate exceptional performance, achieving a detection accuracy of 99.84%, an ultra-low false positive rate of 0.01% with a false negative rate of 0.15%, and a near-instantaneous edge detection latency of 0.40 ms. The system demonstrated an aggregate throughput of 214.57 TPS in a 3-shard configuration, providing a clear, evidence-based path for horizontal scaling to support millions of devices. The proposed architecture represents a significant advancement in blockchain-based security for IoT networks, effectively balancing the trade-offs between scalability, security, and decentralization.
Artificial intelligence (AI) is a revolutionary problem-solver across various domains, including scientific research. Its application to chemical processes holds remarkable potential for rapid optimization of protocols and methods. A notable application of AI is in the photo-Fenton degradation of organic compounds. Despite the high novelty and recent surge of interest in this area, a comprehensive synthesis of the existing literature on AI applications in the photo-Fenton process is lacking. This review aims to bridge this gap by providing an in-depth summary of the state-of-the-art use of artificial neural networks (ANN) in the photo-Fenton process, with the goal of aiding researchers in the water treatment field to identify the most crucial and relevant variables. It examines the types and architectures of ANNs, input and output variables, and the efficiency of these networks. The findings reveal a rapidly expanding field, with increasing publications highlighting AI's potential to optimize the photo-Fenton process. This review also discusses the benefits and drawbacks of using ANNs, emphasizing the need for further research to advance this promising area.
3D medical image reconstruction has significantly enhanced diagnostic accuracy, yet the reliance on densely sampled projection data remains a major limitation in clinical practice. Sparse-angle X-ray imaging, though safer and faster, poses challenges for accurate volumetric reconstruction due to limited spatial information. This study proposes a 3D reconstruction neural network based on adaptive weight fusion (AdapFusionNet) to achieve high-quality 3D medical image reconstruction from sparse-angle X-ray images. To address the issue of spatial inconsistency in multi-angle image reconstruction, an innovative adaptive fusion module was designed to score initial reconstruction results during the inference stage and perform weighted fusion, thereby improving the final reconstruction quality. The reconstruction network is built on an autoencoder (AE) framework and uses orthogonal-angle X-ray images (frontal and lateral projections) as inputs. The encoder extracts 2D features, which the decoder maps into 3D space. This study utilizes a lung CT dataset to obtain complete three-dimensional volumetric data, from which digitally reconstructed radiographs (DRR) are generated at various angles to simulate X-ray images. Since real-world clinical X-ray images rarely come with perfectly corresponding 3D "ground truth", using CT scans as the three-dimensional reference effectively supports the training and evaluation of deep networks for sparse-angle X-ray 3D reconstruction. Experiments conducted on the LIDC-IDRI dataset with simulated X-ray images (DRR images) as training data demonstrate the superior performance of AdapFusionNet compared to other fusion methods. Quantitative results show that AdapFusionNet achieves SSIM, PSNR, and MAE values of 0.332, 13.404, and 0.163, respectively, outperforming other methods (SingleViewNet: 0.289, 12.363, 0.182; AvgFusionNet: 0.306, 13.384, 0.159). Qualitative analysis further confirms that AdapFusionNet significantly enhances the reconstruction of lung and chest contours while effectively reducing noise during the reconstruction process. The findings demonstrate that AdapFusionNet offers significant advantages in 3D reconstruction of sparse-angle X-ray images.
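The score-then-fuse idea can be sketched generically: candidates are weighted by softmax over their quality scores, so better-scored reconstructions dominate the fused result (an illustrative sketch only, with invented toy "voxel" vectors and hypothetical scores, not the paper's learned fusion module):

```python
import math

def adaptive_fusion(candidates, scores):
    """Fuse candidate reconstructions (flattened to 1D vectors here for
    simplicity) using softmax weights derived from quality scores."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    total = sum(w)
    w = [v / total for v in w]  # weights sum to 1
    fused = [sum(wk * c[i] for wk, c in zip(w, candidates))
             for i in range(len(candidates[0]))]
    return fused, w

# Two toy vectors standing in for frontal- and lateral-view initial
# reconstructions; the scores are hypothetical quality estimates.
fused, weights = adaptive_fusion([[0.0, 1.0], [1.0, 0.0]], [2.0, 0.0])
```

Averaging (as in an AvgFusionNet-style baseline) is the special case of equal scores; learned scoring lets the fusion favor whichever view reconstructs a given region more reliably.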
Electric vehicles (EVs) are gradually being deployed in the transportation sector. Although they have a high impact on reducing greenhouse gas emissions, their penetration is challenged by their random energy demand and the difficult scheduling of their optimal charging. To cope with these problems, this paper presents a novel approach for energy demand forecasting at a photovoltaic grid-connected microgrid EV charging station. The present study is part of a comprehensive framework involving emerging technologies such as drones and artificial intelligence designed to support the EV charging scheduling task. By using predictive algorithms for solar generation and load demand estimation, this approach aims to ensure dynamic and efficient energy flow between the solar energy source, the grid, and the electric vehicles. The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records. Therefore, various forecasters based on Long Short-Term Memory, Gated Recurrent Unit, and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University (Italy). The developed forecasters were evaluated and compared according to different metrics, including R, RMSE, MAE, and MAPE. We found that the obtained R values for both PV power generation and energy demand ranged between 97% and 98%. These findings can be used for reliable and efficient decision-making on the management side of the optimal scheduling of charging operations.
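The error metrics named above have standard definitions, sketched here for reference (the toy demand series and forecasts are invented for illustration):

```python
def rmse(y_true, y_pred):
    """Root mean square error: penalizes large deviations quadratically."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes no true value is zero."""
    n = len(y_true)
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n

# Toy hourly demand series (kWh) and hypothetical forecasts.
actual   = [10.0, 12.0, 8.0, 11.0]
forecast = [ 9.5, 12.5, 8.2, 10.6]
print(rmse(actual, forecast), mae(actual, forecast), mape(actual, forecast))
```

Reporting all three together is useful because RMSE highlights occasional large misses, MAE the typical miss, and MAPE the relative miss independent of the demand scale.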
Breast cancer (BC) remains a leading malignancy among women, resulting in high mortality rates. Early and accurate detection is crucial for improving patient outcomes. Traditional diagnostic tools, while effective, have limitations that reduce their accessibility and accuracy. This study investigates the use of convolutional neural networks (CNNs) to enhance the diagnostic process of BC histopathology. Utilizing the BreakHis dataset, which contains thousands of histopathological images, we developed a CNN model designed to improve the speed and accuracy of image analysis. Our CNN architecture was designed with multiple convolutional layers, max-pooling layers, and a fully connected network optimized for feature extraction and classification. Hyperparameter tuning was conducted to identify the optimal learning rate, batch size, and number of epochs, ensuring robust model performance. The dataset was divided into training (80%), validation (10%), and testing (10%) subsets, with performance evaluated using accuracy, precision, recall, and F1-score metrics. Our CNN model achieved a magnification-independent accuracy of 97.72%, with specific accuracies of 97.50% at 40×, 97.61% at 100×, 99.06% at 200×, and 97.25% at 400× magnification levels. These results demonstrate the model's superior performance relative to existing methods. The integration of CNNs in diagnostic workflows can potentially reduce pathologist workload, minimize interpretation errors, and increase the availability of diagnostic testing, thereby improving BC management and patient survival rates. This study highlights the effectiveness of deep learning in automating BC histopathological classification and underscores the potential for AI-driven diagnostic solutions to improve patient care.
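The 80/10/10 split protocol mentioned above can be sketched in a few lines (an illustrative sketch; integer sample IDs stand in for image paths, and the seed is arbitrary):

```python
import random

def split_dataset(items, seed=42):
    """Shuffle a list of samples and split it into 80% training,
    10% validation, and 10% test subsets."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    items = items[:]
    rng.shuffle(items)
    n = len(items)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # → 80 10 10
```

For histopathology specifically, splitting at the patient level rather than the image level avoids leaking near-duplicate slides of one patient across subsets.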
Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and solving the data isolation problem faced by centralized GNNs in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored. It is still unclear whether the privacy leakage risks of centralized GNNs will also be introduced in FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. Compared with prior works on centralized GNNs, in PIAFGNN the attacker can only obtain the global embedding gradient distributed by the central server. The attacker converts the task of stealing the target user's local embeddings into a regression problem, using a regression model to generate the target graph node embeddings. By training shadow models and property classifiers, the attacker can infer the basic property information within the target graph that is of interest. Experiments on three benchmark graph datasets demonstrate that PIAFGNN achieves attack accuracy of over 70% in most cases, even approaching the attack accuracy of inference attacks against centralized GNNs in some instances, which is much higher than that of the random guessing method. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without affecting the model's performance on its main classification tasks.
Accurate estimation of evapotranspiration (ET) is crucial for efficient water resource management, particularly in the face of climate change and increasing water scarcity. This study performs a bibliometric analysis of 352 articles and a systematic review of 35 peer-reviewed papers, selected according to PRISMA guidelines, to evaluate the performance of Hybrid Artificial Neural Networks (HANNs) in ET estimation. The findings demonstrate that HANNs, particularly those combining Multilayer Perceptrons (MLPs), Recurrent Neural Networks (RNNs), and Convolutional Neural Networks (CNNs), are highly effective in capturing the complex nonlinear relationships and temporal dependencies characteristic of hydrological processes. These hybrid models, often integrated with optimization algorithms and fuzzy logic frameworks, significantly improve the predictive accuracy and generalization capabilities of ET estimation. The growing adoption of advanced evaluation metrics, such as Kling-Gupta Efficiency (KGE) and Taylor diagrams, highlights the increasing demand for more robust performance assessments beyond traditional methods. Despite the promising results, challenges remain, particularly regarding model interpretability, computational efficiency, and data scarcity. Future research should prioritize the integration of interpretability techniques, such as attention mechanisms, Local Interpretable Model-Agnostic Explanations (LIME), and feature importance analysis, to enhance model transparency and foster stakeholder trust. Additionally, improving the scalability and computational efficiency of HANN models is crucial, especially for large-scale, real-world applications. Approaches such as transfer learning, parallel processing, and hyperparameter optimization will be essential in overcoming these challenges. This study underscores the transformative potential of HANN models for precise ET estimation, particularly in water-scarce and climate-vulnerable regions. By integrating CNNs for automatic feature extraction and leveraging hybrid architectures, HANNs offer considerable advantages for optimizing water management, particularly in agriculture. Addressing challenges related to interpretability and scalability will be vital to ensuring the widespread deployment and operational success of HANNs in global water resource management.
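The Kling-Gupta Efficiency mentioned above decomposes model skill into correlation, variability, and bias terms: KGE = 1 − √((r−1)² + (α−1)² + (β−1)²), where r is the Pearson correlation, α the ratio of simulated to observed standard deviations, and β the ratio of means. A minimal sketch (toy observation series invented for illustration):

```python
def kge(sim, obs):
    """Kling-Gupta Efficiency; KGE = 1 indicates a perfect fit."""
    n = len(obs)
    mu_s, mu_o = sum(sim) / n, sum(obs) / n
    var_s = sum((s - mu_s) ** 2 for s in sim) / n
    var_o = sum((o - mu_o) ** 2 for o in obs) / n
    cov = sum((s - mu_s) * (o - mu_o) for s, o in zip(sim, obs)) / n
    r = cov / (var_s ** 0.5 * var_o ** 0.5)   # correlation component
    alpha = (var_s / var_o) ** 0.5            # variability ratio
    beta = mu_s / mu_o                        # bias ratio
    return 1 - ((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2) ** 0.5

# A perfect simulation scores exactly 1.
obs = [1.0, 2.0, 3.0, 4.0]
print(round(kge(obs, obs), 6))  # → 1.0
```

Unlike a single correlation coefficient, KGE penalizes a model that doubles every observation (r = 1 but α = β = 2), which is why reviews such as this one favor it for hydrological evaluation.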
Spiking neural networks (SNNs) represent a biologically inspired computational framework that bridges neuroscience and artificial intelligence, offering unique advantages in temporal data processing, energy efficiency, and real-time decision-making. This paper explores the evolution of SNN technologies, emphasizing their integration with advanced learning mechanisms such as spike-timing-dependent plasticity (STDP) and hybridization with deep learning architectures. Leveraging memristors as nanoscale synaptic devices, we demonstrate significant enhancements in energy efficiency, adaptability, and scalability, addressing key challenges in neuromorphic computing. Through phase portraits and nonlinear dynamics analysis, we validate the system's stability and robustness under diverse workloads. These advancements position SNNs as a transformative technology for applications in robotics, IoT, and adaptive low-power AI systems, paving the way for future innovations in neuromorphic hardware and hybrid learning paradigms.
Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
Graph Neural Networks (GNNs) have demonstrated outstanding capabilities in processing graph-structured data and are increasingly being integrated into large-scale pre-trained models, such as Large Language Models (LLMs), to enhance structural reasoning, knowledge retrieval, and memory management. The expansion of their application scope imposes higher requirements on the robustness of GNNs. However, as GNNs are applied to more dynamic and heterogeneous environments, they become increasingly vulnerable to real-world perturbations. In particular, graph data frequently encounters joint adversarial perturbations that simultaneously affect both structures and features, which are significantly more challenging than isolated attacks. These disruptions, caused by incomplete data, malicious attacks, or inherent noise, pose substantial threats to the stable and reliable performance of traditional GNN models. To address this issue, this study proposes the Dual-Shield Graph Neural Network (DSGNN), a defense model that simultaneously mitigates structural and feature perturbations. DSGNN utilizes two parallel GNN channels to independently process structural noise and feature noise, and introduces an adaptive fusion mechanism that integrates information from both pathways to generate robust node representations. Theoretical analysis demonstrates that DSGNN achieves a tighter robustness boundary under joint perturbations compared to conventional single-channel methods. Experimental evaluations across the Cora, CiteSeer, and Industry datasets show that DSGNN achieves the highest average classification accuracy under various adversarial settings, reaching 81.24%, 71.94%, and 81.66%, respectively, outperforming GNNGuard, GCN-Jaccard, GCN-SVD, RGCN, and NoisyGNN. These results underscore the importance of multi-view perturbation decoupling in constructing resilient GNN models for real-world applications.
Spiking neural networks (SNN) represent a paradigm shift toward discrete, event-driven neural computation that mirrors biological brain mechanisms. This survey systematically examines current SNN research, focusing on training methodologies, hardware implementations, and practical applications. We analyze four major training paradigms: ANN-to-SNN conversion, direct gradient-based training, spike-timing-dependent plasticity (STDP), and hybrid approaches. Our review encompasses major specialized hardware platforms: Intel Loihi, IBM TrueNorth, SpiNNaker, and BrainScaleS, analyzing their capabilities and constraints. We survey applications spanning computer vision, robotics, edge computing, and brain-computer interfaces, identifying where SNN provide compelling advantages. Our comparative analysis reveals SNN offer significant energy efficiency improvements (1000-10000× reduction) and natural temporal processing, while facing challenges in scalability and training complexity. We identify critical research directions including improved gradient estimation, standardized benchmarking protocols, and hardware-software co-design approaches. This survey provides researchers and practitioners with a comprehensive understanding of current SNN capabilities, limitations, and future prospects.
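To make the discrete, event-driven computation described in this survey abstract concrete, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python/NumPy. The parameters and the hard-reset rule are illustrative textbook choices, not taken from any of the surveyed platforms.

```python
import numpy as np

def lif_neuron(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0, tau=10.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; returns membrane trace and spike times."""
    v = v_rest
    spikes, trace = [], []
    for t, i_t in enumerate(input_current):
        # Leaky integration: dv/dt = (-(v - v_rest) + I) / tau
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:      # threshold crossing emits a discrete spike event
            spikes.append(t)
            v = v_reset        # hard reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold input produces a regular spike train
trace, spikes = lif_neuron(np.full(100, 1.5))
print(len(spikes), trace.max())
```

Information is carried by the spike times alone, which is what gives SNN hardware its sparse, energy-efficient character.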
Accurate and efficient prediction of the distribution of surface loads on buildings subjected to explosive effects is crucial for rapidly calculating structural dynamic responses, establishing effective protective measures, and designing civil defense engineering solutions. Current state-of-the-art methods face several issues: experimental research is difficult and costly to implement, theoretical research is limited to simple geometries and lacks precision, and direct simulations require substantial computational resources. To address these challenges, this paper presents a data-driven method for predicting blast loads on building surfaces. This approach increases both the accuracy and computational efficiency of load predictions when the geometry of the building changes while the explosive yield remains constant, significantly improving its applicability in complex scenarios. This study introduces an innovative encoder-decoder graph neural network model named BlastGraphNet, which uses a message-passing mechanism to predict the overpressure and impulse load distributions on buildings with conventional and complex geometries during explosive events. The model also facilitates related downstream applications, such as damage mode identification and rapid assessment of virtual city explosions. The calculation results indicate that the prediction error of the model for conventional building tests is less than 2%, and its inference speed is 3-4 orders of magnitude faster than that of state-of-the-art numerical methods. In extreme test cases involving buildings with complex geometries and building clusters, the method achieved high accuracy and excellent generalizability. The strong adaptability and generalizability of BlastGraphNet confirm that this novel method enables precise real-time prediction of blast loads and provides a new paradigm for damage assessment in protective engineering.
Abstract: The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model’s ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address dynamic graphs’ structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and results are compared with previous studies in this field. The experimental results demonstrate that our proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
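The line-graph transformation this abstract relies on can be sketched in a few lines: each traffic flow (an edge between two hosts) becomes a node of the new graph, and two such nodes are linked when the underlying flows share a host, so per-edge labels become per-node labels. This is a generic illustration of the construction, not the paper's implementation.

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph: each original edge becomes a node; two such nodes
    are adjacent when the underlying edges share an endpoint."""
    adj = {e: set() for e in edges}
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2):      # shared host => adjacent in the line graph
            adj[e1].add(e2)
            adj[e2].add(e1)
    return adj

# Three flows: A-B, B-C, C-D.  A-B and C-D share no host, so they are not adjacent.
flows = [("A", "B"), ("B", "C"), ("C", "D")]
L = line_graph(flows)
print(L[("A", "B")])   # → {('B', 'C')}
```

Running a node classifier on `L` then labels each original flow, which is exactly how edge classification reduces to node classification.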
Funding: Supported by the Beijing Natural Science Foundation (Grant No. L223013).
Abstract: For the diagnostics and health management of lithium-ion batteries, numerous models have been developed to understand their degradation characteristics. These models typically fall into two categories: data-driven models and physical models, each offering unique advantages but also facing limitations. Physics-informed neural networks (PINNs) provide a robust framework to integrate data-driven models with physical principles, ensuring consistency with the underlying physics while enabling generalization across diverse operational conditions. This study introduces a PINN-based approach to reconstruct open circuit voltage (OCV) curves and estimate key ageing parameters at both the cell and electrode levels. These parameters include available capacity, electrode capacities, and lithium inventory capacity. The proposed method integrates OCV reconstruction models as functional components into convolutional neural networks (CNNs) and is validated using a public dataset. The results reveal that the estimated ageing parameters closely align with those obtained through offline OCV tests, with errors in reconstructed OCV curves remaining within 15 mV. This demonstrates the ability of the method to deliver fast and accurate degradation diagnostics at the electrode level, advancing the potential for precise and efficient battery health management.
Funding: The North Dakota Industrial Commission (NDIC) for their financial support; provided by the University of North Dakota Computational Research Center.
Abstract: Accurate estimation of mineralogy from geophysical well logs is crucial for characterizing geological formations, particularly in hydrocarbon exploration, CO2 sequestration, and geothermal energy development. Current techniques, such as multimineral petrophysical analysis, offer detailed insight into mineralogical distribution. However, such analysis is inherently time-intensive and demands substantial geological expertise for accurate model evaluation. Furthermore, traditional machine learning techniques often struggle to predict mineralogy accurately and sometimes produce estimations that violate fundamental physical principles. To address this, we present a new approach using Physics-Integrated Neural Networks (PINNs) that combines data-driven learning with domain-specific physical constraints, embedding petrophysical relationships directly into the neural network architecture. This approach enforces that predictions adhere to physical laws. The methodology is applied to the Broom Creek Deep Saline aquifer, a CO2 sequestration site in the Williston Basin, to predict the volumes of key mineral constituents (quartz, dolomite, feldspar, anhydrite, and illite) along with porosity. Compared to traditional artificial neural networks (ANN), the PINN approach demonstrates higher accuracy and better generalizability, significantly enhancing predictive performance on unseen well datasets. The average mean error across the three blind wells is 0.123 for ANN and 0.042 for PINN, highlighting the superior accuracy of the PINN approach. This method reduces uncertainties in reservoir characterization by improving the reliability of mineralogy and porosity predictions, providing a more robust tool for decision-making in various subsurface geoscience applications.
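One common way to make a network's predictions respect the physical closure constraint (mineral volume fractions and porosity are non-negative and sum to one) is a softmax output layer. Whether the paper uses exactly this mechanism is an assumption; the snippet only illustrates the idea of building the constraint into the architecture rather than the loss.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical raw network outputs (logits) for
# quartz, dolomite, feldspar, anhydrite, illite, and porosity
logits = np.array([2.0, 1.0, 0.5, -1.0, 0.0, 0.3])
volumes = softmax(logits)
print(volumes.sum())   # sums to 1 regardless of the logits
```

Because the constraint holds by construction, no prediction can violate it, unlike a soft penalty term that only discourages violations.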
Funding: Supported by the Interdisciplinary Innovation Project of the “Bioarchaeology Laboratory” of Jilin University, China, and the “Medicine+X” Interdisciplinary Innovation Team of Norman Bethune Health Science Center of Jilin University, China (Grant No. 2022JBGS05).
Abstract: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mutations are influenced by random and uncontrollable factors, and the risk of the next widespread epidemic remains. Dual-target drugs that synergistically act on two targets exhibit strong therapeutic effects and advantages against mutations. In this study, a novel computational workflow was developed to design dual-target SARS-CoV-2 candidate inhibitors, with the Envelope protein and Main protease selected as the two target proteins. The drug-like molecules of our self-constructed 3D scaffold database were used as high-throughput molecular docking probes for feature extraction of the two target protein pockets. A multi-layer perceptron (MLP) was employed to embed the binding affinities into a latent space as conditional vectors to control the conditional distribution. Utilizing a conditional generative neural network, cG-SchNet, with 3D Euclidean group (E3) symmetries, the conditional probability distributions of molecular 3D structures were acquired and a set of novel SARS-CoV-2 dual-target candidate inhibitors was generated. The 1D probability, 2D joint probability, and 2D cumulative probability distribution results indicate that the generated sets are significantly enhanced compared to the training set in the high-binding-affinity region. Among the 201 generated molecules, 42 exhibited a sum binding affinity exceeding 17.0 kcal/mol, with 9 of them exceeding 19.0 kcal/mol, demonstrating structural diversity along with strong dual-target affinities; good absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties; and ease of synthesis. Dual-target drugs are rare and difficult to find, and our “high-throughput docking-multi-conditional generation” workflow offers a wide range of options for designing or optimizing potent dual-target SARS-CoV-2 inhibitors.
Abstract: The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving an F1-score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, representing improvements over the baseline results. Moreover, the weighted average F1-score across all classes and techniques is 0.9886, indicating an overall enhancement. Conversely, methods like Distort lead to decreased accuracy and F1-score, with an F1-score of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings of this study can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
Funding: Supported by the National Natural Science Foundation of China (11971458, 11471310).
Abstract: In this paper, we propose a neural network approach to learn the parameters of a class of stochastic Lotka-Volterra systems. Approximations of the mean and covariance matrix of the observational variables are obtained from the Euler-Maruyama discretization of the underlying stochastic differential equations (SDEs), based on which the loss function is built. The stochastic gradient descent method is applied in the neural network training. Numerical experiments demonstrate the effectiveness of our method.
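The Euler-Maruyama discretization referred to above can be sketched as follows for a stochastic predator-prey system; the drift/diffusion form and all parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_maruyama_lv(x0, y0, a, b, c, d, sigma, dt, n_steps, rng):
    """Euler-Maruyama simulation of a stochastic Lotka-Volterra system:
       dX = (aX - bXY) dt + sigma*X dW1,   dY = (cXY - dY) dt + sigma*Y dW2."""
    x, y = x0, y0
    path = [(x, y)]
    for _ in range(n_steps):
        # Independent Brownian increments with variance dt
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)
        x += (a * x - b * x * y) * dt + sigma * x * dw1
        y += (c * x * y - d * y) * dt + sigma * y * dw2
        path.append((x, y))
    return np.array(path)

rng = np.random.default_rng(0)
path = euler_maruyama_lv(1.0, 0.5, a=1.0, b=1.5, c=1.0, d=1.0,
                         sigma=0.05, dt=0.01, n_steps=500, rng=rng)
print(path.shape)   # → (501, 2)
```

Sample statistics (mean, covariance) computed over many such paths are what the paper's loss function compares against the observations.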
Abstract: Background: Convolutional neural networks (CNN) have achieved remarkable success in medical image analysis. However, unlike some general-domain tasks where model accuracy is paramount, medical applications demand both accuracy and explainability due to the high stakes affecting patients' lives. Based on model explanations, clinicians can evaluate the diagnostic decisions suggested by CNN. Nevertheless, prior explainable artificial intelligence methods treat medical image tasks akin to general vision tasks, following end-to-end paradigms to generate explanations and frequently overlooking crucial clinical domain knowledge. Methods: We propose a plug-and-play module that explicitly integrates anatomic boundary information into the explanation process for CNN-based thoracopathy classifiers. To generate the anatomic boundary of the lung parenchyma, we utilize a lung segmentation model developed on external public datasets and deploy it on the unseen target dataset to constrain model explanations within the lung parenchyma for the clinical task of thoracopathy classification. Results: Assessed by the intersection over union and dice similarity coefficient between model-extracted explanations and expert-annotated lesion areas, our method consistently outperformed the baseline devoid of clinical domain knowledge in 71 out of 72 scenarios, encompassing 3 CNN architectures (VGG-11, ResNet-18, and AlexNet), 2 classification settings (binary and multi-label), 3 explanation methods (Saliency Map, Grad-CAM, and Integrated Gradients), and 4 co-occurred thoracic diseases (Atelectasis, Fracture, Mass, and Pneumothorax). Conclusions: We underscore the effectiveness of leveraging radiology knowledge in improving model explanations for CNN and envisage that it could inspire future efforts to integrate clinical domain knowledge into medical image analysis.
Abstract: Patients in intensive care units (ICUs) require rapid critical decision making. Modern ICUs are data rich, where information streams from diverse sources. Machine learning (ML) and neural networks (NN) can leverage the rich data for prognostication and clinical care. They can handle complex nonlinear relationships in medical data and have advantages over traditional predictive methods. A number of models are used: (1) feedforward networks; and (2) recurrent NN and convolutional NN to predict key outcomes such as mortality, length of stay in the ICU, and the likelihood of complications. Current NN models exist in silos; their integration into clinical workflow requires greater transparency on the data that are analyzed. Most models that are accurate enough for use in clinical care operate as ‘black boxes’ in which the logic behind their decision making is opaque. Advances have occurred to see through the opacity and peer into the processing of the black box. In the near future, ML is positioned to help in clinical decision making far beyond what is currently possible. Transparency is the first step toward validation, which is followed by clinical trust and adoption. In summary, NNs have the transformative ability to enhance predictive accuracy and improve patient management in ICUs. The concept should soon be turning into reality.
Funding: Supported in part by Multimedia University under the Research Fellow Grant MMUI/250008; in part by Telekom Research & Development Sdn Bhd under Grant RDTC/241149; and by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R140), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The Internet of Things (IoT) ecosystem faces growing security challenges because it is projected to have 76.88 billion devices by 2025 and a $1.4 trillion market value by 2027, operating in distributed networks with resource limitations and diverse system architectures. Conventional intrusion detection systems (IDS) face scalability problems and trust-related issues, while blockchain-based solutions are limited by their low transaction throughput (Bitcoin: 7 TPS (Transactions Per Second); Ethereum: 15-30 TPS) and high latency. This research introduces MBID (Multi-Tier Blockchain Intrusion Detection), a groundbreaking multi-tier blockchain intrusion detection system with AI-enhanced detection, which solves these problems in huge IoT networks. The MBID system uses a four-tier architecture that includes device, edge, fog, and cloud layers, with blockchain implementations and Physics-Informed Neural Networks (PINNs) for edge-based anomaly detection, and a dual consensus mechanism that uses Honesty-based Distributed Proof-of-Authority (HDPoA) and Delegated Proof of Stake (DPoS). The system achieves scalability and efficiency through the combination of dynamic sharding and InterPlanetary File System (IPFS) integration. Experimental evaluations demonstrate exceptional performance, achieving a detection accuracy of 99.84%, an ultra-low false positive rate of 0.01% with a false negative rate of 0.15%, and a near-instantaneous edge detection latency of 0.40 ms. The system demonstrated an aggregate throughput of 214.57 TPS in a 3-shard configuration, providing a clear, evidence-based path for horizontal scaling to support millions of devices with even greater throughput. The proposed architecture represents a significant advancement in blockchain-based security for IoT networks, effectively balancing the trade-offs between scalability, security, and decentralization.
Funding: Financial support provided by the Valencian Regional Government (Grant No. CIPROM2023/037). Davide Palma and Alessandra Bianco Prevot acknowledge support from the Project CH4.0 under the MUR program "Dipartimenti di Eccellenza 2023-2027" (Grant No. CUP: D13C22003520001).
Abstract: Artificial intelligence (AI) is a revolutionary problem-solver across various domains, including scientific research. Its application to chemical processes holds remarkable potential for rapid optimization of protocols and methods. A notable application of AI is in the photo-Fenton degradation of organic compounds. Despite the high novelty and recent surge of interest in this area, a comprehensive synthesis of the existing literature on AI applications in the photo-Fenton process is lacking. This review aims to bridge this gap by providing an in-depth summary of the state-of-the-art use of artificial neural networks (ANN) in the photo-Fenton process, with the goal of aiding researchers in the water treatment field to identify the most crucial and relevant variables. It examines the types and architectures of ANNs, input and output variables, and the efficiency of these networks. The findings reveal a rapidly expanding field, with increasing publications highlighting AI's potential to optimize the photo-Fenton process. This review also discusses the benefits and drawbacks of using ANNs, emphasizing the need for further research to advance this promising area.
Funding: Supported by the Sichuan Science and Technology Program (2023YFSY0026, 2023YFH0004).
Abstract: 3D medical image reconstruction has significantly enhanced diagnostic accuracy, yet the reliance on densely sampled projection data remains a major limitation in clinical practice. Sparse-angle X-ray imaging, though safer and faster, poses challenges for accurate volumetric reconstruction due to limited spatial information. This study proposes a 3D reconstruction neural network based on adaptive weight fusion (AdapFusionNet) to achieve high-quality 3D medical image reconstruction from sparse-angle X-ray images. To address the issue of spatial inconsistency in multi-angle image reconstruction, an innovative adaptive fusion module was designed to score initial reconstruction results during the inference stage and perform weighted fusion, thereby improving the final reconstruction quality. The reconstruction network is built on an autoencoder (AE) framework and uses orthogonal-angle X-ray images (frontal and lateral projections) as inputs. The encoder extracts 2D features, which the decoder maps into 3D space. This study utilizes a lung CT dataset to obtain complete three-dimensional volumetric data, from which digitally reconstructed radiographs (DRR) are generated at various angles to simulate X-ray images. Since real-world clinical X-ray images rarely come with perfectly corresponding 3D “ground truth,” using CT scans as the three-dimensional reference effectively supports the training and evaluation of deep networks for sparse-angle X-ray 3D reconstruction. Experiments conducted on the LIDC-IDRI dataset with simulated X-ray images (DRR images) as training data demonstrate the superior performance of AdapFusionNet compared to other fusion methods. Quantitative results show that AdapFusionNet achieves SSIM, PSNR, and MAE values of 0.332, 13.404, and 0.163, respectively, outperforming other methods (SingleViewNet: 0.289, 12.363, 0.182; AvgFusionNet: 0.306, 13.384, 0.159). Qualitative analysis further confirms that AdapFusionNet significantly enhances the reconstruction of lung and chest contours while effectively reducing noise during the reconstruction process. The findings demonstrate that AdapFusionNet offers significant advantages in the 3D reconstruction of sparse-angle X-ray images.
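A minimal sketch of score-based weighted fusion in the spirit of the adaptive fusion module described above: quality scores for each candidate reconstruction are mapped to softmax weights, and the volumes are averaged with those weights. The scores and volumes below are placeholders, not the network's learned quantities.

```python
import numpy as np

def adaptive_fusion(volumes, scores):
    """Fuse candidate 3D reconstructions using softmax weights derived
    from their per-candidate quality scores."""
    w = np.exp(scores - np.max(scores))   # softmax over candidates
    w = w / w.sum()
    # Weighted sum over the leading (candidate) axis
    return np.tensordot(w, volumes, axes=(0, 0))

# Two toy 2x2x2 candidate volumes; the second gets a much higher score
candidates = np.stack([np.full((2, 2, 2), 0.2), np.full((2, 2, 2), 0.8)])
fused = adaptive_fusion(candidates, np.array([0.1, 2.0]))
print(fused.shape, float(fused[0, 0, 0]))
```

The fused voxel values land between the candidates but are pulled toward the higher-scoring reconstruction, which is the intended effect of adaptive (rather than uniform) fusion.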
Funding: University of Jeddah, Jeddah, Saudi Arabia, Grant No. (UJ-23-SRP-10).
Abstract: Electric vehicles (EVs) are gradually being deployed in the transportation sector. Although they have a high impact on reducing greenhouse gas emissions, their penetration is challenged by their random energy demand and the difficult scheduling of their optimal charging. To cope with these problems, this paper presents a novel approach for forecasting the energy demand of a photovoltaic grid-connected microgrid EV charging station. The present study is part of a comprehensive framework involving emerging technologies such as drones and artificial intelligence designed to support the EV charging scheduling task. By using predictive algorithms for solar generation and load demand estimation, this approach aims at ensuring dynamic and efficient energy flow between the solar energy source, the grid, and the electric vehicles. The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records. Therefore, various forecasters based on Long Short-Term Memory, Gated Recurrent Units, and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University (Italy). The developed forecasters have been evaluated and compared according to different metrics, including R, RMSE, MAE, and MAPE. We found that the obtained R values for both PV power generation and energy demand ranged between 97% and 98%. These study findings can be used for reliable and efficient decision-making on the management side for the optimal scheduling of the charging operations.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. (DGSSR-2024-02-01096).
Abstract: Breast cancer (BC) remains a leading malignancy among women, resulting in high mortality rates. Early and accurate detection is crucial for improving patient outcomes. Traditional diagnostic tools, while effective, have limitations that reduce their accessibility and accuracy. This study investigates the use of convolutional neural networks (CNNs) to enhance the diagnostic process of BC histopathology. Utilizing the BreakHis dataset, which contains thousands of histopathological images, we developed a CNN model designed to improve the speed and accuracy of image analysis. Our CNN architecture was designed with multiple convolutional layers, max-pooling layers, and a fully connected network optimized for feature extraction and classification. Hyperparameter tuning was conducted to identify the optimal learning rate, batch size, and number of epochs, ensuring robust model performance. The dataset was divided into training (80%), validation (10%), and testing (10%) subsets, with performance evaluated using accuracy, precision, recall, and F1-score metrics. Our CNN model achieved a magnification-independent accuracy of 97.72%, with specific accuracies of 97.50% at 40×, 97.61% at 100×, 99.06% at 200×, and 97.25% at 400× magnification levels. These results demonstrate the model’s superior performance relative to existing methods. The integration of CNNs into diagnostic workflows can potentially reduce pathologist workload, minimize interpretation errors, and increase the availability of diagnostic testing, thereby improving BC management and patient survival rates. This study highlights the effectiveness of deep learning in automating BC histopathological classification and underscores the potential for AI-driven diagnostic solutions to improve patient care.
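The evaluation metrics this abstract reports follow the standard binary-classification definitions, which can be computed directly from confusion-matrix counts. The counts below are hypothetical, not from the BreakHis experiments.

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many are correct
    recall = tp / (tp + fn)             # of actual positives, how many are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts from a benign/malignant test split
acc, prec, rec, f1 = binary_metrics(tp=90, fp=5, fn=10, tn=95)
print(round(acc, 3), round(f1, 3))   # → 0.925 0.923
```

Reporting all four together matters here because class imbalance in histopathology data can make accuracy alone misleading.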
Funding: Supported by the National Natural Science Foundation of China (Nos. 62176122 and 62061146002).
Abstract: Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and solving the data isolation problem faced by centralized GNNs in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored. It is still unclear whether the privacy leakage risks of centralized GNNs will also be introduced in FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. Compared with prior works on centralized GNNs, in PIAFGNN the attacker can only obtain the global embedding gradient distributed by the central server. The attacker converts the task of stealing the target user’s local embeddings into a regression problem, using a regression model to generate the target graph node embeddings. By training shadow models and property classifiers, the attacker can infer the basic property information within the target graph that is of interest. Experiments on three benchmark graph datasets demonstrate that PIAFGNN achieves attack accuracy of over 70% in most cases, even approaching the attack accuracy of inference attacks against centralized GNNs in some instances, which is much higher than that of the random guessing method. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without affecting the model’s performance on its main classification tasks.
Abstract: Accurate estimation of evapotranspiration (ET) is crucial for efficient water resource management, particularly in the face of climate change and increasing water scarcity. This study performs a bibliometric analysis of 352 articles and a systematic review of 35 peer-reviewed papers, selected according to PRISMA guidelines, to evaluate the performance of Hybrid Artificial Neural Networks (HANNs) in ET estimation. The findings demonstrate that HANNs, particularly those combining Multilayer Perceptrons (MLPs), Recurrent Neural Networks (RNNs), and Convolutional Neural Networks (CNNs), are highly effective in capturing the complex nonlinear relationships and temporal dependencies characteristic of hydrological processes. These hybrid models, often integrated with optimization algorithms and fuzzy logic frameworks, significantly improve the predictive accuracy and generalization capabilities of ET estimation. The growing adoption of advanced evaluation metrics, such as Kling-Gupta Efficiency (KGE) and Taylor diagrams, highlights the increasing demand for more robust performance assessments beyond traditional methods. Despite the promising results, challenges remain, particularly regarding model interpretability, computational efficiency, and data scarcity. Future research should prioritize the integration of interpretability techniques, such as attention mechanisms, Local Interpretable Model-Agnostic Explanations (LIME), and feature importance analysis, to enhance model transparency and foster stakeholder trust. Additionally, improving the scalability and computational efficiency of HANN models is crucial, especially for large-scale, real-world applications. Approaches such as transfer learning, parallel processing, and hyperparameter optimization will be essential in overcoming these challenges. This study underscores the transformative potential of HANN models for precise ET estimation, particularly in water-scarce and climate-vulnerable regions. By integrating CNNs for automatic feature extraction and leveraging hybrid architectures, HANNs offer considerable advantages for optimizing water management, particularly in agriculture. Addressing challenges related to interpretability and scalability will be vital to ensuring the widespread deployment and operational success of HANNs in global water resource management.
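The Kling-Gupta Efficiency mentioned above decomposes model skill into correlation, variability ratio, and bias ratio. Here is a minimal NumPy implementation of the standard formulation; it is a generic reference, not code from any of the reviewed studies.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(kge(obs, obs))   # a perfect simulation scores 1 (up to floating-point rounding)
```

Unlike a single correlation score, KGE penalizes a model that matches the shape of the observed series but gets its variance or mean wrong, which is why it is favored in hydrological evaluation.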
Funding: Supported by CUP (J53C22003010006, J43C24000230007) and ICREA 2019.
Abstract: Spiking neural networks (SNNs) represent a biologically inspired computational framework that bridges neuroscience and artificial intelligence, offering unique advantages in temporal data processing, energy efficiency, and real-time decision-making. This paper explores the evolution of SNN technologies, emphasizing their integration with advanced learning mechanisms such as spike-timing-dependent plasticity (STDP) and hybridization with deep learning architectures. Leveraging memristors as nanoscale synaptic devices, we demonstrate significant enhancements in energy efficiency, adaptability, and scalability, addressing key challenges in neuromorphic computing. Through phase portraits and nonlinear dynamics analysis, we validate the system’s stability and robustness under diverse workloads. These advancements position SNNs as a transformative technology for applications in robotics, IoT, and adaptive low-power AI systems, paving the way for future innovations in neuromorphic hardware and hybrid learning paradigms.
Funding: Supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2022JM-396); the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA23040101); Shaanxi Province Key Research and Development Projects (Program No. 2023-YBSF-437); the Xi'an Shiyou University Graduate Student Innovation Fund Program (Program No. YCX2412041); the State Key Laboratory of Air Traffic Management System and Technology (SKLATM202001); the Tianjin Education Commission Research Program Project (2020KJ028); and the Fundamental Research Funds for the Central Universities (3122019132).
Abstract: Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
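As a rough illustration of how a PSO loop searches a continuous space, here is a minimal global-best particle swarm optimizer minimizing a toy function standing in for a CNN validation loss. The inertia and acceleration coefficients are common textbook defaults; none of this reflects the paper's actual GPSCNN configuration.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # per-particle best positions
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()     # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull (pbest) + social pull (gbest)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a validation loss: sphere function with its minimum at the origin
best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=2)
print(best, best_val)
```

In the hybrid models described above, `objective` would instead train a CNN with the candidate hyperparameters and return its validation error, which is why such searches are expensive in practice.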
Fund: Funded by the Key Research and Development Program of Zhejiang Province (No. 2023C01141) and the Science and Technology Innovation Community Project of the Yangtze River Delta (No. 23002410100); supported by the Open Research Fund of the State Key Laboratory of Blockchain and Data Security, Zhejiang University.
Abstract: Graph Neural Networks (GNNs) have demonstrated outstanding capabilities in processing graph-structured data and are increasingly being integrated into large-scale pre-trained models, such as Large Language Models (LLMs), to enhance structural reasoning, knowledge retrieval, and memory management. This expanding application scope imposes higher requirements on the robustness of GNNs. However, as GNNs are applied to more dynamic and heterogeneous environments, they become increasingly vulnerable to real-world perturbations. In particular, graph data frequently encounters joint adversarial perturbations that simultaneously affect both structures and features, which are significantly more challenging than isolated attacks. These disruptions, caused by incomplete data, malicious attacks, or inherent noise, pose substantial threats to the stable and reliable performance of traditional GNN models. To address this issue, this study proposes the Dual-Shield Graph Neural Network (DSGNN), a defense model that simultaneously mitigates structural and feature perturbations. DSGNN utilizes two parallel GNN channels to independently process structural noise and feature noise, and introduces an adaptive fusion mechanism that integrates information from both pathways to generate robust node representations. Theoretical analysis demonstrates that DSGNN achieves a tighter robustness bound under joint perturbations than conventional single-channel methods. Experimental evaluations on the Cora, CiteSeer, and Industry datasets show that DSGNN achieves the highest average classification accuracy under various adversarial settings, reaching 81.24%, 71.94%, and 81.66%, respectively, outperforming GNNGuard, GCN-Jaccard, GCN-SVD, RGCN, and NoisyGNN. These results underscore the importance of multi-view perturbation decoupling in constructing resilient GNN models for real-world applications.
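The two-channel-plus-fusion idea in the abstract can be sketched in a few lines of NumPy. This is a minimal forward pass only, assuming GCN-style symmetric normalization and a scalar per-node sigmoid gate; the actual DSGNN architecture, its graph-cleaning steps, and its training procedure are not specified here, and all weight shapes are hypothetical.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, GCN-style."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def dual_shield_forward(A_struct, A_feat, X, Ws, Wf, w_gate):
    """Two parallel propagation channels — one over a structure-side
    adjacency, one over a feature-similarity graph — fused by a
    per-node sigmoid gate computed from both channel outputs."""
    Hs = np.maximum(normalize_adj(A_struct) @ X @ Ws, 0)  # structure channel
    Hf = np.maximum(normalize_adj(A_feat) @ X @ Wf, 0)    # feature channel
    gate = 1.0 / (1.0 + np.exp(-np.concatenate([Hs, Hf], axis=1) @ w_gate))
    return gate * Hs + (1.0 - gate) * Hf                  # adaptive fusion

# Toy graph: 5 nodes, random symmetric adjacency, hypothetical widths.
rng = np.random.default_rng(0)
n, d_in, d_out = 5, 4, 3
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T                                   # symmetric, no self loops
X = rng.standard_normal((n, d_in))
H = dual_shield_forward(A, A.copy(), X,
                        rng.standard_normal((d_in, d_out)),
                        rng.standard_normal((d_in, d_out)),
                        rng.standard_normal((2 * d_out, 1)))
```

Here the feature-similarity adjacency is just a copy of the structural one for brevity; in a real defense it would be rebuilt from node-feature similarity (e.g. a k-NN graph).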
Abstract: Spiking neural networks (SNNs) represent a paradigm shift toward discrete, event-driven neural computation that mirrors biological brain mechanisms. This survey systematically examines current SNN research, focusing on training methodologies, hardware implementations, and practical applications. We analyze four major training paradigms: ANN-to-SNN conversion, direct gradient-based training, spike-timing-dependent plasticity (STDP), and hybrid approaches. Our review encompasses the major specialized hardware platforms: Intel Loihi, IBM TrueNorth, SpiNNaker, and BrainScaleS, analyzing their capabilities and constraints. We survey applications spanning computer vision, robotics, edge computing, and brain-computer interfaces, identifying where SNNs provide compelling advantages. Our comparative analysis reveals that SNNs offer significant energy-efficiency improvements (1000-10000× reduction) and natural temporal processing, while facing challenges in scalability and training complexity. We identify critical research directions, including improved gradient estimation, standardized benchmarking protocols, and hardware-software co-design approaches. This survey provides researchers and practitioners with a comprehensive understanding of current SNN capabilities, limitations, and future prospects.
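The event-driven computation the survey describes is easiest to see in the basic unit all of these training paradigms operate on: a leaky integrate-and-fire (LIF) neuron. The following sketch uses a hard reset after each spike; the time constant, threshold, and input values are illustrative, not taken from any of the surveyed platforms.

```python
import numpy as np

def lif_simulate(currents, tau=20.0, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    exponentially, integrates the input current each step, and emits a
    spike (then resets to 0) whenever it crosses the threshold v_th."""
    decay = np.exp(-dt / tau)      # per-step leak factor
    v = 0.0
    spikes = []
    for i in currents:
        v = decay * v + i          # leak, then integrate input
        if v >= v_th:
            spikes.append(1)
            v = 0.0                # hard reset after spiking
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive produces a regular spike train; no drive, no spikes.
train = lif_simulate(np.full(50, 0.2))
silent = lif_simulate(np.zeros(50))
```

The discrete, binary spike train is what makes direct gradient-based training hard (the spike is non-differentiable) and what surrogate-gradient methods smooth over.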
Fund: Supported by the National Natural Science Foundation of China (U2241285).
Abstract: Accurate and efficient prediction of the distribution of surface loads on buildings subjected to explosive effects is crucial for rapidly calculating structural dynamic responses, establishing effective protective measures, and designing civil defense engineering solutions. Current state-of-the-art methods face several issues: experimental research is difficult and costly to implement, theoretical research is limited to simple geometries and lacks precision, and direct simulations require substantial computational resources. To address these challenges, this paper presents a data-driven method for predicting blast loads on building surfaces. The approach improves both the accuracy and the computational efficiency of load predictions when the geometry of the building changes while the explosive yield remains constant, significantly improving its applicability in complex scenarios. This study introduces an innovative encoder-decoder graph neural network model named BlastGraphNet, which uses a message-passing mechanism to predict the overpressure and impulse load distributions on buildings with conventional and complex geometries during explosive events. The model also facilitates related downstream applications, such as damage mode identification and rapid assessment of virtual city explosions. The calculation results indicate that the prediction error of the model for conventional building tests is less than 2%, and its inference speed is 3-4 orders of magnitude faster than that of state-of-the-art numerical methods. In extreme test cases involving buildings with complex geometries and building clusters, the method achieved high accuracy and excellent generalizability. The strong adaptability and generalizability of BlastGraphNet confirm that this novel method enables precise real-time prediction of blast loads and provides a new paradigm for damage assessment in protective engineering.
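BlastGraphNet's exact architecture is not given in the abstract; the sketch below shows only the generic message-passing step that encoder-decoder GNNs of this kind stack between an encoder and a decoder. The graph, latent width, and weight shapes are hypothetical stand-ins for a surface mesh of a building.

```python
import numpy as np

def mp_layer(node_h, edges, W_msg, W_upd):
    """One generic message-passing step: a message is computed per
    directed edge from the (sender, receiver) states, summed at the
    receiver, and combined with each node's state to update it."""
    n, d = node_h.shape
    agg = np.zeros((n, d))
    for s, r in edges:
        msg = np.tanh(np.concatenate([node_h[s], node_h[r]]) @ W_msg)
        agg[r] += msg                        # sum messages at receiver
    return np.tanh(np.concatenate([node_h, agg], axis=1) @ W_upd)

# Toy "mesh": 4 surface nodes in a ring, edges in both directions,
# hypothetical latent width 6, a stack of 3 processor steps.
rng = np.random.default_rng(0)
d = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 0),
         (1, 0), (2, 1), (3, 2), (0, 3)]
h = rng.standard_normal((4, d))
W_msg = rng.standard_normal((2 * d, d)) * 0.5
W_upd = rng.standard_normal((2 * d, d)) * 0.5
for _ in range(3):
    h = mp_layer(h, edges, W_msg, W_upd)
```

In a load-prediction setting, a decoder head would then map each node's final state `h[i]` to per-node outputs such as peak overpressure and impulse.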