Satellite-terrestrial networks can transcend the geographical constraints inherent in traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which makes them an important development direction for future communications. In this paper, we consider a multi-scenario network model under the coverage of a low earth orbit (LEO) satellite, which can provide computing resources to users in remote areas to improve task processing efficiency. However, LEO satellites have limited computing and communication resources, and the channels are time-varying and complex, which makes extracting state information difficult. We therefore study the dynamic resource management problem of joint computing and communication resource allocation and power control for multi-access edge computing (MEC). To tackle this problem, we transform it into a Markov decision process (MDP) and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state-information features to enhance the training process. Simulation results show that the proposed algorithm effectively reduces the long-term average delay and energy consumption of the tasks.
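The abstract does not give the internals of SABDRM; purely as an illustration, a minimal scaled dot-product self-attention pass over a sequence of state vectors (all dimensions and weights hypothetical) can be sketched as:

```python
import numpy as np

def self_attention(states, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of state vectors."""
    q, k, v = states @ w_q, states @ w_k, states @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise relevance between states
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability before softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights                   # attended features and attention map

rng = np.random.default_rng(0)
states = rng.normal(size=(6, 8))                  # 6 time steps, 8 state features (hypothetical)
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
features, weights = self_attention(states, w_q, w_k, w_v)
```

Each row of the attention map is a distribution over time steps, so the extracted feature for one step can draw on every other observed state.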
Reliable traffic flow prediction is crucial for mitigating urban congestion. This paper proposes the Attention-based spatiotemporal Interactive Dynamic Graph Convolutional Network (AIDGCN), a novel architecture integrating an Interactive Dynamic Graph Convolution Network (IDGCN) with temporal multi-head trend-aware attention. Its core innovations are IDGCN, which splits sequences into symmetric intervals for interactive feature sharing via dynamic graphs, and an attention mechanism incorporating convolutional operations to capture essential local traffic trends, addressing a critical gap in standard attention for continuous data. For 15- and 60-min forecasting on METR-LA, AIDGCN achieves MAEs of 0.75% and 0.39%, and RMSEs of 1.32% and 0.14%, respectively. In 60-min long-term forecasting on the PEMS-BAY dataset, AIDGCN outperforms the MRA-BGCN method by 6.28%, 4.93%, and 7.17% in terms of MAE, RMSE, and MAPE, respectively. Experimental results demonstrate the superiority of the proposed model over state-of-the-art methods.
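The trend-aware idea, smoothing queries and keys with a small convolution so attention scores reflect local trends rather than single noisy readings, can be sketched as follows (kernel and sizes are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def trend_filter(x, kernel=(0.25, 0.5, 0.25)):
    """Depthwise 1-D convolution along time: each feature sees its local trend."""
    k = np.asarray(kernel)
    pad = len(k) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")  # edge-pad to keep length
    return np.stack([np.convolve(xp[:, j], k, mode="valid")
                     for j in range(x.shape[1])], axis=1)

def trend_aware_attention(x):
    """Attention whose queries/keys are trend-smoothed; values keep the raw signal."""
    q = k = trend_filter(x)
    scores = q @ k.T / np.sqrt(x.shape[1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ x

rng = np.random.default_rng(0)
series = rng.normal(size=(12, 4))   # 12 time steps, 4 detector features (hypothetical)
out = trend_aware_attention(series)
```

Because the smoothing kernel sums to one, a constant series passes through the trend filter unchanged, which is the sanity check used below.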
The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. Comparative analysis against baseline models indicates that DMST-GNODE delivers superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend across 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, maintaining superiority across all time intervals. These results indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
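A graph neural ODE treats node features as a continuous-time trajectory driven by a graph-convolutional vector field. A toy sketch with explicit Euler integration (the vector field, adjacency, and weights below are illustrative assumptions, not the DMST-GNODE formulation):

```python
import numpy as np

def normalized_adjacency(a):
    """Symmetric GCN-style normalization D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def integrate_gnode(h0, a_norm, w, t_end=1.0, steps=50):
    """Explicit-Euler integration of a toy graph-ODE field dh/dt = A_norm h W - h."""
    h, dt = h0.copy(), t_end / steps
    for _ in range(steps):
        h = h + dt * (a_norm @ h @ w - h)
    return h

a = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-sensor path graph
rng = np.random.default_rng(1)
h_final = integrate_gnode(rng.normal(size=(3, 4)),
                          normalized_adjacency(a), np.eye(4) * 0.5)
```

In practice an adaptive ODE solver replaces the fixed-step Euler loop, but the structure, a learned vector field evaluated repeatedly over continuous time, is the same.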
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in a landscape of increasingly sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure. Furthermore, an integrated graph neural network is used to address the structural and topological changes of dynamic graphs at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is transformed into node classification by employing a line graph representation, which facilitates fine-grained intrusion detection on dynamic graph node features. The efficacy of the proposed method is evaluated on two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and the results are compared with previous studies. The experimental results demonstrate that the proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
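The line-graph step admits a compact illustration: each traffic flow (an edge between two hosts) becomes a node, and two flow-nodes are connected whenever the original edges share a host, so edge classification becomes ordinary node classification. A minimal sketch with hypothetical host names:

```python
def line_graph(edges):
    """Each original edge becomes a node of the line graph; two edge-nodes are
    linked when the original edges share an endpoint (i.e., a common host)."""
    lg = {i: set() for i in range(len(edges))}
    for i, (u1, v1) in enumerate(edges):
        for j, (u2, v2) in enumerate(edges):
            if i < j and {u1, v1} & {u2, v2}:
                lg[i].add(j)
                lg[j].add(i)
    return lg

# Hypothetical flows: three hosts in a triangle plus one isolated connection.
flows = [("hostA", "hostB"), ("hostB", "hostC"), ("hostC", "hostA"), ("hostD", "hostE")]
lg = line_graph(flows)
# Flow 0 touches flows 1 and 2 via shared hosts; flow 3 is isolated.
```

A node classifier run on `lg` then labels each flow as benign or malicious using the features of neighbouring flows.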
Real-time path optimization for heterogeneous vehicle fleets in large-scale road networks presents significant challenges due to conflicting traffic demands and imbalanced resource allocation. While existing vehicle-to-infrastructure coordination frameworks partially address congestion mitigation, they often neglect priority-aware optimization and exhibit algorithmic bias toward dominant vehicle classes, critical limitations in mixed-priority scenarios involving emergency vehicles. To bridge this gap, this study proposes a preference game-theoretic coordination framework with an adaptive strategy transfer protocol, explicitly balancing system-wide efficiency (measured by network throughput) with priority vehicle rights protection (quantified via time-sensitive utility functions). The approach combines (1) a multi-vehicle dynamic routing model with quantifiable preference weights, and (2) a distributed Nash equilibrium solver updated using replicator sub-dynamic models. The framework was evaluated on an urban road network containing 25 intersections with mixed priority ratios (10%-30% of vehicles with priority access demand) and showed consistent benefits over four benchmarks (the social routing algorithm, the shortest path algorithm, the comprehensive path optimisation model, and the emergency vehicle timing collaborative evolution path optimization method). Results show that across different traffic demand configurations, the proposed method reduces the average vehicle traveling time by at least 365 s, increases road network throughput by 48.61%, and effectively balances road loads. This approach meets the diverse traffic demands of various vehicle types while optimizing road resource allocation. The proposed coordination paradigm advances theoretical foundations for fairness-aware traffic optimization while offering implementable strategies for next-generation cooperative vehicle-road systems, particularly in smart city deployments requiring mixed-priority mobility guarantees.
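Replicator dynamics, the update rule behind the equilibrium solver above, grow a strategy's population share when its payoff beats the population average. A minimal two-route sketch (the payoff matrix is a hypothetical anti-coordination example, not the paper's model):

```python
import numpy as np

def replicator_step(x, payoff, dt=0.1):
    """Strategy share x_i grows when its fitness exceeds the population average."""
    fitness = payoff @ x
    return x + dt * x * (fitness - x @ fitness)

# Hypothetical 2-route payoffs: each route is better when the other is crowded.
payoff = np.array([[0.0, 2.0],
                   [1.0, 0.0]])
x = np.array([0.5, 0.5])          # initial route shares
for _ in range(300):
    x = replicator_step(x, payoff)
# x converges to the mixed equilibrium (2/3, 1/3) of this game
```

The update conserves the total share exactly (the average-fitness term cancels), which makes it well suited to distributed per-intersection implementations.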
Social interaction with peer pressure is widely studied in social network analysis. Game theory can be used to model dynamic social interaction, and one class of game network models assumes that people's decision payoff functions hinge on individual covariates and the choices of their friends. However, peer pressure can be misidentified, inducing a non-negligible bias, when incomplete covariates are involved in the game model. For this reason, we develop a generalized constant peer effects model based on homogeneity structure in dynamic social networks. The new model can effectively avoid bias through homogeneity pursuit and can be applied to a wider range of scenarios. To estimate peer pressure in the model, we first present two algorithms, based on the initialize-expand-merge method and the polynomial-time two-stage method, to estimate the homogeneity parameters. We then apply the nested pseudo-likelihood method to obtain consistent estimators of peer pressure. Simulation evaluations show that the proposed methodology achieves desirable results in terms of the community misclassification rate and parameter estimation error. We also illustrate the advantages of our model in an empirical analysis against a benchmark model.
Lightweight convolutional neural networks (CNNs) have simple structures but struggle to comprehensively and accurately extract important semantic information from images. While attention mechanisms can enhance CNNs by learning distinctive representations, most existing spatial and hybrid attention methods focus on local regions with extensive parameters, making them unsuitable for lightweight CNNs. In this paper, we propose a self-attention mechanism tailored for lightweight networks, namely the brief self-attention module (BSAM). BSAM consists of a brief spatial attention (BSA) block and an advanced channel attention block. Unlike conventional self-attention methods with many parameters, the BSA block improves the performance of lightweight networks by effectively learning global semantic representations. Moreover, BSAM can be seamlessly integrated into lightweight CNNs for end-to-end training, maintaining the network's lightweight and mobile characteristics. We validate the effectiveness of the proposed method on image classification tasks using the Food-101, Caltech-256, and Mini-ImageNet datasets.
For image compressed sensing reconstruction, most algorithms reconstruct image blocks one by one and stack many convolutional layers, which typically leads to obvious block artifacts, high computational complexity, and long reconstruction times. We propose an image compressed sensing reconstruction network based on the self-attention mechanism (SAMNet). For compressed sampling, a self-attention convolution is designed to capture richer features, so that the compressed sensing measurements retain more image structure information. For reconstruction, a self-attention mechanism is introduced into the convolutional neural network, and a reconstruction network comprising residual blocks, a bottleneck transformer (BoTNet), and dense blocks is proposed, which strengthens the transfer of image features and dramatically reduces the number of parameters. On the Set5 dataset, at measurement rates of 0.01, 0.04, 0.10, and 0.25, the average peak signal-to-noise ratio (PSNR) of SAMNet improves by 1.27, 1.23, 0.50, and 0.15 dB, respectively, over CSNet+. The running time for reconstructing a 256×256 image is reduced by 0.1473, 0.1789, 0.2310, and 0.2524 s compared to ReconNet. Experimental results show that SAMNet improves the quality of reconstructed images and reduces reconstruction time.
As the complexity of scientific satellite missions increases, the requirements on their magnetic fields, magnetic field fluctuations, and even magnetic field gradients and variations become increasingly stringent. Additionally, there is a growing need to address the alternating magnetic fields produced by the spacecraft itself. This paper introduces a novel modeling method for spacecraft magnetic dipoles using an integrated self-attention mechanism and a transformer combined with Kolmogorov-Arnold Networks. The self-attention mechanism captures correlations among globally sparse data, establishing dependencies between sparse magnetometer readings. Concurrently, the Kolmogorov-Arnold Network, proficient in modeling implicit numerical relationships between data features, enhances the ability to learn subtle patterns. Comparative experiments validate the capability of the proposed method to precisely model magnetic dipoles, achieving maximum root mean square errors of 24.06 mA·m² and 0.32 cm for size and location modeling, respectively. The spacecraft magnetic model established with this method accurately computes magnetic fields and alternating magnetic fields at designated surfaces or points. This approach facilitates the rapid and precise construction of individual and complete spacecraft magnetic models, enabling the verification of magnetic specifications from the spacecraft design phase onward.
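For reference, the field such a dipole model ultimately evaluates at designated surfaces or points follows the standard point-dipole formula B(r) = μ₀/(4π) · (3(m·r̂)r̂ − m)/|r|³. A short sketch in SI units:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r_dipole, r_obs):
    """Magnetic flux density of a point dipole m (A*m^2) at observation point r_obs."""
    r = np.asarray(r_obs, float) - np.asarray(r_dipole, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / d**3

# On the dipole axis at 1 m from a unit z-directed dipole, B = 2e-7 T in z.
b_axis = dipole_field(np.array([0.0, 0.0, 1.0]), np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

Summing this field over all fitted dipoles gives the spacecraft-level field at any verification point.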
The development of deep learning has made non-biochemical methods for molecular property prediction and screening a reality, which can increase experimental speed and reduce experimental cost. There are currently two main approaches to representing molecules: (a) fixed molecular descriptors, and (b) graph convolutional neural networks. Both representation methods have achieved results in their respective experiments. Building on past efforts, we propose a Dual Self-attention Fusion Message Neural Network (DSFMNN), which combines a dual self-attention mechanism with a graph convolutional neural network. DSFMNN has two advantages: (1) the dual self-attention mechanism focuses not only on the relationships between individual subunits in a molecule but also on the relationships between the atoms and chemical bonds contained in each subunit; (2) on the directed molecular graph, a message delivery approach centered on directed molecular bonds is used. We test the model on eight publicly available datasets and compare its performance with several models. Based on the current experimental results, DSFMNN outperforms previous models on the datasets applied in this paper.
Milling force is key to understanding the cutting mechanism and controlling the machining process. Traditional milling force models have limited prediction accuracy due to their simplified conditions and the incomplete knowledge used for model construction. On the other hand, lacking guidance from physics, data-driven models are not interpretable, making them challenging to generalize to practical applications. To address these difficulties, a deep network model guided by milling dynamics is proposed in this study to predict the instantaneous milling force and spindle vibration under varying cutting conditions. The model uses a milling dynamics model to generate data sets to pre-train the deep network and then integrates experimental data for fine-tuning, improving the model's generalization and accuracy. Additionally, the vibration equation is incorporated into the loss function as a physical constraint, enhancing the model's interpretability. A milling experiment is conducted to validate the effectiveness of the proposed model, and the results indicate that the incorporated physics improves the network's learning capability and interpretability. The predicted results are in good agreement with the measured values, with an average error as low as 2.6705%. The prediction accuracy is increased by 24.4367% compared to the pure data-driven model.
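The abstract does not spell out the loss; a hedged sketch of the general idea, a vibration-equation residual added to the data term, using a single-degree-of-freedom oscillator m·x'' + c·x' + k·x = F as an assumed simplification of the milling dynamics:

```python
import numpy as np

def physics_loss(x_pred, x_meas, force, dt, m, c, k, lam=0.1):
    """Data MSE plus the residual of m x'' + c x' + k x = F as a soft physics term."""
    data = np.mean((x_pred - x_meas) ** 2)
    v = np.gradient(x_pred, dt)      # finite-difference velocity
    a = np.gradient(v, dt)           # finite-difference acceleration
    residual = m * a + c * v + k * x_pred - force
    return data + lam * np.mean(residual ** 2)

# Free vibration x = cos(w t) satisfies x'' + w^2 x = 0, so the correct
# stiffness k = w^2 yields a far smaller physics residual than a wrong one.
t = np.linspace(0.0, 1.0, 1001)
w = 2.0 * np.pi
x = np.cos(w * t)
loss_good = physics_loss(x, x, np.zeros_like(t), t[1] - t[0], 1.0, 0.0, w**2)
loss_bad = physics_loss(x, x, np.zeros_like(t), t[1] - t[0], 1.0, 0.0, 2 * w**2)
```

In training, gradients of the residual term steer the network toward predictions that satisfy the dynamics even where measurements are sparse.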
Underground engineering in extreme environments necessitates understanding rock mechanical behavior under coupled high-temperature and dynamic loading conditions. This study presents a multi-scale cross-platform PFC-FDEM coupling methodology that bridges microscopic thermal damage mechanisms with macroscopic dynamic fracture responses. The coupling framework introduces: (1) bidirectional information transfer protocols enabling seamless integration between PFC's particle-scale thermal damage characterization and FDEM's continuum-scale fracture propagation; (2) multi-physics mapping algorithms that preserve crack network geometric invariants during scale transitions; and (3) cross-platform cohesive zone implementations for accurate SHTB dynamic loading simulation. The coupled approach reveals distinct three-stage crack evolution characteristics, with temperature-dependent crack density following an exponential model. High-temperature exposure significantly reduces the dynamic strength ratio (60% at 800 ℃) and diminishes strain-rate sensitivity, with the dynamic increase factor decreasing from 1.0-2.2 at 25 ℃ to 1.0-1.3 at 800 ℃. Critically, the coupling methodology captures fundamental energy redistribution mechanisms: thermal crack networks reduce the elastic energy proportion from 75% to 35% while increasing the fracture energy proportion from 5% to 30%. Numerical predictions show excellent agreement with experiments (peak stress-strain errors within ±8%), validating the accuracy of the PFC-FDEM coupling. This integrated framework provides essential computational tools for predicting complex thermal-mechanical rock behavior in underground engineering applications.
The integration of satellite communication networks and cellular networks has great potential to enable ubiquitous connectivity in future communication networks. Among the many related application scenarios, the direct connection of mobile phones to satellites has attracted increasing attention. However, spectrum scarcity in the sub-6 GHz band and low spectrum utilization prevent its popularization. To address these problems, this paper proposes a dynamic spectrum sharing method for satellite and cellular networks based on beam hopping. Specifically, we first develop a centralized dynamic spectrum sharing architecture based on beam hopping and propose a delay pre-compensation scheme for the beam hopping pattern. Then, an optimization problem is formulated to maximize the overall capacity of the integrated network, considering service requirements, fairness between beam positions, mixed co-channel interference, etc. To solve this problem, a polling-based dynamic resource allocation algorithm is proposed. Simulation results confirm that the proposed algorithm can effectively reduce the serious co-channel interference between different beams or different systems and improve the spectrum utilization rate as well as system capacity.
The successful application of perimeter control in urban traffic systems strongly depends on the macroscopic fundamental diagram of the targeted region. Despite intensive studies on the partitioning of urban road networks, the dynamic partitioning of urban regions to reflect the propagation of congestion remains an open question. This paper proposes to partition the network into homogeneous sub-regions based on a random walk algorithm. Starting from selected random walkers, the road network is partitioned from the early morning, when congestion emerges. A modified Akaike information criterion is defined to find the optimal number of partitions. Region boundary adjustment algorithms are adopted to optimize the partitioning results and further ensure the correlation of partitions. Traffic data from the city of Melbourne are used to verify the effectiveness of the proposed partitioning method.
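The paper's modified Akaike criterion is not specified in the abstract; as an assumed illustration of the general pattern, an AIC-style score can trade within-region speed variance against the number of regions, so a partition that separates congested from free-flowing links scores better:

```python
import numpy as np

def partition_score(speeds, labels):
    """AIC-style score: within-region residual variance plus a region-count penalty."""
    n = len(speeds)
    rss = sum(((speeds[labels == g] - speeds[labels == g].mean()) ** 2).sum()
              for g in np.unique(labels))
    return n * np.log(rss / n + 1e-12) + 2 * len(np.unique(labels))

# Link speeds with a congested cluster (~10 km/h) and a free-flow cluster (~60 km/h).
speeds = np.array([9.0, 10, 11, 10, 9, 59, 60, 61, 60, 59])
one_region = np.zeros(10, dtype=int)
two_regions = np.array([0] * 5 + [1] * 5)
```

Minimizing such a score over candidate partitions selects the number of homogeneous sub-regions.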
pH-sensitive hydrogels play a crucial role in applications such as soft robotics, drug delivery, and biomedical sensors, which require precise control of swelling behaviors and stress distributions. Traditional experimental methods struggle to capture stress distributions due to technical limitations, while numerical approaches are often computationally intensive. This study presents a hybrid framework combining analytical modeling and machine learning (ML) to overcome these challenges. An analytical model is used to simulate transient swelling behaviors and stress distributions, and its viability is confirmed by comparing the simulation results with existing experimental swelling data. The predictions from this model are used to train neural networks, including a two-step augmented architecture: the first network predicts hydration values, which are then fed into a second network to predict stress distributions, effectively capturing nonlinear interdependencies. This approach achieves mean absolute errors (MAEs) as low as 0.031, with average errors of 1.9% for the radial stress and 2.55% for the hoop stress. The framework significantly enhances predictive accuracy and reduces computational complexity, offering actionable insights for optimizing hydrogel-based systems.
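Purely as a structural sketch (all layer sizes, inputs, and weights are hypothetical and untrained), the two-step idea chains a hydration predictor into a stress predictor, so the second stage conditions on an intermediate physical quantity:

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp(x, w1, w2):
    """One hidden tanh layer; stands in for each stage of the two-step network."""
    return np.tanh(x @ w1) @ w2

# Stage 1 maps (pH, time) features to hydration; stage 2 maps the same
# features plus the predicted hydration to (radial, hoop) stress.
w_hydration = (rng.normal(size=(2, 16)) * 0.1, rng.normal(size=(16, 1)) * 0.1)
w_stress = (rng.normal(size=(3, 16)) * 0.1, rng.normal(size=(16, 2)) * 0.1)

features = rng.normal(size=(8, 2))                        # 8 sample points
hydration = mlp(features, *w_hydration)                   # intermediate quantity
stress = mlp(np.hstack([features, hydration]), *w_stress) # radial and hoop stress
```

Training the stages against the analytical model's outputs (hydration for stage 1, stresses for stage 2) would complete the pipeline described above.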
Recent advances in statistical physics highlight the significant potential of machine learning for phase transition recognition. This study introduces a deep learning framework based on graph neural networks to investigate non-equilibrium phase transitions, focusing on the directed percolation process. By converting lattices with varying dimensions and connectivity schemes into graph structures and embedding the temporal evolution of the percolation process into node features, our approach enables unified analysis across diverse systems. The framework combines a multi-layer graph attention mechanism with global pooling to autonomously extract critical features, from local dynamics to global phase transition signatures. The model successfully predicts percolation thresholds without relying on lattice geometry, demonstrating its robustness and versatility. Our approach not only offers new insights into phase transition studies but also provides a powerful tool for analyzing complex dynamical systems across various domains.
The rapid growth of low-Earth-orbit satellites has injected new vitality into future service provisioning. However, given the inherent volatility of network traffic, ensuring differentiated quality of service in highly dynamic networks remains a significant challenge. In this paper, we propose an online learning-based resource scheduling scheme for satellite-terrestrial integrated networks (STINs) aimed at providing on-demand services with minimal resource utilization. Specifically, we focus on: ① accurately characterizing the STIN channel; ② predicting resource demand with uncertainty guarantees; and ③ implementing mixed-timescale resource scheduling. For the STIN channel, we adopt the 3rd Generation Partnership Project channel and antenna models for non-terrestrial networks. We employ a one-dimensional convolution and attention-assisted long short-term memory architecture for average demand prediction, and introduce conformal prediction to mitigate uncertainties arising from burst traffic. Additionally, we develop a dual-timescale optimization framework that includes resource reservation on a larger timescale and resource adjustment on a smaller timescale. We also design an online resource scheduling algorithm based on online convex optimization to guarantee long-term performance with limited knowledge of time-varying network information. Based on a Network Simulator 3 implementation of the STIN channel in our high-fidelity satellite Internet simulation platform, numerical results on a real-world dataset demonstrate the accuracy and efficiency of the prediction algorithms and the online resource scheduling scheme.
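Split conformal prediction, the technique invoked above for burst-traffic uncertainty, admits a compact sketch: widen each point forecast by the finite-sample-corrected (1 - alpha) quantile of held-out calibration residuals (all numbers below are illustrative):

```python
import numpy as np

def conformal_interval(point_preds, cal_preds, cal_true, alpha=0.1):
    """Split conformal: calibration residual quantile sets a distribution-free band."""
    scores = np.abs(cal_true - cal_preds)               # nonconformity scores
    n = len(scores)
    # Finite-sample rank ceil((n+1)(1-alpha)), clipped to the sample maximum.
    k = min(n - 1, int(np.ceil((n + 1) * (1 - alpha))) - 1)
    q = np.sort(scores)[k]
    return point_preds - q, point_preds + q

# Hypothetical calibration set with residual magnitudes 0..9.
cal_preds = np.zeros(10)
cal_true = np.arange(10.0)
lo, hi = conformal_interval(np.array([5.0, 7.0]), cal_preds, cal_true)
```

Under exchangeability the resulting interval covers the true demand with probability at least 1 - alpha, which is what turns a point forecast into a reservation with an uncertainty guarantee.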
In order to solve the problem that the star point positioning accuracy of the star sensor in near space is decreased due to atmospheric background stray light and rapid maneuvering of the platform, this paper proposes a star point positioning algorithm based on a capsule network whose input and output are both vectors. First, a PCTL (Probability-Coordinate Transformation Layer) is designed to represent the mapping relationship between the probability output of the capsule network and the star point sub-pixel coordinates. Then, a CoordConv layer is introduced to implement explicit encoding of spatial information, and the probability is used as the centroid weight to achieve the conversion between probability and star point sub-pixel coordinates, which improves the network's ability to perceive star point positions. Finally, based on the dynamic imaging principle of star sensors and the characteristics of the near-space environment, a star map dataset for algorithm training and testing is constructed. The simulation results show that the proposed algorithm reduces the MAE (Mean Absolute Error) and RMSE (Root Mean Square Error) of star point positioning by 36.1% and 41.7% respectively compared with the traditional algorithm. The research results can provide important theoretical and technical support for the scheme design, performance specification, test, and evaluation of large dynamic star sensors in near space.
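The probability-to-coordinate conversion can be illustrated in simplified form as a probability-weighted centroid over an explicit coordinate grid (the 5×5 map below is a hypothetical example):

```python
import numpy as np

def prob_centroid(prob_map):
    """Sub-pixel star position: probabilities act as weights over a coordinate grid."""
    h, w = prob_map.shape
    ys, xs = np.mgrid[0:h, 0:w]      # explicit spatial encoding, as CoordConv provides
    total = prob_map.sum()
    return (prob_map * ys).sum() / total, (prob_map * xs).sum() / total

# A star whose probability mass is split over two adjacent pixels
# is localized between them, at sub-pixel precision.
p = np.zeros((5, 5))
p[2, 2] = 0.5
p[2, 3] = 0.5
y, x = prob_centroid(p)
```

Because the weights come from the capsule probabilities rather than raw pixel intensities, stray-light background contributes little to the centroid.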
Delay-aware routing is now widely used to provide efficient network transmission. However, for newly developing or developed mobile communication networks (MCN), only limited delay data can be obtained. In such a network, the delay carries epistemic uncertainty, which makes traditional routing schemes based on deterministic theory or probability theory inapplicable. Motivated by this problem, the MCN with epistemic uncertainty is first summarized as a dynamic uncertain network based on uncertainty theory, which is widely applied to model epistemic uncertainties. Then, by modeling the uncertain end-to-end delay, a new delay-bounded routing scheme is proposed to find the path with the maximum belief degree of satisfying the delay threshold in the dynamic uncertain network. Finally, a low-Earth-orbit satellite communication network (LEO-SCN) is used as a case to verify the effectiveness of our routing scheme. It is first modeled as a dynamic uncertain network, and then the delay-bounded paths with the maximum belief degree are computed and compared under different delay thresholds.
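The paper's belief-degree path search over uncertain end-to-end delays is not fully specified in the abstract; as a simplified stand-in, a widest-path variant of Dijkstra can maximize the minimum per-link belief degree of meeting the delay threshold (link beliefs below are hypothetical):

```python
import heapq

def max_belief_path(graph, src, dst):
    """Widest-path Dijkstra: maximize the minimum edge belief degree along the path.
    graph maps node -> list of (neighbor, belief) pairs."""
    best = {src: 1.0}
    heap = [(-1.0, src, [src])]
    while heap:
        neg_b, u, path = heapq.heappop(heap)
        b = -neg_b
        if u == dst:
            return b, path
        if b < best.get(u, 0.0):
            continue  # stale entry
        for v, edge_belief in graph.get(u, []):
            nb = min(b, edge_belief)  # path belief = bottleneck belief
            if nb > best.get(v, 0.0):
                best[v] = nb
                heapq.heappush(heap, (-nb, v, path + [v]))
    return 0.0, []

# Hypothetical uncertain network: per-link belief of meeting the delay threshold.
g = {"s": [("a", 0.9), ("b", 0.6)], "a": [("t", 0.7)], "b": [("t", 0.95)]}
belief, path = max_belief_path(g, "s", "t")
```

In the uncertainty-theoretic formulation the path belief is computed from the sum of uncertain link delays rather than a bottleneck, but the search skeleton, expanding the most believable partial path first, is the same.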
Funding: supported by the National Key Research and Development Plan (No. 2022YFB2902701) and the Key Natural Science Foundation of Shenzhen (No. JCYJ20220818102209020).
文摘The satellite-terrestrial networks possess the ability to transcend geographical constraints inherent in traditional communication networks,enabling global coverage and offering users ubiquitous computing power support,which is an important development direction of future communications.In this paper,we take into account a multi-scenario network model under the coverage of low earth orbit(LEO)satellite,which can provide computing resources to users in faraway areas to improve task processing efficiency.However,LEO satellites experience limitations in computing and communication resources and the channels are time-varying and complex,which makes the extraction of state information a daunting task.Therefore,we explore the dynamic resource management issue pertaining to joint computing,communication resource allocation and power control for multi-access edge computing(MEC).In order to tackle this formidable issue,we undertake the task of transforming the issue into a Markov decision process(MDP)problem and propose the self-attention based dynamic resource management(SABDRM)algorithm,which effectively extracts state information features to enhance the training process.Simulation results show that the proposed algorithm is capable of effectively reducing the long-term average delay and energy consumption of the tasks.
文摘Reliable traffic flow prediction is crucial for mitigating urban congestion.This paper proposes Attentionbased spatiotemporal Interactive Dynamic Graph Convolutional Network(AIDGCN),a novel architecture integrating Interactive Dynamic Graph Convolution Network(IDGCN)with Temporal Multi-Head Trend-Aware Attention.Its core innovation lies in IDGCN,which uniquely splits sequences into symmetric intervals for interactive feature sharing via dynamic graphs,and a novel attention mechanism incorporating convolutional operations to capture essential local traffic trends—addressing a critical gap in standard attention for continuous data.For 15-and 60-min forecasting on METR-LA,AIDGCN achieves MAEs of 0.75%and 0.39%,and RMSEs of 1.32%and 0.14%,respectively.In the 60-min long-term forecasting of the PEMS-BAY dataset,the AIDGCN out-performs the MRA-BGCN method by 6.28%,4.93%,and 7.17%in terms of MAE,RMSE,and MAPE,respectively.Experimental results demonstrate the superiority of our pro-posed model over state-of-the-art methods.
Abstract: The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. A comparative analysis against baseline models indicates that DMST-GNODE demonstrates superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend across 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, consistently maintaining superiority across all time intervals. These results indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
Abstract: The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in a landscape of increasingly sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph-construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to comprehensively capture the intricate relationships within the graph structure. Furthermore, an integrated graph neural network addresses the structural and topological changes of dynamic graphs at different time points, as well as the challenges of edge embedding in intrusion detection data. The edge classification problem is transformed into node classification by employing a line-graph data representation, which facilitates fine-grained intrusion detection on dynamic graph node feature representations. The efficacy of the proposed method is evaluated on two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and the results are compared with previous studies. The experimental results demonstrate that the proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model on several evaluation metrics.
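The line-graph idea above (each traffic flow becomes a node, so edge classification reduces to node classification) can be sketched in plain Python. The flow tuples below are illustrative only, not drawn from the paper's datasets.

```python
def line_graph(edges):
    """Build the line graph of a directed graph.

    Each original directed edge (a traffic flow between two hosts)
    becomes a node of the line graph; two edge-nodes are linked when
    the head of one edge is the tail of the other. Classifying flows
    is then a node-classification task on this graph.
    """
    nodes = list(edges)
    adj = []
    for (u1, v1) in edges:
        for (u2, v2) in edges:
            # Chain condition: first edge ends where second begins.
            if (u1, v1) != (u2, v2) and v1 == u2:
                adj.append(((u1, v1), (u2, v2)))
    return nodes, adj

# Hypothetical flows between three hosts A, B, C.
flows = [("A", "B"), ("B", "C"), ("A", "C")]
lg_nodes, lg_edges = line_graph(flows)
```

Only the pair ("A","B") → ("B","C") chains through a shared host, so the line graph has three nodes and a single link; real intrusion-detection graphs would of course be far larger and carry feature vectors on each edge-node.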
Funding: Funded by the National Key Research and Development Program, Project 2022YFB4300404.
Abstract: Real-time path optimization for heterogeneous vehicle fleets in large-scale road networks presents significant challenges due to conflicting traffic demands and imbalanced resource allocation. While existing vehicle-to-infrastructure coordination frameworks partially address congestion mitigation, they often neglect priority-aware optimization and exhibit algorithmic bias toward dominant vehicle classes, critical limitations in mixed-priority scenarios involving emergency vehicles. To bridge this gap, this study proposes a preference game-theoretic coordination framework with an adaptive strategy-transfer protocol that explicitly balances system-wide efficiency (measured by network throughput) with protection of priority-vehicle rights (quantified via time-sensitive utility functions). The approach combines (1) a multi-vehicle dynamic routing model with quantifiable preference weights and (2) a distributed Nash equilibrium solver updated with replicator sub-dynamic models. The framework was evaluated on an urban road network containing 25 intersections with mixed priority ratios (10%–30% of vehicles with priority-access demand) and showed consistent benefits over four benchmarks (a social routing algorithm, a shortest-path algorithm, a comprehensive path optimization model, and an emergency-vehicle timing collaborative-evolution path optimization method). Results show that across different traffic-demand configurations, the proposed method reduces average vehicle travel time by at least 365 s, increases road-network throughput by 48.61%, and effectively balances road loads, meeting the diverse traffic demands of various vehicle types while optimizing road-resource allocation. The proposed coordination paradigm advances theoretical foundations for fairness-aware traffic optimization while offering implementable strategies for next-generation cooperative vehicle-road systems, particularly in smart-city deployments requiring mixed-priority mobility guarantees.
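A replicator-dynamics update of the kind the abstract's Nash-equilibrium solver builds on can be illustrated as follows. The route payoffs, step size, and iteration count are invented for this toy example and are not taken from the study.

```python
def replicator_step(shares, payoffs, dt=0.1):
    """One discrete replicator-dynamics update.

    Routes (strategies) whose payoff exceeds the population-average
    payoff gain share; below-average routes lose share. Renormalizing
    keeps the shares a probability distribution.
    """
    avg = sum(s * p for s, p in zip(shares, payoffs))
    new = [s + dt * s * (p - avg) for s, p in zip(shares, payoffs)]
    total = sum(new)
    return [s / total for s in new]

# Three candidate routes with hypothetical fixed payoffs
# (e.g., negated, shifted travel times).
shares = [1 / 3, 1 / 3, 1 / 3]
payoffs = [3.0, 1.0, 2.0]
for _ in range(200):
    shares = replicator_step(shares, payoffs)
```

With static payoffs the population concentrates on the best route; in the paper's setting the payoffs would instead be congestion- and priority-dependent, so the dynamics settle at an equilibrium mix rather than a single route.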
Funding: Supported by the National Natural Science Foundation of China (71771201, 72531009, 71973001) and the USTC Research Funds of the Double First-Class Initiative (FSSF-A-240202).
Abstract: Social interaction under peer pressure is widely studied in social network analysis. Game theory can be used to model dynamic social interaction, and one class of game network models assumes that people's decision payoff functions hinge on individual covariates and the choices of their friends. However, peer pressure can be misidentified, inducing non-negligible bias, when incomplete covariates are involved in the game model. For this reason, we develop a generalized constant peer-effects model based on homogeneity structure in dynamic social networks. The new model effectively avoids bias through homogeneity pursuit and can be applied to a wider range of scenarios. To estimate peer pressure in the model, we first present two algorithms, based on the initialize-expand-merge method and the polynomial-time two-stage method, to estimate the homogeneity parameters. We then apply the nested pseudo-likelihood method and obtain consistent estimators of peer pressure. Simulation evaluations show that the proposed methodology achieves desirable and effective results in terms of community misclassification rate and parameter estimation error. We also illustrate the advantages of our model in an empirical analysis against a benchmark model.
Abstract: Lightweight convolutional neural networks (CNNs) have simple structures but struggle to comprehensively and accurately extract important semantic information from images. While attention mechanisms can enhance CNNs by learning distinctive representations, most existing spatial and hybrid attention methods focus on local regions and carry extensive parameters, making them unsuitable for lightweight CNNs. In this paper, we propose a self-attention mechanism tailored for lightweight networks, namely the brief self-attention module (BSAM). BSAM consists of a brief spatial attention (BSA) block and an advanced channel attention block. Unlike conventional self-attention methods with many parameters, the BSA block improves the performance of lightweight networks by effectively learning global semantic representations. Moreover, BSAM can be seamlessly integrated into lightweight CNNs for end-to-end training, preserving the network's lightweight and mobile characteristics. We validate the effectiveness of the proposed method on image classification tasks using the Food-101, Caltech-256, and Mini-ImageNet datasets.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61261016, 61661025) and the Science and Technology Plan of Gansu Province (No. 20JR10RA273).
Abstract: For image compressed sensing reconstruction, most algorithms reconstruct image blocks one by one and stack many convolutional layers, which typically causes obvious block artifacts, high computational complexity, and long reconstruction times. An image compressed sensing reconstruction network based on a self-attention mechanism (SAMNet) is proposed. For compressed sampling, a self-attention convolution was designed to capture richer features, so that the compressed sensing measurements retain more image structure information. For reconstruction, a self-attention mechanism was introduced into the convolutional neural network, and a reconstruction network comprising residual blocks, a bottleneck transformer (BoTNet), and dense blocks was proposed, which strengthens the transfer of image features and dramatically reduces the number of parameters. On the Set5 dataset, at measurement rates of 0.01, 0.04, 0.10, and 0.25, the average peak signal-to-noise ratio (PSNR) of SAMNet improves on CSNet+ by 1.27, 1.23, 0.50, and 0.15 dB, respectively, and the time to reconstruct a 256×256 image is reduced by 0.1473, 0.1789, 0.2310, and 0.2524 s compared to ReconNet. Experimental results show that SAMNet improves the quality of reconstructed images while reducing reconstruction time.
Funding: Supported by the National Key Research and Development Program of China (2020YFC2200901).
Abstract: As the complexity of scientific satellite missions increases, the requirements on their magnetic fields, magnetic-field fluctuations, and even magnetic-field gradients and variations become increasingly stringent. Additionally, there is a growing need to address the alternating magnetic fields produced by the spacecraft itself. This paper introduces a novel modeling method for spacecraft magnetic dipoles that integrates a self-attention mechanism and a transformer combined with Kolmogorov-Arnold Networks. The self-attention mechanism captures correlations among globally sparse data, establishing dependencies between sparse magnetometer readings. Concurrently, the Kolmogorov-Arnold Network, proficient at modeling implicit numerical relationships between data features, enhances the ability to learn subtle patterns. Comparative experiments validate the ability of the proposed method to precisely model magnetic dipoles, achieving maximum root mean square errors of 24.06 mA·m² and 0.32 cm for size and location modeling, respectively. The spacecraft magnetic model established with this method accurately computes magnetic fields and alternating magnetic fields at designated surfaces or points. This approach facilitates the rapid and precise construction of individual and complete spacecraft magnetic models, enabling verification of magnetic specifications from the spacecraft design phase.
Abstract: The development of deep learning has made non-biochemical screening via molecular property prediction a reality, which can increase the speed and reduce the cost of the relevant experiments. There are currently two main approaches to representing molecules: (a) fixed molecular descriptors, and (b) graph convolutional neural networks. Both representation methods have achieved results in their respective experiments. Building on past efforts, we propose a Dual Self-attention Fusion Message Neural Network (DSFMNN), which combines a dual self-attention mechanism with a graph convolutional neural network. DSFMNN has two advantages: (1) the dual self-attention mechanism focuses not only on the relationships between individual subunits in a molecule but also on the relationships between the atoms and chemical bonds contained in each subunit; (2) on the directed molecular graph, a message-passing approach centered on directed molecular bonds is used. We test the model on eight publicly available datasets and compare its performance with several models. Based on the current experimental results, DSFMNN outperforms previous models on the datasets applied in this paper.
Funding: Supported in part by the National Natural Science Foundation of China (52175528) and in part by the National Key Research and Development Program of China, Chinese Ministry of Science and Technology (2018YFB1703200).
Abstract: Milling force is key to understanding the cutting mechanism and controlling the machining process. Traditional milling-force models have limited prediction accuracy due to their simplifying assumptions and the incomplete knowledge available for model construction. On the other hand, lacking guidance from physics, data-driven models lack interpretability, making them hard to generalize to practical applications. To address these difficulties, a deep network model guided by milling dynamics is proposed in this study to predict instantaneous milling force and spindle vibration under varying cutting conditions. The model uses a milling-dynamics model to generate datasets that pre-train the deep network, and then integrates experimental data for fine-tuning to improve generalization and accuracy. Additionally, the vibration equation is incorporated into the loss function as a physical constraint, enhancing interpretability. A milling experiment validates the effectiveness of the proposed model, and the results indicate that the incorporated physics improves the network's learning capability and interpretability. The predicted results agree well with measured values, with an average error as low as 2.6705%; prediction accuracy is 24.4367% higher than that of the pure data-driven model.
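As a hedged sketch of how a vibration equation can enter a loss function as a soft physical constraint, the snippet below penalizes the residual of a single-degree-of-freedom equation m·x″ + c·x′ + k·x = F using central finite differences. The coefficients, weighting, and function names are illustrative; the paper's actual formulation is not reproduced here.

```python
def physics_residual_loss(x_pred, force_pred, t, m=1.0, c=0.5, k=100.0, lam=0.1):
    """Mean squared residual of m*x'' + c*x' + k*x - F over interior points.

    x_pred and force_pred are predicted displacement and force samples
    on the uniform time grid t. Adding lam * residual to a data loss
    softly constrains the network to respect the vibration equation.
    """
    dt = t[1] - t[0]
    residual = 0.0
    for i in range(1, len(t) - 1):
        x_dd = (x_pred[i + 1] - 2 * x_pred[i] + x_pred[i - 1]) / dt**2
        x_d = (x_pred[i + 1] - x_pred[i - 1]) / (2 * dt)
        r = m * x_dd + c * x_d + k * x_pred[i] - force_pred[i]
        residual += r * r
    return lam * residual / (len(t) - 2)

# Sanity check: x = sin(2t) exactly solves x'' + 4x = 0 (m=1, c=0, k=4),
# so with zero force the residual should be near machine/discretization noise.
import math
t = [i * 0.001 for i in range(1001)]
x = [math.sin(2 * ti) for ti in t]
f = [0.0] * len(t)
loss = physics_residual_loss(x, f, t, m=1.0, c=0.0, k=4.0, lam=1.0)
```

The same residual could equally be computed with automatic differentiation inside a training framework; finite differences are used here only to keep the sketch dependency-free.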
Funding: Supported by the National Natural Science Foundation of China (Nos. 12272411 and 42007259), the State Key Laboratory for GeoMechanics and Deep Underground Engineering, China University of Mining & Technology (No. SKLGDUEK2207), and the Department of Science and Technology of Shaanxi Province (Nos. 2022KXJ-107 and 2022JC-LHJJ-16).
Abstract: Underground engineering in extreme environments necessitates understanding rock mechanical behavior under coupled high-temperature and dynamic loading conditions. This study presents an innovative multi-scale, cross-platform PFC-FDEM coupling methodology that bridges microscopic thermal-damage mechanisms with macroscopic dynamic fracture responses. The coupling framework introduces: (1) bidirectional information-transfer protocols enabling seamless integration between PFC's particle-scale thermal-damage characterization and FDEM's continuum-scale fracture propagation; (2) multi-physics mapping algorithms that preserve crack-network geometric invariants during scale transitions; and (3) cross-platform cohesive-zone implementations for accurate SHTB dynamic-loading simulation. The coupled approach reveals distinct three-stage crack-evolution characteristics, with temperature-dependent crack density following an exponential model. High-temperature exposure significantly reduces the dynamic strength ratio (60% at 800 °C) and diminishes strain-rate sensitivity, with the dynamic increase factor decreasing from 1.0–2.2 (25 °C) to 1.0–1.3 (800 °C). Critically, the coupling methodology captures fundamental energy-redistribution mechanisms: thermal crack networks reduce the elastic-energy proportion from 75% to 35% while increasing the fracture-energy proportion from 5% to 30%. Numerical predictions show excellent agreement with experiments (peak stress-strain errors within ±8%), validating the accuracy of the PFC-FDEM coupling. This integrated framework provides essential computational tools for predicting complex thermal-mechanical rock behavior in underground engineering applications.
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2018YFA0701601, in part by the National Natural Science Foundation of China under Grants 61922049 and 61941104, and in part by the Tsinghua University and China Mobile Communications Group Co., Ltd. Joint Institute.
Abstract: The integration of satellite communication networks and cellular networks has great potential to enable ubiquitous connectivity in future communication networks. Among the many related application scenarios, the direct connection of mobile phones to satellites has attracted increasing attention. However, spectrum scarcity in the sub-6 GHz band and low spectrum utilization prevent its popularity. To address these problems, this paper proposes a dynamic spectrum-sharing method for satellite and cellular networks based on beam hopping. Specifically, we first develop a centralized dynamic spectrum-sharing architecture based on beam hopping and propose a delay pre-compensation scheme for the beam-hopping pattern. Then, an optimization problem is formulated to maximize the overall capacity of the integrated network while considering service requirements, fairness between beam positions, mixed co-channel interference, etc. To solve this problem, a polling-based dynamic resource allocation algorithm is proposed. Simulation results confirm that the proposed algorithm effectively reduces the serious co-channel interference between different beams and different systems, and improves the spectrum utilization rate as well as system capacity.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 12072340), the Chinese Scholarship Council, and the Australian Research Council through a linkage project fund.
Abstract: The successful application of perimeter control in urban traffic systems strongly depends on the macroscopic fundamental diagram of the targeted region. Despite intensive studies on partitioning urban road networks, the dynamic partitioning of urban regions to reflect the propagation of congestion remains an open question. This paper proposes partitioning the network into homogeneous sub-regions based on a random walk algorithm. Starting from selected random walkers, the road network is partitioned from the early morning, when congestion emerges. A modified Akaike information criterion is defined to find the optimal number of partitions, and region-boundary adjustment algorithms are adopted to optimize the partitioning results and further ensure the correlation of partitions. Traffic data from the city of Melbourne are used to verify the effectiveness of the proposed partitioning method.
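A toy version of an AIC-style criterion for choosing the number of homogeneous sub-regions might look like the following. The within-region variance term and the per-region penalty weight are invented for illustration; the paper's modified Akaike criterion is not specified here.

```python
import math

def partition_score(regions, penalty=2.0):
    """AIC-flavored score for a candidate partition: lower is better.

    regions is a list of sub-regions, each a list of link speeds.
    The fit term rewards homogeneous (low-variance) regions; the
    penalty term discourages splitting into too many regions.
    """
    n = sum(len(r) for r in regions)
    rss = 0.0
    for speeds in regions:
        mean = sum(speeds) / len(speeds)
        rss += sum((s - mean) ** 2 for s in speeds)
    return n * math.log(rss / n + 1e-12) + penalty * len(regions)

# Hypothetical link speeds (km/h): a congested cluster and a free-flow cluster.
congested = [12.0, 11.0, 13.0]
free_flow = [55.0, 57.0, 56.0]
score_two = partition_score([congested, free_flow])   # split into two regions
score_one = partition_score([congested + free_flow])  # keep as one region
```

Splitting the heterogeneous links into two homogeneous regions scores lower (better) than merging them, which is the qualitative behavior any such criterion must exhibit; the real criterion would additionally account for spatial connectivity.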
Abstract: pH-sensitive hydrogels play a crucial role in applications such as soft robotics, drug delivery, and biomedical sensors, which require precise control of swelling behaviors and stress distributions. Traditional experimental methods struggle to capture stress distributions due to technical limitations, while numerical approaches are often computationally intensive. This study presents a hybrid framework combining analytical modeling and machine learning (ML) to overcome these challenges. An analytical model is used to simulate transient swelling behaviors and stress distributions, and is validated by comparing its simulation results with existing experimental swelling data. The predictions from this model are used to train neural networks, including a two-step augmented architecture: an initial neural network predicts hydration values, which are then fed into a second network to predict stress distributions, effectively capturing nonlinear interdependencies. This approach achieves mean absolute errors (MAEs) as low as 0.031, with average errors of 1.9% for radial stress and 2.55% for hoop stress. The framework significantly enhances predictive accuracy, reduces computational complexity, and offers actionable insights for optimizing hydrogel-based systems.
Funding: Supported by the Fund from the Science and Technology Department of Henan Province, China (Grant Nos. 222102210233 and 232102210064), the National Natural Science Foundation of China (Grant Nos. 62373169 and 72474086), the Young and Mid-career Academic Leader program of Jiangsu Province, China (Qinglan Project in 2024), the National Statistical Science Research Project (Grant No. 2022LZ03), the Shaanxi Provincial Soft Science Project (Grant No. 2022KRM111), the Shaanxi Provincial Social Science Foundation (Grant No. 2022R016), the Special Project for Philosophical and Social Sciences Research in Shaanxi Province, China (Grant No. 2024QN018), and the Fund from the Henan Office of Philosophy and Social Science (Grant No. 2023CJJ112).
Abstract: Recent advances in statistical physics highlight the significant potential of machine learning for phase-transition recognition. This study introduces a deep learning framework based on graph neural networks to investigate non-equilibrium phase transitions, focusing specifically on the directed percolation process. By converting lattices with varying dimensions and connectivity schemes into graph structures and embedding the temporal evolution of the percolation process into node features, our approach enables unified analysis across diverse systems. The framework uses a multi-layer graph attention mechanism combined with global pooling to autonomously extract critical features, from local dynamics to global phase-transition signatures. The model successfully predicts percolation thresholds without relying on lattice geometry, demonstrating its robustness and versatility. Our approach not only offers new insights into phase-transition studies but also provides a powerful tool for analyzing complex dynamical systems across various domains.
Funding: Supported in part by the Major Program of the National Natural Science Foundation of China (62495021 and 62495020).
Abstract: The rapid growth of low-Earth-orbit satellites has injected new vitality into future service provisioning. However, given the inherent volatility of network traffic, ensuring differentiated quality of service in highly dynamic networks remains a significant challenge. In this paper, we propose an online learning-based resource scheduling scheme for satellite-terrestrial integrated networks (STINs), aimed at providing on-demand services with minimal resource utilization. Specifically, we focus on: (1) accurately characterizing the STIN channel; (2) predicting resource demand with uncertainty guarantees; and (3) implementing mixed-timescale resource scheduling. For the STIN channel, we adopt the 3rd Generation Partnership Project channel and antenna models for non-terrestrial networks. We employ a one-dimensional convolution and attention-assisted long short-term memory architecture for average demand prediction, while introducing conformal prediction to mitigate uncertainties arising from burst traffic. Additionally, we develop a dual-timescale optimization framework that includes resource reservation on the larger timescale and resource adjustment on the smaller timescale, and we design an online resource scheduling algorithm based on online convex optimization to guarantee long-term performance with limited knowledge of time-varying network information. Based on a Network Simulator 3 implementation of the STIN channel within our high-fidelity satellite Internet simulation platform, numerical results on a real-world dataset demonstrate the accuracy and efficiency of the prediction algorithms and the online resource scheduling scheme.
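Split conformal prediction, mentioned above for uncertainty-aware demand prediction, is simple enough to sketch generically: held-out residuals calibrate an interval around any point forecaster. The calibration data below are synthetic and the wrapper is generic, not the paper's implementation.

```python
import math
import random

def conformal_interval(calib_true, calib_pred, new_pred, alpha=0.1):
    """Split conformal prediction interval around a point forecast.

    Absolute residuals on a calibration set give nonconformity scores;
    the ceil((n+1)*(1-alpha))-th smallest score yields an interval with
    marginal coverage at least 1-alpha under exchangeability.
    """
    scores = sorted(abs(y - p) for y, p in zip(calib_true, calib_pred))
    n = len(scores)
    k = min(n, math.ceil((n + 1) * (1 - alpha))) - 1
    q = scores[k]
    return new_pred - q, new_pred + q

# Synthetic calibration data: a forecaster that predicts 10.0 while the
# truth fluctuates uniformly within +/-1 of it.
random.seed(0)
calib_true = [10.0 + random.uniform(-1, 1) for _ in range(99)]
calib_pred = [10.0] * 99
lo, hi = conformal_interval(calib_true, calib_pred, new_pred=42.0, alpha=0.1)
```

The appeal in a scheduling context is that the interval width comes from observed forecast errors rather than model internals, so a burst-prone demand series automatically widens the reserved-resource band.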
Abstract: To address the degradation of star-point positioning accuracy of star sensors in near space caused by atmospheric background stray light and rapid platform maneuvering, this paper proposes a star-point positioning algorithm based on a capsule network whose input and output are both vectors. First, a probability-coordinate transformation layer (PCTL) is designed to represent the mapping between the probability output of the capsule network and the star-point sub-pixel coordinates. Then, a CoordConv layer is introduced to explicitly encode spatial information, and the probability is used as the centroid weight to convert between probability and star-point sub-pixel coordinates, which improves the network's ability to perceive star-point positions. Finally, based on the dynamic imaging principle of star sensors and the characteristics of the near-space environment, a star-map dataset is constructed for algorithm training and testing. Simulation results show that the proposed algorithm reduces the mean absolute error (MAE) and root mean square error (RMSE) of star-point positioning by 36.1% and 41.7%, respectively, compared with the traditional algorithm. These results provide theoretical and technical support for the scheme design, requirement demonstration, testing, and evaluation of large-dynamic star sensors in near space.
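The probability-as-centroid-weight conversion from network output to sub-pixel coordinates can be illustrated generically. The window values, layout, and function name below are hypothetical, not the paper's PCTL.

```python
def weighted_centroid(prob_window, origin):
    """Sub-pixel coordinate from per-pixel probabilities.

    prob_window is a small 2D window of probabilities around a detected
    star; origin = (row, col) of the window's top-left pixel in the full
    image. Using the probabilities as centroid weights yields a
    sub-pixel (row, col) estimate.
    """
    total = wx = wy = 0.0
    oy, ox = origin
    for r, row in enumerate(prob_window):
        for c, p in enumerate(row):
            total += p
            wy += p * (oy + r)
            wx += p * (ox + c)
    return wy / total, wx / total

# A symmetric blob centered on the middle pixel of a 3x3 window whose
# top-left pixel sits at image coordinates (10, 20).
probs = [[0.05, 0.10, 0.05],
         [0.10, 0.40, 0.10],
         [0.05, 0.10, 0.05]]
y, x = weighted_centroid(probs, (10, 20))
```

For the symmetric window the centroid falls exactly on the central pixel, (11, 21); an asymmetric probability mass would shift the estimate by a sub-pixel amount, which is the effect the positioning accuracy metrics measure.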
Funding: Supported by the National Natural Science Foundation of China (61773044, 62073009) and the National Key Laboratory of Science and Technology on Reliability and Environmental Engineering (WDZC2019601A301).
Abstract: Delay-aware routing is now widely used to provide efficient network transmission. However, for newly developing or newly deployed mobile communication networks (MCNs), only limited delay data can be obtained. In such a network, the delay carries epistemic uncertainty, which makes traditional routing schemes based on deterministic theory or probability theory inapplicable. Motivated by this problem, an MCN with epistemic uncertainty is first summarized as a dynamic uncertain network based on uncertainty theory, which is widely applied to model epistemic uncertainties. Then, by modeling the uncertain end-to-end delay, a new delay-bounded routing scheme is proposed to find the path with the maximum belief degree of satisfying the delay threshold in the dynamic uncertain network. Finally, a low-Earth-orbit satellite communication network (LEO-SCN) is used as a case study to verify the effectiveness of the routing scheme: it is first modeled as a dynamic uncertain network, and then the delay-bounded paths with maximum belief degree are computed and compared under different delay thresholds.
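If one assumes, as a simplification, that a path's belief degree is the minimum of its link belief degrees (a common pattern in uncertainty theory), then finding the delay-bounded path with maximum belief becomes a widest-path (maximin) search. The sketch below is such a stand-in for the paper's scheme, on an invented toy network.

```python
import heapq

def max_belief_path(graph, src, dst):
    """Widest-path search over per-link belief degrees.

    graph maps each node to a list of (neighbor, belief) pairs, where
    belief is the belief degree that the link's delay meets the
    threshold. The belief of a path is taken as the minimum over its
    links, and a Dijkstra-like search maximizes that minimum.
    """
    best = {src: 1.0}
    heap = [(-1.0, src)]             # max-heap via negated beliefs
    while heap:
        nb, u = heapq.heappop(heap)
        b = -nb
        if u == dst:
            return b
        if b < best.get(u, 0.0):
            continue                  # stale entry
        for v, link_belief in graph.get(u, []):
            cand = min(b, link_belief)
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return 0.0                        # dst unreachable

# Toy network: two candidate paths S->A->D and S->B->D with
# hypothetical link belief degrees.
net = {"S": [("A", 0.9), ("B", 0.6)],
       "A": [("D", 0.7)],
       "B": [("D", 0.95)]}
belief = max_belief_path(net, "S", "D")
```

Here S→A→D yields min(0.9, 0.7) = 0.7 while S→B→D yields only 0.6, so the search returns 0.7; in a dynamic LEO-SCN the link beliefs would be recomputed as the topology changes.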