Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interferences. The Pseudo-Siamese structure-based disparity regression model simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate a significant improvement in the network's resistance to lighting interference without substantially increasing the parameter count. Moreover, the model exhibited faster convergence during training, contributing to overall performance enhancement. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
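As a concrete illustration, the sketch below shows the pyramid-dilated-convolution idea the abstract describes: parallel 3x3 convolutions at several dilation rates aggregate multi-scale context at constant resolution. Module names, channel widths, and dilation rates are illustrative assumptions, not the paper's configuration.

```python
# A minimal PyTorch sketch of a pyramid dilated convolution block; names and
# dilation rates are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class PyramidDilatedBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d,
                          bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ) for d in dilations)
        # Fuse the concatenated branches back to the input width.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

feats = torch.randn(1, 64, 120, 160)      # e.g. an endoscopic feature map
out = PyramidDilatedBlock(64)(feats)      # same spatial size, wider context
```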
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method combined with multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered even with sparse or noisy data.
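A minimal sketch of the task-splitting idea follows, for a plain delay equation u'(t) = f(u(t), u(t - tau)) with history phi on [-tau, 0]: one output head per sub-interval [k*tau, (k+1)*tau], plus continuity penalties at the delay-induced breaking points. The paper's auxiliary outputs for integral terms and its sequential training are omitted; f, phi, and all sizes here are assumptions.

```python
# Hedged sketch: multi-task PINN-style loss for a delay equation, with one
# head per delay interval and continuity terms at the breaking points.
import torch

tau, K = 1.0, 3                                  # delay and number of tasks
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, K))
phi = lambda t: torch.sin(t)                     # assumed history function
f = lambda u, u_d: -u + 0.5 * u_d                # assumed right-hand side

def mtl_loss():
    loss = torch.tensor(0.0)
    for k in range(K):
        t = ((torch.rand(64, 1) + k) * tau).requires_grad_(True)
        u = net(t)[:, k:k + 1]                   # task k solves interval k
        du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        u_delay = phi(t - tau) if k == 0 else net(t - tau)[:, k - 1:k]
        loss = loss + (du - f(u, u_delay)).pow(2).mean()
        if k > 0:                                # continuity at t = k*tau
            bp = torch.full((1, 1), k * tau)
            loss = loss + (net(bp)[:, k] - net(bp)[:, k - 1]).pow(2).mean()
    return loss
```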
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources on individual satellite nodes and a dynamic network topology, which bring many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks. It guarantees that services with high assurance demands are prioritized under limited satellite resources, while accounting for the load-balancing performance of the satellite network for services with low assurance demands, so that satellite resources are fully and effectively utilized. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
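For intuition, here is an illustrative tabular Q-learning update for next-hop selection on a small satellite graph. The reward shaping (destination bonus, congestion penalty) stands in for the paper's multi-QoS design and is not its exact formulation.

```python
# Toy Q-learning next-hop update; reward terms are illustrative assumptions.
import random
import numpy as np

n_sats = 8
Q = np.zeros((n_sats, n_sats))                    # Q[node, next_hop]
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(node, dest, neighbors, load):
    # epsilon-greedy choice among the node's current neighbors (topology varies)
    hop = random.choice(neighbors) if random.random() < eps \
        else max(neighbors, key=lambda n: Q[node, n])
    # reward: reach destination, otherwise pay a hop cost plus a load penalty
    r = 1.0 if hop == dest else -0.1 - 0.5 * load[hop]
    Q[node, hop] += alpha * (r + gamma * Q[hop].max() - Q[node, hop])
    return hop

load = np.random.rand(n_sats)                     # per-satellite link load
nxt = step(node=0, dest=5, neighbors=[1, 2, 3], load=load)
```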
Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as insecure communication, privacy concerns, and the presence of malicious nodes. Existing machine- and deep-learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND). It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
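The "share model updates, not patient data" step can be sketched as a FedAvg-style aggregation; the blockchain verification and cluster orchestration layers are omitted, and shapes are illustrative.

```python
# Minimal FedAvg-style aggregation sketch; not BFL-MND's full pipeline.
import numpy as np

def aggregate(client_weights, client_sizes):
    """Size-weighted average of per-cluster parameter lists."""
    total = sum(client_sizes)
    return [sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))]

# Each healthcare cluster trains locally and submits only its parameters:
w_a = [np.ones((4, 2)), np.zeros(2)]
w_b = [np.full((4, 2), 3.0), np.ones(2)]
global_w = aggregate([w_a, w_b], client_sizes=[100, 300])
```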
With the rapid development of network technologies, a large number of deployed edge devices and information systems generate massive amounts of data, which provide good support for the advancement of data-driven intelligent models. However, these data often contain sensitive user information. Federated learning (FL), as a privacy-preserving machine learning setting, allows users to obtain a well-trained model without sending privacy-sensitive local data to a central server. Despite the promising prospects of FL, several significant research challenges need to be addressed before widespread deployment, including network resource allocation, model security, and model convergence. In this paper, we first provide a brief survey of the work that has been done on FL in these areas and discuss the motivations for Communication Networks (CNs) and FL to mutually enable each other. We analyze the support of network technologies for FL, which requires frequent communication and emphasizes security, as well as studies that bring intelligence to various network scenarios and improve network performance and security with FL-based methods. Finally, some challenges and broader perspectives are explored.
6G is expected to support more intelligent networks, and this trend attaches importance to self-healing capabilities when degradation emerges in cellular networks. As a primary component of self-healing networks, fault detection is investigated in this paper. Considering its fast response and low time and computational consumption, the Online Broad Learning System (OBLS) is applied, for the first time, to identify outages in cellular networks. In addition, the Automatic-constructed Online Broad Learning System (AOBLS) is put forward to rationalize its structure and consequently avoid over-fitting and under-fitting. Furthermore, a multi-layer classification structure is proposed to further improve classification performance. To face the challenges caused by imbalanced data in fault detection problems, a novel weighting strategy is derived, yielding the Multilayer Automatic-constructed Weighted Online Broad Learning System (MAWOBLS) and, combined with ensemble learning over a retrained Support Vector Machine (SVM), the EMAWOBLS, for superior treatment of this imbalance issue. Simulation results show that the proposed algorithm has excellent performance in detecting faults with satisfactory time usage.
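The abstract derives its own weighting strategy for imbalanced fault data; as a point of reference, the snippet below shows the textbook inverse-frequency weighting that such strategies are usually measured against.

```python
# Baseline inverse-frequency class weighting (reference only; not the paper's
# derived strategy).
import numpy as np

def class_weights(labels):
    classes, counts = np.unique(labels, return_counts=True)
    w = counts.sum() / (len(classes) * counts)   # rarer class -> larger weight
    return dict(zip(classes.tolist(), w.tolist()))

print(class_weights(np.array([0] * 950 + [1] * 50)))   # {0: ~0.53, 1: 10.0}
```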
Zero Trust Network (ZTN) enhances network security through strict authentication and access control. However, in the ZTN, optimizing flow control to improve the quality of service still faces challenges. Software Defined Networking (SDN) provides solutions through centralized control and dynamic resource allocation, but existing scheduling methods based on Deep Reinforcement Learning (DRL) are insufficient in terms of convergence speed and dynamic optimization capability. To solve these problems, this paper proposes DRL-AMIR, an efficient flow scheduling method for software-defined ZTN. The method constructs a flow scheduling optimization model that comprehensively considers service delay, bandwidth occupation, and path hops. Additionally, it balances the differentiated requirements of delay-critical K-flows, bandwidth-intensive D-flows, and background B-flows through adaptive weighting. The proposed framework employs a customized state space comprising node labels, link bandwidth, delay metrics, and path length. It incorporates an action space derived from node weights and a hybrid reward function that integrates both single-step and multi-step excitation mechanisms. Based on these components, a hierarchical architecture is designed, effectively integrating the data plane, control plane, and knowledge plane. In particular, an adaptive expert mechanism is introduced, which triggers the shortest-path algorithm during training to accelerate convergence, reduce trial-and-error costs, and maintain stability. Experiments across diverse real-world network topologies demonstrate that DRL-AMIR achieves a 15–20% reduction in K-flow transmission delays, a 10–15% improvement in link bandwidth utilization compared to SPR, QoSR, and DRSIR, and a 30% faster convergence speed via the adaptive expert mechanism.
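A hypothetical shape for such a hybrid reward follows: per-step terms weighted by flow class (K/D/B), plus a terminal bonus standing in for the multi-step excitation. All weights and values are illustrative assumptions, not the paper's calibrated design.

```python
# Illustrative per-flow reward mixing delay, bandwidth, and hop terms.
def reward(flow_class, delay, bw_util, hops, reached_dest):
    w = {"K": (0.7, 0.2, 0.1),       # delay-critical: weight delay most
         "D": (0.2, 0.7, 0.1),       # bandwidth-intensive: weight bandwidth
         "B": (0.3, 0.3, 0.4)}[flow_class]
    step_r = -(w[0] * delay + w[1] * (1.0 - bw_util) + w[2] * hops)
    return step_r + (10.0 if reached_dest else 0.0)   # terminal bonus

print(reward("K", delay=0.3, bw_util=0.8, hops=4, reached_dest=True))
```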
With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks (MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence (AI) intrusion detection techniques turn intrusion detection into a classification problem, where AI excels. These techniques assume sufficient high-quality instances for model construction, which is often unrealistic in real-world operation with limited attack instances and constantly evolving characteristics. This paper proposes an Adaptive Personalized Federated learning (APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed (Non-IID) data are mitigated, enabling the intrusion detection model to possess personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks with effective adaptation to the MMSN environment. Finally, extensive experiments on a classical network dataset show that attack classification accuracy is improved by about 5% compared to most baselines in global scenarios.
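One plausible reading of "adaptive personalized update with a shared global classifier" is sketched below: classifier weights are taken from the global model, backbone weights are interpolated with an adaptive coefficient. The naming convention and interpolation rule are our assumptions, not APFed's published update.

```python
# Hedged sketch of a personalized-update rule; not APFed's exact algorithm.
import numpy as np

def personalized_update(local_params, global_params, mix):
    """mix in [0, 1]: larger means more global influence on the backbone."""
    merged = {}
    for name, w_local in local_params.items():
        if name.startswith("classifier."):       # shared global classifier
            merged[name] = global_params[name]
        else:                                    # personalized backbone
            merged[name] = (1.0 - mix) * w_local + mix * global_params[name]
    return merged

local = {"backbone.w": np.zeros(3), "classifier.w": np.zeros(3)}
glob = {"backbone.w": np.ones(3), "classifier.w": np.ones(3)}
print(personalized_update(local, glob, mix=0.3))
```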
Container-based virtualization technology has recently seen wider use in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied in providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning, according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
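The GCN ingredient can be condensed to one propagation step, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), applied here to a toy container-association graph; this is a generic sketch, not the paper's exact architecture.

```python
# One generic GCN propagation step over a container-association graph.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0.0]])   # 3 containers, 2 links
H = np.random.randn(3, 4)                           # per-container features
emb = gcn_layer(A, H, np.random.randn(4, 8))        # (3, 8) node embeddings
```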
Intelligent Transportation Systems (ITS) leverage Integrated Sensing and Communications (ISAC) to enhance data exchange between vehicles and infrastructure in the Internet of Vehicles (IoV). This integration inevitably increases computing demands, risking real-time system stability. Vehicle Edge Computing (VEC) addresses this by offloading tasks to Road Side Units (RSUs), ensuring timely services. Our previous work, the FLSimCo algorithm, which uses local resources for federated Self-Supervised Learning (SSL), has a limitation: vehicles often cannot complete all iteration tasks. Our improved algorithm offloads partial tasks to RSUs and optimizes energy consumption by adjusting transmission power, CPU frequency, and task assignment ratios, balancing local and RSU-based training. Meanwhile, setting an offloading threshold further prevents inefficiencies. Simulation results show that the enhanced algorithm reduces energy consumption and improves the offloading efficiency and accuracy of federated SSL.
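A common energy model matching the abstract's control knobs (CPU frequency, transmission power, task assignment ratio) is sketched below; the constants and the exhaustive ratio sweep are illustrative, not the paper's optimizer.

```python
# Illustrative local-vs-offload energy model; constants are assumptions.
def energy(ratio, cycles, f_local, p_tx, rate, bits, kappa=1e-27):
    e_local = kappa * f_local ** 2 * cycles * ratio   # dynamic CPU energy
    e_tx = p_tx * bits * (1.0 - ratio) / rate         # upload offloaded share
    return e_local + e_tx

# Sweep the assignment ratio to find the cheapest local/RSU split:
best_e, best_r = min((energy(r / 10, 1e9, 1.5e9, 0.2, 5e6, 2e6), r / 10)
                     for r in range(11))
```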
The rapid increase in the number of Internet of Things (IoT) devices, coupled with a rise in sophisticated cyberattacks, demands robust intrusion detection systems. This study presents a holistic, intelligent intrusion detection system. It uses a combined method that integrates machine learning (ML) and deep learning (DL) techniques to improve the protection of contemporary information technology (IT) systems. Unlike traditional signature-based or single-model methods, this system integrates the strengths of ensemble learning for binary classification and deep learning for multi-class classification. This combination provides a more nuanced and adaptable defense. The research utilizes the NF-UQ-NIDS-v2 dataset, a recent, comprehensive benchmark for evaluating network intrusion detection systems (NIDS). Our methodological framework employs advanced artificial intelligence techniques. Specifically, we use ensemble learning algorithms (Random Forest, Gradient Boosting, AdaBoost, and XGBoost) for binary classification. Deep learning architectures are also employed to address the complexities of multi-class classification, allowing for fine-grained identification of intrusion types. To mitigate class imbalance, a common problem in multi-class intrusion detection that biases model performance, we use oversampling and data augmentation. These techniques ensure equitable class representation. The results demonstrate the efficacy of the proposed hybrid ML-DL system: it achieves significant improvements in intrusion detection accuracy and reliability. This research contributes substantively to cybersecurity by providing a more robust and adaptable intrusion detection solution.
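A toy wiring of this two-stage idea is shown below: an ensemble flags traffic as malicious (binary), then a neural model names the attack family (multi-class). Data, model choices, and sizes are placeholders, not the paper's tuned pipeline.

```python
# Two-stage binary-then-multiclass sketch with stand-in data and models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X = np.random.randn(500, 20)                  # stand-in flow features
y_bin = np.random.randint(0, 2, 500)          # benign / malicious
y_cls = np.random.randint(0, 5, 500)          # attack family

binary = RandomForestClassifier(n_estimators=200).fit(X, y_bin)
multi = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(
    X[y_bin == 1], y_cls[y_bin == 1])

def detect(x):
    if binary.predict(x.reshape(1, -1))[0] == 0:
        return "benign"
    return f"attack_class_{multi.predict(x.reshape(1, -1))[0]}"
```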
Wireless Sensor Networks (WSNs) play a critical role in automated border surveillance systems, where continuous monitoring is essential. However, limited energy resources in sensor nodes lead to frequent network failures and reduced coverage over time. To address this issue, this paper presents an innovative energy-efficient protocol based on deep Q-learning (DQN), specifically developed to prolong the operational lifespan of WSNs used in border surveillance. By harnessing the adaptive power of DQN, the proposed protocol dynamically adjusts node activity and communication patterns. This approach ensures optimal energy usage while maintaining high coverage, connectivity, and data accuracy. The proposed system is modeled with 100 sensor nodes deployed over a 1000 m × 1000 m area, featuring a strategically positioned sink node. Our method outperforms traditional approaches, achieving significant enhancements in network lifetime and energy utilization. Through extensive simulations, it is observed that network lifetime increases by 9.75%, throughput increases by 8.85%, and average delay decreases by 9.45% in comparison with similar recent protocols. This demonstrates the robustness and efficiency of our protocol in real-world scenarios, highlighting its potential to revolutionize border surveillance operations.
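The DQN decision step for node activity scheduling can be sketched as follows; the action set and reward mix are illustrative, and q_net / target_net are assumed callables returning one Q-value per action.

```python
# DQN action selection and bootstrap target for duty-cycle scheduling (sketch).
import random
import numpy as np

ACTIONS = ["sleep", "listen", "transmit"]

def select_action(q_net, state, eps):
    if random.random() < eps:
        return random.randrange(len(ACTIONS))        # explore
    return int(np.argmax(q_net(state)))              # exploit learned policy

def td_target(target_net, reward, next_state, gamma=0.95, done=False):
    # Standard DQN bootstrap; the reward can blend residual energy, coverage,
    # and delivery success, as the abstract suggests.
    return reward if done else reward + gamma * float(np.max(target_net(next_state)))
```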
Due to their high mechanical compliance and excellent biocompatibility, conductive hydrogels exhibit significant potential for applications in flexible electronics. However, as the demand for high sensitivity, superior mechanical properties, and strong adhesion continues to grow, many conventional fabrication methods remain complex and costly. Herein, we propose a simple and efficient strategy to construct an entangled-network hydrogel through a liquid-metal-induced cross-linking reaction. The resulting hydrogel demonstrates outstanding properties, including exceptional stretchability (1643%), high tensile strength (366.54 kPa), toughness (350.2 kJ m^(−3)), and relatively low mechanical hysteresis. The hydrogel exhibits long-term stable, reusable adhesion (104 kPa), enabling conformal and stable adhesion to human skin. This capability allows it to effectively capture high-quality epidermal electrophysiological signals with a high signal-to-noise ratio (25.2 dB) and low impedance (310 ohms). Furthermore, by integrating advanced machine learning algorithms, the system achieves an attention classification accuracy of 91.38%, which will significantly impact fields like education, healthcare, and artificial intelligence.
Urban traffic prediction with high precision is the unremitting pursuit of intelligent transportation systems and is instrumental in bringing smart cities into reality. The fundamental challenges for traffic prediction lie in accurately modelling spatial and temporal traffic dynamics. Existing approaches mainly focus on modelling the traffic data itself, but do not explore the traffic correlations implicit in origin-destination (OD) data. In this paper, we propose STOD-Net, a dynamic spatial-temporal OD feature-enhanced deep network, to simultaneously predict the in-traffic and out-traffic for every region of a city. We model the OD data as dynamic graphs and adopt graph neural networks in STOD-Net to learn a low-dimensional representation for each region. Based on the region features, we design a gating mechanism and apply it to traffic feature learning to explicitly capture spatial correlations. To further capture the complicated spatial and temporal dependencies among different regions, we propose a novel joint feature learning block in STOD-Net and transfer the hybrid OD features to each block to make the learning process spatiotemporal-aware. We evaluate the effectiveness of STOD-Net on two benchmark datasets, and experimental results demonstrate that it outperforms the state of the art by approximately 5% in prediction accuracy and considerably improves prediction stability, by up to 80% in terms of standard deviation.
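A minimal sketch of gating traffic features with OD-derived region embeddings, in the spirit of STOD-Net's gating mechanism, follows; dimensions and names are illustrative.

```python
# Element-wise gating of traffic features by region embeddings (sketch).
import torch
import torch.nn as nn

class ODGate(nn.Module):
    def __init__(self, region_dim: int, traffic_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(region_dim, traffic_dim),
                                  nn.Sigmoid())

    def forward(self, traffic_feat, region_emb):
        # A gate in (0, 1) modulates traffic features per region.
        return traffic_feat * self.gate(region_emb)

gate = ODGate(region_dim=16, traffic_dim=32)
out = gate(torch.randn(64, 32), torch.randn(64, 16))   # 64 regions
```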
To address uncertainties in satellite orbit error prediction, this study proposes a novel ensemble-learning-based orbit prediction method specifically designed for the BeiDou navigation satellite system (BDS). Building on ephemeris data and perturbation corrections, two new models are proposed: an attention-enhanced BPNN (AEBP) and a Transformer-ResNet-BiLSTM (TR-BiLSTM). These models effectively capture both local and global dependencies in satellite orbit data. To further enhance prediction accuracy and stability, the outputs of the two models are integrated using the gradient boosting decision tree (GBDT) ensemble learning method, optimized through a grid search. The main contribution of this approach is the synergistic combination of deep learning models and GBDT, which significantly improves both the accuracy and robustness of satellite orbit predictions. The model was validated using broadcast ephemeris data from BDS-3 MEO and inclined geosynchronous orbit (IGSO) satellites. The results show that the proposed method achieves an error correction rate of 65.4%. This ensemble-learning-based approach offers a highly effective solution for high-precision, stable satellite orbit predictions.
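The ensemble step amounts to stacking: the two base models' predictions feed a grid-searched GBDT meta-learner, as sketched below with synthetic stand-ins for the AEBP and TR-BiLSTM outputs.

```python
# Stacking sketch: GBDT meta-learner over two base-model outputs (toy data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

pred_aebp = np.random.randn(200)                          # base model 1 output
pred_trbilstm = pred_aebp + 0.1 * np.random.randn(200)    # base model 2 output
y = pred_aebp + 0.05 * np.random.randn(200)               # "true" orbit error

X = np.column_stack([pred_aebp, pred_trbilstm])
grid = GridSearchCV(GradientBoostingRegressor(),
                    {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]},
                    cv=3)
grid.fit(X, y)
corrected = grid.predict(X)                               # ensemble output
```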
The rapid development of military technology has prompted different types of equipment to break the limits of operational domains and, through complex interactions, form a vast combat system of systems (CSoS), which can be abstracted as a heterogeneous combat network (HCN). It is of great military significance to study disintegration strategies for combat networks to achieve the breakdown of the enemy's CSoS. To this end, this paper proposes an integrated framework called HCN disintegration based on double deep Q-learning (HCN-DDQL). Firstly, the enemy's CSoS is abstracted as an HCN, and an evaluation index based on the capability and attack costs of nodes is proposed. Meanwhile, a mathematical optimization model for HCN disintegration is established. Secondly, the learning environment and double deep Q-network model of HCN-DDQL are established to train the HCN disintegration strategy. Then, based on the learned HCN-DDQL model, an algorithm for calculating the optimal HCN disintegration strategy under different states is proposed. Finally, a case study demonstrates the reliability and effectiveness of HCN-DDQL, and the results show that HCN-DDQL can disintegrate HCNs more effectively than baseline methods.
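For reference, the double-DQN target that HCN-DDQL builds on, in its generic form: the online network selects the next action, the target network evaluates it.

```python
# Generic double-DQN target; online net selects, target net evaluates.
import torch

def double_dqn_target(online_q, target_q, reward, next_state, gamma=0.99):
    with torch.no_grad():
        a_star = online_q(next_state).argmax(dim=1, keepdim=True)   # select
        q_eval = target_q(next_state).gather(1, a_star)             # evaluate
    return reward + gamma * q_eval.squeeze(1)
```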
Gastrointestinal (GI) diseases, including gastric and colorectal cancers, significantly impact global health, necessitating accurate and efficient diagnostic methods. Endoscopic examination is the primary diagnostic tool; however, its accuracy is limited by operator dependency and interobserver variability. Advancements in deep learning, particularly convolutional neural networks (CNNs), show great potential for enhancing GI disease detection and classification. This review explores the application of CNNs in endoscopic imaging, focusing on polyp and tumor detection, disease classification, endoscopic ultrasound, and capsule endoscopy analysis. We compare the performance of CNN models with traditional diagnostic methods, highlighting their advantages in accuracy and real-time decision support. Despite promising results, challenges remain, including data availability, model interpretability, and clinical integration. Future directions include improving model generalization, enhancing explainability, and conducting large-scale clinical trials. With continued advancements, CNN-powered artificial intelligence systems could revolutionize GI endoscopy by enhancing early disease detection, reducing diagnostic errors, and improving patient outcomes.
Recommendation systems (RSs) are crucial for personalizing user experiences in digital environments by suggesting relevant content or items. Collaborative filtering (CF) is a widely used personalization technique that leverages user-item interactions to generate recommendations. However, it struggles with challenges such as the cold-start problem, scalability issues, and data sparsity. To address these limitations, we develop a Graph Convolutional Networks (GCNs) model that captures the complex network of interactions between users and items, identifying subtle patterns that traditional methods may overlook. We integrate this GCNs model into a federated learning (FL) framework, enabling the model to learn from decentralized datasets. This not only significantly enhances user privacy, a notable improvement over conventional models, but also reassures users about the safety of their data. Additionally, by securely incorporating demographic information, our approach further personalizes recommendations and mitigates the cold-start issue without compromising user data. We validate our RS model using the open MovieLens dataset and evaluate its performance across six key metrics: Precision, Recall, Area Under the Receiver Operating Characteristic Curve (ROC-AUC), F1 Score, Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR). The experimental results demonstrate significant enhancements in recommendation quality, underscoring that combining GCNs with CF in a federated setting provides a transformative solution for advanced recommendation systems.
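For completeness, a generic NDCG@k computation of the kind used in such evaluations is shown below; this is metric code only, independent of the paper's model.

```python
# Generic NDCG@k over a ranked list of true relevance grades.
import numpy as np

def ndcg_at_k(relevances, k):
    rel = np.asarray(relevances, dtype=float)[:k]
    dcg = np.sum((2 ** rel - 1) / np.log2(np.arange(2, rel.size + 2)))
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = np.sum((2 ** ideal - 1) / np.log2(np.arange(2, ideal.size + 2)))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5))   # ranked list of true relevances
```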
Abnormal network traffic, as a frequent security risk, requires a range of techniques to categorize and detect it. Existing network traffic anomaly detection still faces challenges: the inability to fully extract local and global features, and the lack of effective mechanisms to capture complex interactions between features. Additionally, when the receptive field is enlarged to obtain deeper feature representations, the reliance on increasing network depth leads to a significant increase in computational resource consumption, affecting the efficiency and performance of detection. To address these issues, this paper first proposes a network traffic anomaly detection model based on parallel dilated convolution and residual learning (Res-PDC). To better explore the interactive relationships between features, traffic samples are converted into two-dimensional matrices. A module combining parallel dilated convolutions and residual learning (res-pdc) is designed to extract local and global traffic features at different scales. By utilizing res-pdc modules with different dilation rates, spatial features at different scales can be captured effectively, and feature dependencies spanning wider regions can be explored without increasing computational resources. Secondly, to focus and integrate the information in different feature subspaces and further enhance the interactions among features, multi-head attention is added to Res-PDC, resulting in the final model: multi-head attention enhanced parallel dilated convolution and residual learning (MHA-Res-PDC) for network traffic anomaly detection. Finally, comparisons with other machine learning and deep learning algorithms are conducted on the NSL-KDD and CIC-IDS-2018 datasets. The experimental results demonstrate that the proposed method effectively improves detection performance.
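Our reading of such a unit, as a sketch: parallel dilated convolutions summed with a residual shortcut, then multi-head attention over the flattened feature map. Dilation rates, widths, and head count are assumptions, not the paper's configuration.

```python
# Sketch of a res-pdc-style unit with multi-head attention on top.
import torch
import torch.nn as nn

class ResPDC(nn.Module):
    def __init__(self, c: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=d, dilation=d) for d in dilations)
        self.attn = nn.MultiheadAttention(embed_dim=c, num_heads=4,
                                          batch_first=True)

    def forward(self, x):
        y = torch.relu(sum(b(x) for b in self.branches) + x)  # residual learning
        tokens = y.flatten(2).transpose(1, 2)                 # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)            # feature subspaces
        return out.transpose(1, 2).reshape_as(y)

z = ResPDC(16)(torch.randn(2, 16, 8, 8))   # traffic sample as a 2-D "image"
```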
Physics-informed neural networks (PINNs) are promising replacements for conventional mesh-based partial differential equation (PDE) solvers, offering more accurate and flexible PDE solutions. However, PINNs are hampered by relatively slow convergence and the need to perform additional, potentially expensive training for new PDE parameters. To address this limitation, we introduce LatentPINN, a framework that utilizes latent representations of the PDE parameters as additional inputs (alongside the coordinates) into PINNs and allows for training over the distribution of these parameters. Motivated by recent progress on generative models, we promote using latent diffusion models to learn compressed latent representations of the distribution of PDE parameters, as these act as input parameters for the NN functional solutions. We use a two-stage training scheme: in the first stage, we learn the latent representations for the distribution of PDE parameters; in the second stage, we train a physics-informed neural network over inputs given by randomly drawn samples from the coordinate space within the solution domain and samples from the learned latent representation of the PDE parameters. Considering their importance in capturing evolving interfaces and fronts in various fields, we test the approach on a class of level-set equations given, for example, by the nonlinear Eikonal equation. We share results corresponding to three sets of Eikonal parameters (velocity models). The proposed method performs well on new phase-velocity models without the need for any additional training.
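The physics-informed residual for the Eikonal example, |∇T(x)|² = 1/v(x)², is sketched below; a minimal version in which the learned latent representation of the velocity model is reduced to a single scalar input.

```python
# Minimal Eikonal PINN residual; the latent parameter is simplified to a scalar.
import torch

net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))   # input: (x, z, v)

def eikonal_loss(xz, v):
    xz = xz.requires_grad_(True)
    T = net(torch.cat([xz, v], dim=1))
    g = torch.autograd.grad(T.sum(), xz, create_graph=True)[0]
    return ((g ** 2).sum(dim=1) - 1.0 / v.squeeze(1) ** 2).pow(2).mean()

xz = torch.rand(256, 2)            # collocation points in the solution domain
v = 1.5 + torch.rand(256, 1)       # sampled velocity "parameter"
loss = eikonal_loss(xz, v)
```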