Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interferences. This study introduces a Pseudo-Siamese structure-based disparity regression model that simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate significant improvement in the network's resistance to lighting interference without substantially increasing parameters. Moreover, the model exhibited faster convergence during training, contributing to overall performance enhancement. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
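The abstract does not detail the learned regression, but the quantity being estimated is classical stereo disparity. As a point of reference only (not the authors' pseudo-Siamese network), a minimal sum-of-absolute-differences block-matching baseline illustrates what "disparity" means here; the window size and disparity range are illustrative assumptions.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, window=3):
    """Classical SAD block matching: for each pixel, find the horizontal
    shift d minimizing the absolute difference between a left-image patch
    and the d-shifted right-image patch. Illustrative baseline only; the
    paper replaces this hand-crafted search with a learned model."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))  # best-matching shift
    return disp
```

On a synthetic pair where the right view is the left view shifted by two pixels, the recovered disparity map is uniformly 2 in the interior.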
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integrodifferential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is implemented to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation for optimizing model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to optimize both data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing the performance of FL in heterogeneous, edge-based computing environments.
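The abstract names the two mechanisms (Euclidean-distance client selection, adaptive aggregation weights) but not their exact formulas. A minimal sketch of one plausible interpretation, treating each client model as a flat parameter vector; the inverse-distance weighting and the choice of k are assumptions, not FedCW's published rule:

```python
import numpy as np

def select_and_aggregate(global_w, client_ws, k=3):
    """Illustrative distance-based selection + adaptive weighting:
    pick the k clients closest (Euclidean) to the global model and
    average them with weights inversely proportional to distance."""
    dists = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    chosen = np.argsort(dists)[:k]            # k closest clients
    inv = 1.0 / (dists[chosen] + 1e-8)        # closer -> larger weight
    weights = inv / inv.sum()                 # normalize to sum to 1
    return sum(client_ws[i] * wt for i, wt in zip(chosen, weights))
```

With three clients at distances 0.1, 0.2, and ~7 from a zero global model and k=2, the outlier is excluded and the nearest client receives twice the weight of the second.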
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing numerous omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across all omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions: combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and for understanding complicated disorders.
An image processing and deep learning method for identifying different types of rock images was proposed. Preprocessing, such as rock image acquisition, gray scaling, Gaussian blurring, and feature dimensionality reduction, was conducted to extract useful feature information, and rock images were recognized and classified using a TensorFlow-based convolutional neural network (CNN) and PyQt5. A rock image dataset was established and separated into training, validation, and test sets. The framework was subsequently compiled and trained. The classification approach was evaluated using image data from the validation and test datasets, and key metrics, such as accuracy, precision, and recall, were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicated that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 to recognize and classify rock images has an accuracy rate of up to 98.8% and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to more efficiently and accurately recognize rock samples. Moreover, it can be matched with an intelligent support design system to effectively improve the reliability and economy of the support scheme. The system can serve as a reference for the support design of other mining and underground space projects.
Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
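The Mel-spectrogram front end can be sketched from first principles: a short-time FFT power spectrogram projected onto triangular mel-scale filters. The sample rate, FFT size, hop, and filter count below are illustrative assumptions (real pipelines typically use `librosa.feature.melspectrogram`), not the paper's settings.

```python
import numpy as np

def mel_filterbank(sr=16000, n_fft=512, n_mels=8):
    """Triangular mel filterbank mapping FFT bins to mel bands
    (simplified, unnormalized; parameters are illustrative)."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for j in range(l, c):                      # rising edge
            fb[i - 1, j] = (j - l) / max(c - l, 1)
        for j in range(c, r):                      # falling edge
            fb[i - 1, j] = (r - j) / max(r - c, 1)
    return fb

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=8):
    """Hann-windowed short-time power spectra projected onto mel filters."""
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return mel_filterbank(sr, n_fft, n_mels) @ power.T  # (n_mels, n_frames)
```

For a one-second 440 Hz tone, the energy concentrates in the lowest mel bands, as expected.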
Due to their high mechanical compliance and excellent biocompatibility, conductive hydrogels exhibit significant potential for applications in flexible electronics. However, as the demand for high sensitivity, superior mechanical properties, and strong adhesion performance continues to grow, many conventional fabrication methods remain complex and costly. Herein, we propose a simple and efficient strategy to construct an entangled-network hydrogel through a liquid-metal-induced cross-linking reaction. The resulting hydrogel demonstrates outstanding properties, including exceptional stretchability (1643%), high tensile strength (366.54 kPa), toughness (350.2 kJ m^(−3)), and relatively low mechanical hysteresis. The hydrogel exhibits long-term stable, reusable adhesion (104 kPa), enabling conformal and stable adhesion to human skin. This capability allows it to effectively capture high-quality epidermal electrophysiological signals with a high signal-to-noise ratio (25.2 dB) and low impedance (310 ohms). Furthermore, by integrating advanced machine learning algorithms, the system achieves an attention classification accuracy of 91.38%, which will significantly impact fields such as education, healthcare, and artificial intelligence.
Network architectures assisted by Generative Artificial Intelligence (GAI) are envisioned as foundational elements of the sixth-generation (6G) communication system. To deliver ubiquitous intelligent services and meet diverse service requirements, the 6G network architecture should offer personalized services to various mobile devices. Federated learning (FL) with personalized local training, as a privacy-preserving machine learning (ML) approach, can be applied to address these challenges. In this paper, we propose a meta-learning-based personalized FL (PFL) method that improves both communication and computation efficiency by utilizing over-the-air computations. Its "pretraining-and-fine-tuning" principle makes it particularly suitable for enabling edge nodes to access personalized GAI services while preserving local privacy. Experimental results demonstrate the efficacy of the proposed algorithm and notably indicate enhanced communication efficiency without compromising accuracy.
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer’s diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model gives a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis for every AD stage. The model demonstrates clear separation between AD progression stages according to the confusion matrix analysis. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
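The MCC metric the abstract leans on is easy to state exactly for the binary case (the multi-class generalization used for AD staging is available as `sklearn.metrics.matthews_corrcoef`, which this sketch mirrors):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Binary Matthews Correlation Coefficient from the confusion
    matrix counts: (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when any marginal count is zero, by convention."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

A perfect prediction scores 1.0; chance-level prediction on balanced labels scores 0.0, which is why MCC is preferred over accuracy under class imbalance.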
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and dynamic network topology, which have brought many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS-information-optimized routing algorithm based on reinforcement learning for LEO satellite networks, which guarantees that services with high assurance demands are prioritized under limited satellite resources, while considering the load-balancing performance of the satellite network for services with low assurance demands to ensure full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
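The abstract does not specify the RL formulation, but the core idea of learning routes by reinforcement can be illustrated with tabular Q-learning on a toy topology. The reward shaping below (−1 per hop, +10 at the destination) is a single-metric stand-in; the paper's algorithm would fold QoS terms such as delay and link load into the reward.

```python
import random

def q_route(adj, src, dst, episodes=3000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning route discovery on an adjacency-list graph.
    Reward: -1 per hop, +10 on reaching dst, so shorter paths win.
    Minimal illustrative stand-in for the paper's multi-QoS routing."""
    random.seed(0)
    q = {n: {m: 0.0 for m in adj[n]} for n in adj}
    for _ in range(episodes):
        node = src
        while node != dst:
            nxt = (random.choice(adj[node]) if random.random() < eps
                   else max(q[node], key=q[node].get))   # eps-greedy
            r = 10.0 if nxt == dst else -1.0
            future = max(q[nxt].values()) if nxt != dst else 0.0
            q[node][nxt] += alpha * (r + gamma * future - q[node][nxt])
            node = nxt
    path = [src]                      # greedy rollout of learned policy
    while path[-1] != dst and len(path) < len(adj):
        path.append(max(q[path[-1]], key=q[path[-1]].get))
    return path
```

On a five-node graph where A–B–E is two hops and A–C–D–E is three, the learned policy recovers the shorter route.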
Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as insecure communication, privacy concerns, and the presence of malicious nodes. Existing machine and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND). It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model’s scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
With the rapid development of network technologies, a large number of deployed edge devices and information systems generate massive amounts of data, which provide good support for the advancement of data-driven intelligent models. However, these data often contain sensitive user information. Federated learning (FL), as a privacy-preserving machine learning setting, allows users to obtain a well-trained model without sending privacy-sensitive local data to the central server. Despite the promising prospects of FL, several significant research challenges need to be addressed before widespread deployment, including network resource allocation, model security, and model convergence. In this paper, we first provide a brief survey of the work that has been done on FL and discuss the motivations for Communication Networks (CNs) and FL to mutually enable each other. We analyze the support of network technologies for FL, which requires frequent communication and emphasizes security, as well as studies on the intelligence of many network scenarios and the improvement of network performance and security by FL-based methods. Finally, some challenges and broader perspectives are explored.
6G is expected to support more intelligent networks, and this trend attaches importance to the self-healing capability when degradation emerges in cellular networks. As a primary component of self-healing networks, fault detection is investigated in this paper. Considering its fast response and low time and computational consumption, the Online Broad Learning System (OBLS) is applied for the first time to identify outages in cellular networks. In addition, the Automatic-constructed Online Broad Learning System (AOBLS) is put forward to rationalize its structure and consequently avoid over-fitting and under-fitting. Furthermore, a multi-layer classification structure is proposed to further improve the classification performance. To face the challenges caused by imbalanced data in fault detection problems, a novel weighting strategy is derived to achieve the Multilayer Automatic-constructed Weighted Online Broad Learning System (MAWOBLS), combined with ensemble learning using a retrained Support Vector Machine (SVM), denoted as EMAWOBLS, to better handle this imbalance issue. Simulation results show that the proposed algorithm has excellent performance in detecting faults with satisfactory time usage.
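The paper derives its own weighting strategy for the broad learning system; as background only, the standard "balanced" inverse-frequency heuristic shows the kind of per-class weight such strategies produce, giving rare fault classes proportionally larger influence in the loss:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency ('balanced') class weights: w_c = n / (k * n_c)
    for n samples, k classes, and n_c samples in class c. Rare fault
    classes get weights > 1 so the normal class does not dominate.
    Standard heuristic, not the paper's derived OBLS weighting."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * v) for c, v in counts.items()}
```

With 8 normal samples and 2 fault samples, the fault class receives weight 2.5 versus 0.625 for the normal class, a 4:1 ratio matching the 4:1 imbalance.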
The forthcoming sixth generation (6G) of mobile communication networks is envisioned to be AI-native, supporting intelligent services and pervasive computing at unprecedented scale. Among the key paradigms enabling this vision, Federated Learning (FL) has gained prominence as a distributed machine learning framework that allows multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized storage. This capability is particularly attractive for vision-based applications, where image and video data are both sensitive and bandwidth-intensive. However, the integration of FL with 6G networks presents unique challenges, including communication bottlenecks, device heterogeneity, and trade-offs between model accuracy, latency, and energy consumption. In this paper, we developed a simulation-based framework to investigate the performance of FL in representative vision tasks under 6G-like environments. We formalize the system model, incorporating both the federated averaging (FedAvg) training process and a simplified communication cost model that captures bandwidth constraints, packet loss, and variable latency across edge devices. Using standard image datasets (e.g., MNIST, CIFAR-10) as benchmarks, we analyze how factors such as the number of participating clients, degree of data heterogeneity, and communication frequency influence convergence speed and model accuracy. Additionally, we evaluate the effectiveness of lightweight communication-efficient strategies, including local update tuning and gradient compression, in mitigating network overhead. The experimental results reveal several key insights: (i) communication limitations can significantly degrade FL convergence in vision tasks if not properly addressed; (ii) judicious tuning of local training epochs and client participation levels enables notable improvements in both efficiency and accuracy; and (iii) communication-efficient FL strategies provide a promising pathway to balance performance with the stringent latency and reliability requirements expected in 6G. These findings highlight the synergistic role of AI and next-generation networks in enabling privacy-preserving, real-time vision applications, and they provide concrete design guidelines for researchers and practitioners working at the intersection of FL and 6G.
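The FedAvg aggregation step at the heart of this framework is simple to state: the server averages client parameters weighted by local dataset size, and raw images never leave the device. A minimal sketch with parameters as flat numpy vectors:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round (McMahan et al.): the global model
    is the dataset-size-weighted mean of the clients' parameter vectors.
    Only these vectors are uploaded, never the clients' raw data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

With clients holding 1 and 3 samples respectively, the second client's parameters contribute three quarters of the aggregate, which is exactly how data heterogeneity feeds into convergence behavior.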
With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks (MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence (AI) intrusion detection techniques turn intrusion detection into a classification problem, where AI excels. These techniques assume sufficient high-quality instances for model construction, an assumption often unmet in real-world operation, where attack instances are limited and their characteristics constantly evolve. This paper proposes an Adaptive Personalized Federated learning (APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed (Non-IID) data are mitigated, enabling the intrusion detection model to possess personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks with effective adaptation to the MMSN environment. Finally, extensive experiments on a classical network dataset show that the attack classification accuracy is improved by about 5% compared to most baselines in global scenarios.
Container-based virtualization technology has recently been more widely used in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). These CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between multiple containers in CCs can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Microseismic monitoring technology is widely used in tunnel and coal mine safety production. For signals generated by ultra-weak microseismic events, traditional sensors encounter limitations in detection sensitivity. Given the complex engineering environment, automatic multi-class classification of microseismic data is highly required. In this study, we use acceleration sensors to collect signals and combine an improved Visual Geometry Group (VGG) network with a convolutional block attention module to obtain a new network structure, termed CNN_BAM, for automatic classification and identification of microseismic events. We use the dataset collected from the Hanjiang-to-Weihe River Diversion Project to train and validate the network model. Results show that the CNN_BAM model exhibits good feature extraction ability, achieving a recognition accuracy of 99.29% and surpassing all its counterparts. The stability and accuracy of the classification algorithm improve remarkably. In addition, through fine-tuning and migration to the Pan Ⅱ Mine Project, the network demonstrates reliable generalization performance. This outcome reflects its adaptability across different projects and promising application prospects.
The rapid increase in the number of Internet of Things (IoT) devices, coupled with a rise in sophisticated cyberattacks, demands robust intrusion detection systems. This study presents a holistic, intelligent intrusion detection system. It uses a combined method that integrates machine learning (ML) and deep learning (DL) techniques to improve the protection of contemporary information technology (IT) systems. Unlike traditional signature-based or single-model methods, this system integrates the strengths of ensemble learning for binary classification and deep learning for multi-class classification. This combination provides a more nuanced and adaptable defense. The research utilizes the NF-UQ-NIDS-v2 dataset, a recent, comprehensive benchmark for evaluating network intrusion detection systems (NIDS). Our methodological framework employs advanced artificial intelligence techniques. Specifically, we use ensemble learning algorithms (Random Forest, Gradient Boosting, AdaBoost, and XGBoost) for binary classification. Deep learning architectures are also employed to address the complexities of multi-class classification, allowing for fine-grained identification of intrusion types. To mitigate class imbalance, a common problem in multi-class intrusion detection that biases model performance, we use oversampling and data augmentation. These techniques ensure equitable class representation. The results demonstrate the efficacy of the proposed hybrid ML-DL system. It achieves significant improvements in intrusion detection accuracy and reliability. This research contributes substantively to cybersecurity by providing a more robust and adaptable intrusion detection solution.
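The simplest form of the oversampling step mentioned above is random duplication of minority-class samples until every class matches the majority count. This naive sketch is illustrative only (the paper also uses data augmentation; `imblearn`'s `RandomOverSampler` or SMOTE are the usual library routes):

```python
import random

def random_oversample(X, y, seed=0):
    """Naive random oversampling: duplicate randomly chosen minority-
    class samples until every class reaches the majority-class count.
    Applied to training data only, never to the evaluation split."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(v) for v in by_class.values())
    Xo, yo = [], []
    for cls, items in by_class.items():
        extra = [rng.choice(items) for _ in range(target - len(items))]
        for xi in items + extra:
            Xo.append(xi)
            yo.append(cls)
    return Xo, yo
```

A 3-versus-2 class split becomes a balanced 3-versus-3 split after resampling.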
Wireless Sensor Networks (WSNs) play a critical role in automated border surveillance systems, where continuous monitoring is essential. However, limited energy resources in sensor nodes lead to frequent network failures and reduced coverage over time. To address this issue, this paper presents an innovative energy-efficient protocol based on deep Q-learning (DQN), specifically developed to prolong the operational lifespan of WSNs used in border surveillance. By harnessing the adaptive power of DQN, the proposed protocol dynamically adjusts node activity and communication patterns. This approach ensures optimal energy usage while maintaining high coverage, connectivity, and data accuracy. The proposed system is modeled with 100 sensor nodes deployed over a 1000 m × 1000 m area, featuring a strategically positioned sink node. Our method outperforms traditional approaches, achieving significant enhancements in network lifetime and energy utilization. Through extensive simulations, it is observed that the network lifetime increases by 9.75%, throughput increases by 8.85%, and average delay decreases by 9.45% in comparison to similar recent protocols. These results demonstrate the robustness and efficiency of our protocol in real-world scenarios, highlighting its potential to revolutionize border surveillance operations.
Urban traffic prediction with high precision is the unremitting pursuit of intelligent transportation systems and is instrumental in bringing smart cities into reality. The fundamental challenges for traffic prediction lie in the accurate modelling of spatial and temporal traffic dynamics. Existing approaches mainly focus on modelling the traffic data itself, but do not explore the traffic correlations implicit in origin-destination (OD) data. In this paper, we propose STOD-Net, a dynamic spatial-temporal OD feature-enhanced deep network, to simultaneously predict the in-traffic and out-traffic for each and every region of a city. We model the OD data as dynamic graphs and adopt graph neural networks in STOD-Net to learn a low-dimensional representation for each region. Based on the region features, we design a gating mechanism and apply it to traffic feature learning to explicitly capture spatial correlations. To further capture the complicated spatial and temporal dependencies among different regions, we propose a novel joint feature-learning block in STOD-Net and transfer the hybrid OD features to each block to make the learning process spatiotemporal-aware. We evaluate the effectiveness of STOD-Net on two benchmark datasets, and experimental results demonstrate that it outperforms the state-of-the-art by approximately 5% in terms of prediction accuracy and considerably improves prediction stability, by up to 80% in terms of standard deviation.
Funding: Supported by the Sichuan Science and Technology Program (2023YFSY0026, 2023YFH0004) and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. RS-2022-00155885, Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)).
Abstract: Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interferences. The proposed Pseudo-Siamese disparity regression model simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate a significant improvement in the network's resistance to lighting interference without substantially increasing the parameter count. Moreover, the model exhibited faster convergence during training, contributing to overall performance enhancement. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
Abstract: Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered from sparse or noisy data.
Abstract: With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation to optimize model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to balance data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing FL performance in heterogeneous, edge-based computing environments.
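The distance-based selection and weighting idea can be sketched in a few lines. This is a minimal sketch, not FedCW's exact scheme: the function name, the k-nearest selection rule, and the inverse-distance weights are all illustrative assumptions.

```python
import numpy as np

def select_and_aggregate(global_w, client_ws, k=2):
    """Select the k clients whose updates lie closest to the global model
    (Euclidean distance) and aggregate them with inverse-distance weights.
    Hypothetical FedCW-style rule for illustration only."""
    dists = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    chosen = np.argsort(dists)[:k]          # indices of the k nearest clients
    inv = 1.0 / (dists[chosen] + 1e-8)      # closer client => larger weight
    weights = inv / inv.sum()
    return sum(wt * client_ws[i] for wt, i in zip(weights, chosen))

global_w = np.zeros(3)
client_ws = [np.array([1.0, 0.0, 0.0]),     # close to the global model
             np.array([5.0, 5.0, 5.0]),     # an outlier client
             np.array([0.0, 1.0, 0.0])]     # also close
new_w = select_and_aggregate(global_w, client_ws, k=2)
```

With k=2 the outlier client is excluded entirely, which is the intuition behind distance-based selection in non-IID settings: updates far from the global model are more likely to destabilize convergence.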
Abstract: With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across all omics data types. Deep learning has been found to be effective in illness classification, biomarker identification, gene-network learning, and therapeutic-efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions: combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
Funding: Financially supported by the National Science and Technology Major Project "Deep Earth Probe and Mineral Resources Exploration" (No. 2024ZD1003701) and the National Key R&D Program of China (No. 2022YFC2905004).
Abstract: An image processing and deep learning method for identifying different types of rock images was proposed. Preprocessing steps such as rock image acquisition, grayscaling, Gaussian blurring, and feature dimensionality reduction were conducted to extract useful feature information, and rock images were recognized and classified using a TensorFlow-based convolutional neural network (CNN) and PyQt5. A rock image dataset was established and separated into training, validation, and test sets. The network was subsequently compiled and trained. The classification approach was evaluated using image data from the validation and test sets, and key metrics such as accuracy, precision, and recall were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicated that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 to recognize and classify rock images achieves an accuracy of up to 98.8% and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to recognize rock samples more efficiently and accurately. Moreover, it can be matched with an intelligent support design system to effectively improve the reliability and economy of support schemes, and can serve as a reference for the support design of other mining and underground space projects.
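The grayscaling and Gaussian blurring steps can be sketched without any imaging library. The luminance coefficients and kernel size below are common defaults, assumed here rather than taken from the paper.

```python
import numpy as np

def to_gray(rgb):
    # Luminance-weighted grayscale conversion (ITU-R BT.601 coefficients).
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_blur(img, sigma=1.0, radius=2):
    # Separable Gaussian blur: build a 1-D kernel, convolve rows then columns.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

rgb = np.random.rand(16, 16, 3)   # toy stand-in for a rock photograph
gray = to_gray(rgb)
smooth = gaussian_blur(gray)      # noise suppressed before feature extraction
```

Blurring before dimensionality reduction suppresses sensor noise so that the CNN's learned features respond to texture and grain structure rather than pixel-level jitter.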
Funding: Funded by the Directorate of Research and Community Service, Directorate General of Research and Development, Ministry of Higher Education, Science and Technology, in accordance with the Implementation Contract for the Operational Assistance Program for State Universities, Research Program Number: 109/C3/DT.05.00/PL/2025.
Abstract: Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false-alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
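A simplified version of the spectrogram feature extraction might look as follows. This sketch uses a plain log-magnitude STFT rather than a full Mel filterbank, and the frame length, hop size, and toy signal are illustrative assumptions.

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Frame the signal, apply a Hann window, and take the log-magnitude
    FFT of each frame: a simplified stand-in for the Mel-spectrogram
    features described above (the mel filterbank is omitted for brevity)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrum per frame
    return np.log1p(mag)                        # log compression

sr = 8000
t = np.arange(sr) / sr
# Toy stand-in for a "crackling" recording: a tone plus broadband noise.
crackle = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)
feat = log_spectrogram(crackle)                 # (frames, frequency bins)
```

The resulting 2-D time-frequency array is what a CNN classifier would consume as an image-like input.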
Funding: Supported by the National Key Research & Development Program of China (grant no. 2022YFC3500503), the National Natural Science Foundation of China (grant nos. 62227807, 12374171, 12004034, 62402041), the Beijing Institute of Technology Research Fund Program for Young Scholars, China, the Fundamental Research Funds for the Central Universities (grant no. 2024CX06060), and the Beijing Youth Talent Lifting Project.
Abstract: Due to their high mechanical compliance and excellent biocompatibility, conductive hydrogels exhibit significant potential for applications in flexible electronics. However, as the demand for high sensitivity, superior mechanical properties, and strong adhesion continues to grow, many conventional fabrication methods remain complex and costly. Herein, we propose a simple and efficient strategy to construct an entangled-network hydrogel through a liquid-metal-induced cross-linking reaction. The resulting hydrogel demonstrates outstanding properties, including exceptional stretchability (1643%), high tensile strength (366.54 kPa), toughness (350.2 kJ m^(−3)), and relatively low mechanical hysteresis. The hydrogel exhibits long-term stable, reusable adhesion (104 kPa), enabling conformal and stable adhesion to human skin. This capability allows it to effectively capture high-quality epidermal electrophysiological signals with a high signal-to-noise ratio (25.2 dB) and low impedance (310 ohms). Furthermore, by integrating machine learning algorithms, the system achieves an attention-classification accuracy of 91.38%, which could significantly impact fields such as education, healthcare, and artificial intelligence.
Funding: Supported in part by the National Key R&D Program of China under Grant 2024YFE0200700, and in part by the National Natural Science Foundation of China under Grant 62201504.
Abstract: Network architectures assisted by Generative Artificial Intelligence (GAI) are envisioned as foundational elements of sixth-generation (6G) communication systems. To deliver ubiquitous intelligent services and meet diverse service requirements, the 6G network architecture should offer personalized services to various mobile devices. Federated learning (FL) with personalized local training, as a privacy-preserving machine learning (ML) approach, can be applied to address these challenges. In this paper, we propose a meta-learning-based personalized FL (PFL) method that improves both communication and computation efficiency by utilizing over-the-air computation. Its "pretraining-and-fine-tuning" principle makes it particularly suitable for enabling edge nodes to access personalized GAI services while preserving local privacy. Experimental results demonstrate the superior performance and efficacy of the proposed algorithm, and notably indicate enhanced communication efficiency without compromising accuracy.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. DGSSR-2025-02-01295.
Abstract: Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer’s diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class-weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while reducing misdiagnoses at every AD stage. The confusion-matrix analysis shows that the model clearly separates AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
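The contrast-enhancement idea behind CLAHE can be illustrated with plain global histogram equalization. CLAHE additionally tiles the image and clips each local histogram before building the lookup table; only the core CDF-remapping step is shown here, on invented toy data.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization on an 8-bit grayscale image.
    CLAHE extends this by working on local tiles with clipped histograms;
    this sketch shows only the shared CDF-remapping core."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)          # intensity lookup table
    return lut[img]

rng = np.random.default_rng(0)
# Low-contrast toy "MRI slice": intensities clustered around 120.
low_contrast = np.clip(rng.normal(120, 10, (32, 32)), 0, 255).astype(np.uint8)
enhanced = hist_equalize(low_contrast)
```

After remapping, the occupied intensities are spread across the full 0-255 range, which is why equalization helps downstream feature extractors on low-contrast scans.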
Funding: National Key Research and Development Program (2021YFB2900604).
Abstract: Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources on individual satellite nodes and a dynamic network topology, which bring many challenges for routing algorithms. To satisfy the quality-of-service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS-information optimized routing algorithm based on reinforcement learning for LEO satellite networks, which guarantees that services with high assurance demands are prioritized under limited satellite resources, while considering the load-balancing performance of the satellite network for services with low assurance demands to ensure full and effective utilization of satellite resources. An auxiliary path-search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
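A tabular Q-learning next-hop selector on a toy topology illustrates the reinforcement-learning routing idea. The graph, link delays, and hyperparameters below are invented for illustration; the paper's algorithm additionally handles QoS classes and load balancing, which this sketch omits.

```python
import random

# Toy topology: node -> {neighbor: link delay}. Names are illustrative only.
links = {"A": {"B": 1.0, "C": 5.0}, "B": {"D": 1.0}, "C": {"D": 1.0}, "D": {}}
DEST = "D"

Q = {n: {nb: 0.0 for nb in nbs} for n, nbs in links.items()}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(500):                          # training episodes
    node = "A"
    while node != DEST:
        nbs = list(links[node])
        # Epsilon-greedy next-hop choice: explore sometimes, else best Q.
        nxt = random.choice(nbs) if random.random() < eps else max(nbs, key=Q[node].get)
        reward = -links[node][nxt]            # shorter link delay => higher reward
        future = max(Q[nxt].values()) if Q[nxt] else 0.0
        Q[node][nxt] += alpha * (reward + gamma * future - Q[node][nxt])
        node = nxt

best_first_hop = max(Q["A"], key=Q["A"].get)  # learned route prefers A -> B -> D
```

After training, the Q-table encodes end-to-end delay, so each node can pick its next hop greedily without a global route computation, which is the appeal of RL routing under a dynamic topology.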
Funding: Funded by the Northern Border University, Arar, KSA, under project number NBU-FFR-2025-3555-07.
Abstract: Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as insecure communication, privacy concerns, and the presence of malicious nodes. Existing machine- and deep-learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND). It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
Funding: Supported by the National Key Research and Development Program of China (No. 2023YFB2704200) and the Beijing Natural Science Foundation (No. 4254064).
Abstract: With the rapid development of network technologies, a large number of deployed edge devices and information systems generate massive amounts of data, which provide good support for the advancement of data-driven intelligent models. However, these data often contain sensitive user information. Federated learning (FL), as a privacy-preserving machine learning setting, allows users to obtain a well-trained model without sending privacy-sensitive local data to a central server. Despite the promising prospects of FL, several significant research challenges need to be addressed before widespread deployment, including network resource allocation, model security, and model convergence. In this paper, we first provide a brief survey of work on FL and discuss the motivations for Communication Networks (CNs) and FL to mutually enable each other. We analyze how network technologies support FL, which requires frequent communication and emphasizes security, as well as studies that bring intelligence to many network scenarios and improve network performance and security with FL-based methods. Finally, some challenges and broader perspectives are explored.
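The FL setting described above is usually illustrated with the canonical FedAvg aggregation rule, sketched below with toy parameter vectors. This is a minimal sketch of the generic rule, not the method of any particular system surveyed here.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg round: average client model parameters weighted by local
    dataset size. Only model updates cross the network; raw data stays
    on the clients, which is the privacy property FL relies on."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Three clients holding different amounts of local data.
ws = [np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([5.0, 5.0])]
sizes = [10, 10, 20]
global_w = fedavg(ws, sizes)    # weighted toward the client with more data
```

The size weighting makes the aggregate equivalent to one gradient step over the pooled data under IID assumptions; the communication cost of repeatedly shipping these vectors is exactly what motivates the network-side optimizations the survey discusses.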
Funding: Supported in part by the National Key Research and Development Project under Grant 2020YFB1806805, and partially funded through a grant from Qualcomm.
Abstract: 6G is expected to support more intelligent networks, and this trend places importance on self-healing capability when degradation emerges in cellular networks. As a primary component of self-healing networks, fault detection is investigated in this paper. Considering its fast response and low time and computational consumption, this is the first time the Online Broad Learning System (OBLS) has been applied to identify outages in cellular networks. In addition, the Automatic-constructed Online Broad Learning System (AOBLS) is put forward to rationalize its structure and consequently avoid over-fitting and under-fitting. Furthermore, a multi-layer classification structure is proposed to further improve classification performance. To face the challenges caused by imbalanced data in fault detection problems, a novel weighting strategy is derived to achieve the Multilayer Automatic-constructed Weighted Online Broad Learning System (MAWOBLS), and ensemble learning with a retrained Support Vector Machine (SVM), denoted as EMAWOBLS, to better handle this imbalance issue. Simulation results show that the proposed algorithm has excellent performance in detecting faults with satisfactory time usage.
Abstract: The forthcoming sixth generation (6G) of mobile communication networks is envisioned to be AI-native, supporting intelligent services and pervasive computing at unprecedented scale. Among the key paradigms enabling this vision, Federated Learning (FL) has gained prominence as a distributed machine learning framework that allows multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized storage. This capability is particularly attractive for vision-based applications, where image and video data are both sensitive and bandwidth-intensive. However, the integration of FL with 6G networks presents unique challenges, including communication bottlenecks, device heterogeneity, and trade-offs between model accuracy, latency, and energy consumption. In this paper, we develop a simulation-based framework to investigate the performance of FL in representative vision tasks under 6G-like environments. We formalize the system model, incorporating both the federated averaging (FedAvg) training process and a simplified communication cost model that captures bandwidth constraints, packet loss, and variable latency across edge devices. Using standard image datasets (e.g., MNIST, CIFAR-10) as benchmarks, we analyze how factors such as the number of participating clients, the degree of data heterogeneity, and communication frequency influence convergence speed and model accuracy. Additionally, we evaluate the effectiveness of lightweight communication-efficient strategies, including local update tuning and gradient compression, in mitigating network overhead. The experimental results reveal several key insights: (i) communication limitations can significantly degrade FL convergence in vision tasks if not properly addressed; (ii) judicious tuning of local training epochs and client participation levels enables notable improvements in both efficiency and accuracy; and (iii) communication-efficient FL strategies provide a promising pathway to balance performance with the stringent latency and reliability requirements expected in 6G. These findings highlight the synergistic role of AI and next-generation networks in enabling privacy-preserving, real-time vision applications, and they provide concrete design guidelines for researchers and practitioners working at the intersection of FL and 6G.
Funding: Supported by the National Natural Science Foundation of China under Grant 62371181, the Project on Excellent Postgraduate Dissertation of Hohai University (422003482), and the Changzhou Science and Technology International Cooperation Program under Grant CZ20230029.
Abstract: With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks (MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence (AI) intrusion detection techniques turn intrusion detection into a classification problem, where AI excels. These techniques assume sufficient high-quality instances for model construction, an assumption often unmet in real-world operation, where attack instances are limited and their characteristics constantly evolve. This paper proposes an Adaptive Personalized Federated learning (APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed (Non-IID) data are mitigated, giving the intrusion detection model personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks with effective adaptation to the MMSN environment. Finally, extensive experiments on a classical network dataset show that attack classification accuracy is improved by about 5% compared to most baselines in global scenarios.
Abstract: Container-based virtualization technology has recently been more widely used in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue, and service efficiency and energy consumption as cost, thus formulating a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, the introduction of GCN allows the features of the association relationships between the multiple containers in a CC to be effectively extracted, improving the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Funding: Supported by the Key Research and Development Plan of Anhui Province (202104a05020059) and the Excellent Scientific Research and Innovation Team of Anhui Province (2022AH010003); support from the Hefei Comprehensive National Science Center is highly appreciated.
Abstract: Microseismic monitoring technology is widely used in tunnel and coal mine safety production. For signals generated by ultra-weak microseismic events, traditional sensors encounter limitations in detection sensitivity. Given the complex engineering environment, automatic multi-class classification of microseismic data is highly desirable. In this study, we use acceleration sensors to collect signals and combine an improved Visual Geometry Group network with a convolutional block attention module to obtain a new network structure, termed CNN_BAM, for automatic classification and identification of microseismic events. We use a dataset collected from the Hanjiang-to-Weihe River Diversion Project to train and validate the network model. Results show that the CNN_BAM model exhibits good feature-extraction ability, achieving a recognition accuracy of 99.29% and surpassing all its counterparts. The stability and accuracy of the classification algorithm improve remarkably. In addition, through fine-tuning and migration to the Pan Ⅱ Mine Project, the network demonstrates reliable generalization performance. This outcome reflects its adaptability across different projects and promising application prospects.
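The channel-attention half of a convolutional block attention module can be sketched as follows. The weights, shapes, and reduction size are toy values, and the spatial-attention branch that a full CBAM also contains is omitted.

```python
import numpy as np

def channel_attention(feat, W1, W2):
    """CBAM-style channel attention sketch: squeeze each channel with
    global average- and max-pooling, pass both through a shared two-layer
    MLP, and gate the channels with a sigmoid of the summed outputs."""
    C = feat.shape[0]
    avg = feat.reshape(C, -1).mean(axis=1)       # global average pool per channel
    mx = feat.reshape(C, -1).max(axis=1)         # global max pool per channel
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0)   # shared bottleneck MLP (ReLU)
    gate = 1 / (1 + np.exp(-(mlp(avg) + mlp(mx))))
    return feat * gate[:, None, None]            # reweight channels in (0, 1)

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 4, 4))            # (channels, H, W) feature map
W1 = rng.standard_normal((2, 8)) * 0.1           # reduce 8 channels to 2
W2 = rng.standard_normal((8, 2)) * 0.1           # expand back to 8
out = channel_attention(feat, W1, W2)
```

The learned gate lets the network emphasize channels whose filters respond to microseismic signatures and suppress channels dominated by background vibration.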
Abstract: The rapid increase in the number of Internet of Things (IoT) devices, coupled with a rise in sophisticated cyberattacks, demands robust intrusion detection systems. This study presents a holistic, intelligent intrusion detection system that integrates machine learning (ML) and deep learning (DL) techniques to improve the protection of contemporary information technology (IT) systems. Unlike traditional signature-based or single-model methods, this system combines the strengths of ensemble learning for binary classification and deep learning for multi-class classification, providing a more nuanced and adaptable defense. The research utilizes the NF-UQ-NIDS-v2 dataset, a recent, comprehensive benchmark for evaluating network intrusion detection systems (NIDS). Our methodological framework employs ensemble learning algorithms (Random Forest, Gradient Boosting, AdaBoost, and XGBoost) for binary classification, and deep learning architectures to address the complexities of multi-class classification, allowing fine-grained identification of intrusion types. To mitigate class imbalance, a common problem in multi-class intrusion detection that biases model performance, we use oversampling and data augmentation. These techniques ensure equitable class representation. The results demonstrate the efficacy of the proposed hybrid ML-DL system, which achieves significant improvements in intrusion detection accuracy and reliability. This research contributes substantively to cybersecurity by providing a more robust and adaptable intrusion detection solution.
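The oversampling step for class balancing can be as simple as duplicating minority-class rows until every class matches the majority count. A minimal sketch on toy data follows; the study may use more sophisticated augmentation than plain duplication.

```python
import numpy as np

def oversample(X, y, seed=0):
    """Random oversampling: resample minority-class rows with replacement
    until each class reaches the majority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, n in zip(classes, counts):
        if n < target:
            idx = rng.choice(np.where(y == c)[0], size=target - n, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.vstack(Xs), np.concatenate(ys)

X = np.arange(10).reshape(5, 2).astype(float)
y = np.array([0, 0, 0, 0, 1])     # class 1 (an attack type) is the minority
Xb, yb = oversample(X, y)         # balanced: four rows of each class
```

Balancing before training prevents the classifier from trivially favoring the majority class, which is especially important for rare attack categories in multi-class NIDS.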
Funding: Funded by Sardar Vallabhbhai National Institute of Technology through SEED grant No. Dean(R&C)/SEED Money/2021-22/11153, dated 08/02/2022; supported by the Business Finland EWARE-6G project under the 6G Bridge program, and in part by the Horizon Europe (Smart Networks and Services Joint Undertaking) program under Grant Agreement No. 101096838 (6G-XR project).
Abstract: Wireless Sensor Networks (WSNs) play a critical role in automated border surveillance systems, where continuous monitoring is essential. However, limited energy resources in sensor nodes lead to frequent network failures and reduced coverage over time. To address this issue, this paper presents an innovative energy-efficient protocol based on deep Q-learning (DQN), specifically developed to prolong the operational lifespan of WSNs used in border surveillance. By harnessing the adaptive power of DQN, the proposed protocol dynamically adjusts node activity and communication patterns. This approach ensures optimal energy usage while maintaining high coverage, connectivity, and data accuracy. The proposed system is modeled with 100 sensor nodes deployed over a 1000 m × 1000 m area, featuring a strategically positioned sink node. Our method outperforms traditional approaches, achieving significant enhancements in network lifetime and energy utilization. Through extensive simulations, it is observed that network lifetime increases by 9.75%, throughput increases by 8.85%, and average delay decreases by 9.45% in comparison to similar recent protocols. This demonstrates the robustness and efficiency of our protocol in real-world scenarios, highlighting its potential to revolutionize border surveillance operations.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62401338), the Shandong Province Excellent Youth Science Fund Project (Overseas) (Grant No. 2024HWYQ-028), and the Fundamental Research Funds of Shandong University.
Abstract: Urban traffic prediction with high precision is the unremitting pursuit of intelligent transportation systems and is instrumental in bringing smart cities into reality. The fundamental challenges for traffic prediction lie in the accurate modelling of spatial and temporal traffic dynamics. Existing approaches mainly focus on modelling the traffic data itself, but do not explore the traffic correlations implicit in origin-destination (OD) data. In this paper, we propose STOD-Net, a dynamic spatial-temporal OD feature-enhanced deep network, to simultaneously predict the in-traffic and out-traffic for every region of a city. We model the OD data as dynamic graphs and adopt graph neural networks in STOD-Net to learn a low-dimensional representation for each region. Based on the region features, we design a gating mechanism and apply it to traffic feature learning to explicitly capture spatial correlations. To further capture the complicated spatial and temporal dependencies among different regions, we propose a novel joint feature-learning block in STOD-Net and transfer the hybrid OD features to each block to make the learning process spatiotemporal-aware. We evaluate the effectiveness of STOD-Net on two benchmark datasets, and experimental results demonstrate that it outperforms the state-of-the-art by approximately 5% in prediction accuracy and considerably improves prediction stability, by up to 80% in terms of standard deviation.
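The gating idea, a sigmoid gate computed from region embeddings that modulates the traffic features elementwise, can be sketched as follows. All shapes and names are illustrative assumptions, not STOD-Net's exact design.

```python
import numpy as np

def gated_fusion(traffic_feat, region_emb, Wg):
    """Sketch of an OD-conditioned gating step: a sigmoid gate derived from
    each region's embedding scales that region's traffic features, letting
    OD-correlated regions amplify or suppress feature channels."""
    gate = 1 / (1 + np.exp(-(region_emb @ Wg)))  # (regions, d), values in (0, 1)
    return gate * traffic_feat                   # elementwise modulation

rng = np.random.default_rng(2)
R, d = 6, 4                                      # 6 regions, 4 feature dims
traffic_feat = rng.standard_normal((R, d))       # learned traffic features
region_emb = rng.standard_normal((R, 3))         # OD-graph region embeddings
Wg = rng.standard_normal((3, d))                 # gate projection weights
fused = gated_fusion(traffic_feat, region_emb, Wg)
```

Because the gate is a function of the OD-derived embedding rather than of the traffic features themselves, spatial correlations carried by the OD graph directly shape how each region's traffic signal is weighted.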