Journal Articles
30,752 articles found
A Novel Self-Supervised Learning Network for Binocular Disparity Estimation (Cited: 1)
Authors: Jiawei Tian, Yu Zhou, Xiaobing Chen, Salman A. AlQahtani, Hongrong Chen, Bo Yang, Siyu Lu, Wenfeng Zheng. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2025, Issue 1, pp. 209-229 (21 pages)
Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interferences. This study introduces a Pseudo-Siamese structure-based disparity regression model that simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate significant improvement in the network's resistance to lighting interference without substantially increasing parameters. Moreover, the model exhibited faster convergence during training, contributing to overall performance enhancement. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
Keywords: Parallax estimation; parallax regression model; self-supervised learning; Pseudo-Siamese neural network; pyramid dilated convolution; binocular disparity estimation
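The left-right comparison at the heart of this abstract's disparity regression can be illustrated with the classical block-matching baseline that learned models improve upon: slide a patch from the left image along the matching scanline of the right image and keep the shift with the lowest sum-of-squared-differences cost. This is a minimal NumPy sketch of the general technique, not the paper's Pseudo-Siamese network; all sizes and the synthetic image pair are illustrative.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, patch=3):
    """Per-pixel disparity via SSD block matching along scanlines."""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            # cost of matching the left patch at each candidate shift d
            costs = [np.sum((ref - right[y - r:y + r + 1,
                                         x - d - r:x - d + r + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic stereo pair: the right view is the left view shifted left by
# 2 pixels, so the true disparity is 2 wherever it is observable.
rng = np.random.default_rng(0)
left = rng.random((12, 24))
right = np.roll(left, -2, axis=1)
d = block_match_disparity(left, right, max_disp=4)
```

Learned disparity networks replace this hand-crafted SSD cost with features intended to stay stable under specular reflections and flat texture.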
DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS (Cited: 1)
Authors: WANG Chen-yao, SHI Feng. 《数学杂志》, 2025, Issue 1, pp. 13-38 (26 pages)
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: Delay integro-differential equation; multi-task learning; parameter sharing structure; deep neural network; sequential training scheme
FedCW: Client Selection with Adaptive Weight in Heterogeneous Federated Learning
Authors: Haotian Wu, Jiaming Pei, Jinhai Li. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 1551-1570 (20 pages)
With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation for optimizing model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to optimize both data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing the performance of FL in heterogeneous, edge-based computing environments.
Keywords: Federated learning; non-IID; client selection; weight allocation; vehicular networks
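The selection rule stated in this abstract, picking clients by the Euclidean distance of their updates from the global model and weighting the aggregation adaptively, can be sketched in a few lines. The softmax weighting, the parameter names, and the toy updates below are illustrative assumptions, not FedCW's published specification.

```python
import numpy as np

def select_and_aggregate(global_w, client_ws, k=2, temp=1.0):
    """Distance-based client selection with adaptive aggregation weights:
    keep the k client models closest to the global model (Euclidean
    distance) and average them with softmax weights favouring nearer ones."""
    dists = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    chosen = np.argsort(dists)[:k]        # indices of the k nearest clients
    logits = -dists[chosen] / temp        # nearer update -> larger weight
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    new_w = sum(a * client_ws[i] for a, i in zip(weights, chosen))
    return new_w, chosen, weights

global_w = np.zeros(3)
client_ws = [np.array([0.1, 0.0, 0.1]),   # close to the global model
             np.array([0.2, -0.1, 0.0]),  # close-ish
             np.array([5.0, 5.0, 5.0])]   # outlier / highly non-IID client
new_w, chosen, weights = select_and_aggregate(global_w, client_ws, k=2)
```

A deployed system would also account for staleness, data volume, and channel quality; this only demonstrates the distance-based selection idea.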
Harnessing deep learning for the discovery of latent patterns in multi-omics medical data
Authors: Okechukwu Paul-Chima Ugwu, Fabian COgenyi, Chinyere Nkemjika Anyanwu, Melvin Nnaemeka Ugwu, Esther Ugo Alum, Mariam Basajja, Joseph Obiezu Chukwujekwu Ezeonwumelu, Daniel Ejim Uti, Ibe Michael Usman, Chukwuebuka Gabriel Eze, Simeon Ikechukwu Egba. 《Medical Data Mining》, 2026, Issue 1, pp. 32-45 (14 pages)
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions for combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and the understanding of complex disorders.
Keywords: deep learning; multi-omics integration; biomedical data mining; precision medicine; graph neural networks; autoencoders and transformers
Research on the visualization method of lithology intelligent recognition based on deep learning using mine tunnel images
Authors: Aiai Wang, Shuai Cao, Erol Yilmaz, Hui Cao. 《International Journal of Minerals, Metallurgy and Materials》, 2026, Issue 1, pp. 141-152 (12 pages)
An image processing and deep learning method for identifying different types of rock images was proposed. Preprocessing, such as rock image acquisition, gray scaling, Gaussian blurring, and feature dimensionality reduction, was conducted to extract useful feature information, and rock images were recognized and classified using a TensorFlow-based convolutional neural network (CNN) and PyQt5. A rock image dataset was established and separated into training, validation, and test sets. The framework was subsequently compiled and trained. The classification approach was evaluated using image data from the validation and test datasets, and key metrics, such as accuracy, precision, and recall, were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicated that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 to recognize and classify rock images has an accuracy rate of up to 98.8%, and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to more efficiently and accurately recognize rock samples. Moreover, it can be matched with the intelligent support design system to effectively improve the reliability and economy of the support scheme. The system can serve as a reference for supporting the design of other mining and underground space projects.
Keywords: rock picture recognition; convolutional neural network; intelligent support for roadways; deep learning; lithology determination
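The preprocessing steps named in this abstract, gray scaling and Gaussian blurring, are standard operations and can be reproduced in a few lines of NumPy. This is a generic sketch of those operations, not the paper's exact pipeline; the image sizes and kernel parameters are illustrative.

```python
import numpy as np

def to_gray(rgb):
    """Luminance grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur with reflective padding: the smoothing step
    typically applied before feature extraction in pipelines like this one."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    # horizontal then vertical pass (the Gaussian kernel is separable)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

rng = np.random.default_rng(1)
rock_rgb = rng.random((16, 16, 3))   # stand-in for a rock photograph
gray = to_gray(rock_rgb)
smooth = gaussian_blur(gray, sigma=1.0)
```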
An Improved Forest Fire Detection Model Using Audio Classification and Machine Learning
Authors: Kemahyanto Exaudi, Deris Stiawan, Bhakti Yudho Suprapto, Hanif Fakhrurroja, MohdYazid Idris, Tami AAlghamdi, Rahmat Budiarto. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 2062-2085 (24 pages)
Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
Keywords: Audio classification; convolutional neural network (CNN); environmental science; forest fire detection; machine learning; spectrogram analysis; IoT
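The front end described in this abstract turns a waveform into a spectrogram before CNN classification. The sketch below shows the simplified first stage only: frame the signal, apply a Hann window, and take the FFT magnitude per frame. A real pipeline like the paper's would additionally map the FFT bins onto a Mel filterbank, which is omitted here; the sample rate and frame sizes are illustrative.

```python
import numpy as np

def magnitude_spectrogram(signal, frame_len=256, hop=128):
    """Windowed short-time FFT magnitudes, shape (n_frames, frame_len//2+1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone, one second
spec = magnitude_spectrogram(tone)
peak_bin = int(spec.mean(axis=0).argmax())
peak_hz = peak_bin * sr / 256         # bin spacing = sr / frame_len
```

With these settings the 1 kHz tone lands exactly on bin 32 (8000 / 256 = 31.25 Hz spacing), which makes the sketch easy to sanity-check.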
Machine Learning Enabled Reusable Adhesion, Entangled Network-Based Hydrogel for Long-Term, High-Fidelity EEG Recording and Attention Assessment (Cited: 1)
Authors: Kai Zheng, Chengcheng Zheng, Lixian Zhu, Bihai Yang, Xiaokun Jin, Su Wang, Zikai Song, Jingyu Liu, Yan Xiong, Fuze Tian, Ran Cai, Bin Hu. 《Nano-Micro Letters》, 2025, Issue 11, pp. 514-529 (16 pages)
Due to their high mechanical compliance and excellent biocompatibility, conductive hydrogels exhibit significant potential for applications in flexible electronics. However, as the demand for high sensitivity, superior mechanical properties, and strong adhesion performance continues to grow, many conventional fabrication methods remain complex and costly. Herein, we propose a simple and efficient strategy to construct an entangled network hydrogel through a liquid-metal-induced cross-linking reaction. The resulting hydrogel demonstrates outstanding properties, including exceptional stretchability (1643%), high tensile strength (366.54 kPa), toughness (350.2 kJ m^(−3)), and relatively low mechanical hysteresis. The hydrogel exhibits long-term stable reusable adhesion (104 kPa), enabling conformal and stable adhesion to human skin. This capability allows it to effectively capture high-quality epidermal electrophysiological signals with a high signal-to-noise ratio (25.2 dB) and low impedance (310 ohms). Furthermore, by integrating advanced machine learning algorithms, the system achieves an attention classification accuracy of 91.38%, which will significantly impact fields like education, healthcare, and artificial intelligence.
Keywords: Entangled network; reusable adhesion; epidermal sensor; machine learning; attention assessment
Personalized Generative AI Services Through Federated Learning in 6G Edge Networks (Cited: 1)
Authors: Li Zeshen, Chen Zihan, Hu Xinyi, Howard H. Yang. 《China Communications》, 2025, Issue 7, pp. 1-13 (13 pages)
Network architectures assisted by Generative Artificial Intelligence (GAI) are envisioned as foundational elements of the sixth-generation (6G) communication system. To deliver ubiquitous intelligent services and meet diverse service requirements, the 6G network architecture should offer personalized services to various mobile devices. Federated learning (FL) with personalized local training, as a privacy-preserving machine learning (ML) approach, can be applied to address these challenges. In this paper, we propose a meta-learning-based personalized FL (PFL) method that improves both communication and computation efficiency by utilizing over-the-air computations. Its "pretraining-and-fine-tuning" principle makes it particularly suitable for enabling edge nodes to access personalized GAI services while preserving local privacy. Experiment results demonstrate the superior performance and efficacy of the proposed algorithm, and notably indicate enhanced communication efficiency without compromising accuracy.
Keywords: generative artificial intelligence; personalized federated learning; 6G networks
A Hybrid Deep Learning Multi-Class Classification Model for Alzheimer’s Disease Using Enhanced MRI Images
Author: Ghadah Naif Alwakid. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 797-821 (25 pages)
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a Hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer’s diagnosis by processing MRI scans. Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model gives a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms the state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis results for every AD stage. The model demonstrates apparent separation abilities between AD progression stages according to the results of the confusion matrix analysis. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
Keywords: Alzheimer’s disease; deep learning; MRI images; MobileNetV2; contrast-limited adaptive histogram equalization (CLAHE); enhanced super-resolution generative adversarial networks (ESRGAN); multi-class classification
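CLAHE, the enhancement method used in this paper, is a tiled, clip-limited refinement of plain histogram equalization. The global version below shows the cumulative-distribution remapping that both share; it is a simplified sketch, not CLAHE itself, and the synthetic low-contrast image stands in for an MRI slice.

```python
import numpy as np

def equalize(img):
    """Global histogram equalization for an 8-bit image: remap intensities
    so the cumulative distribution becomes approximately linear. CLAHE
    applies the same remapping per tile with a clip limit on the histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast synthetic image: intensities squeezed into [100, 140].
rng = np.random.default_rng(2)
low = rng.integers(100, 141, size=(32, 32)).astype(np.uint8)
eq = equalize(low)
```

After equalization the intensity range stretches to the full 0-255 span, which is exactly the contrast boost such preprocessing contributes before feature extraction.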
Multi-QoS routing algorithm based on reinforcement learning for LEO satellite networks (Cited: 1)
Authors: ZHANG Yifan, DONG Tao, LIU Zhihui, JIN Shichao. 《Journal of Systems Engineering and Electronics》, 2025, Issue 1, pp. 37-47 (11 pages)
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and dynamic network topology, which have brought many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies to fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks, which guarantees that high-assurance services are prioritized under limited satellite resources, while considering the load-balancing performance of the satellite networks for low-assurance services to ensure full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the link.
Keywords: low Earth orbit (LEO) satellite network; reinforcement learning; multi-quality of service (QoS); routing algorithm
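The underlying principle of reinforcement-learning routing, learning next-hop values from experienced link costs, can be sketched with tabular Q-learning on a toy topology. The topology, link costs, and hyperparameters below are illustrative assumptions, not the paper's multi-QoS algorithm, which additionally handles service priorities and load balancing.

```python
import random

# Toy topology: delay cost per directed link; node 3 is the destination.
links = {0: {1: 1.0, 2: 5.0}, 1: {3: 1.0}, 2: {3: 1.0}}
Q = {n: {nb: 0.0 for nb in nbrs} for n, nbrs in links.items()}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(500):                     # training episodes
    node = 0
    while node != 3:
        nbrs = list(links[node])
        # epsilon-greedy: usually follow the best-known (lowest-cost) hop
        if random.random() < eps:
            nxt = random.choice(nbrs)
        else:
            nxt = min(nbrs, key=lambda a: Q[node][a])
        cost = links[node][nxt]
        future = 0.0 if nxt == 3 else min(Q[nxt].values())
        Q[node][nxt] += alpha * (cost + gamma * future - Q[node][nxt])
        node = nxt

best_first_hop = min(Q[0], key=lambda a: Q[0][a])
```

The learned table steers traffic through the cheap 0→1→3 path while still having explored the expensive alternative.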
Secure Malicious Node Detection in Decentralized Healthcare Networks Using Cloud and Edge Computing with Blockchain-Enabled Federated Learning
Authors: Raj Sonani, Reham Alhejaili, Pushpalika Chatterjee, Khalid Hamad Alnafisah, Jehad Ali. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 9, pp. 3169-3189 (21 pages)
Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as secure communication issues, privacy concerns, and the presence of malicious nodes. Existing machine and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND) model. It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models like random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
Keywords: Authentication; blockchain; deep learning; federated learning; healthcare network; machine learning; wearable sensor nodes
When Communication Networks Meet Federated Learning for Intelligence Interconnecting:A Comprehensive Survey and Future Perspective
Authors: Sha Zongxuan, Huo Ru, Sun Chuang, Wang Shuo, Huang Tao, F. Richard Yu. 《China Communications》, 2025, Issue 7, pp. 74-94 (21 pages)
With the rapid development of network technologies, a large number of deployed edge devices and information systems generate massive amounts of data, which provide good support for the advancement of data-driven intelligent models. However, these data often contain sensitive information of users. Federated learning (FL), as a privacy-preserving machine learning setting, allows users to obtain a well-trained model without sending privacy-sensitive local data to the central server. Despite the promising prospect of FL, several significant research challenges need to be addressed before widespread deployment, including network resource allocation, model security, model convergence, etc. In this paper, we first provide a brief survey of the work that has been done on FL and discuss the motivations for Communication Networks (CNs) and FL to mutually enable each other. We analyze the support of network technologies for FL, which requires frequent communication and emphasizes security, as well as studies on the intelligence of many network scenarios and the improvement of network performance and security by FL-based methods. At last, some challenges and broader perspectives are explored.
Keywords: communication networks; federated learning; intelligence interconnecting; machine learning; privacy preservation
An Ensembled Multi-Layer Automatic-Constructed Weighted Online Broad Learning System for Fault Detection in Cellular Networks
Authors: Wang Qi, Pan Zhiwen, Liu Nan, You Xiaohu. 《China Communications》, 2025, Issue 8, pp. 150-167 (18 pages)
6G is expected to support more intelligent networks, and this trend attaches importance to the self-healing capability when degradation emerges in cellular networks. As a primary component of self-healing networks, fault detection is investigated in this paper. Considering its fast response and low time and computational consumption, the Online Broad Learning System (OBLS) is applied for the first time to identify outages in cellular networks. In addition, the Automatic-constructed Online Broad Learning System (AOBLS) is put forward to rationalize its structure and consequently avoid over-fitting and under-fitting. Furthermore, a multi-layer classification structure is proposed to further improve the classification performance. To face the challenges caused by imbalanced data in fault detection problems, a novel weighting strategy is derived to achieve the Multi-layer Automatic-constructed Weighted Online Broad Learning System (MAWOBLS) and ensemble learning with a retrained Support Vector Machine (SVM), denoted as EMAWOBLS, to better handle this imbalance issue. Simulation results show that the proposed algorithm has excellent performance in detecting faults with satisfactory time usage.
Keywords: broad learning system (BLS); cell outage detection; cellular network fault detection; ensemble learning; imbalanced classification; online broad learning system (OBLS); self-healing network; weighted broad learning system (WBLS)
Federated Learning for Vision-Based Applications in 6G Networks: A Simulation-Based Performance Study
Authors: Manuel J.C.S. Reis, Nishu Gupta. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 12, pp. 4225-4243 (19 pages)
The forthcoming sixth generation (6G) of mobile communication networks is envisioned to be AI-native, supporting intelligent services and pervasive computing at unprecedented scale. Among the key paradigms enabling this vision, Federated Learning (FL) has gained prominence as a distributed machine learning framework that allows multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized storage. This capability is particularly attractive for vision-based applications, where image and video data are both sensitive and bandwidth-intensive. However, the integration of FL with 6G networks presents unique challenges, including communication bottlenecks, device heterogeneity, and trade-offs between model accuracy, latency, and energy consumption. In this paper, we developed a simulation-based framework to investigate the performance of FL in representative vision tasks under 6G-like environments. We formalize the system model, incorporating both the federated averaging (FedAvg) training process and a simplified communication cost model that captures bandwidth constraints, packet loss, and variable latency across edge devices. Using standard image datasets (e.g., MNIST, CIFAR-10) as benchmarks, we analyze how factors such as the number of participating clients, degree of data heterogeneity, and communication frequency influence convergence speed and model accuracy. Additionally, we evaluate the effectiveness of lightweight communication-efficient strategies, including local update tuning and gradient compression, in mitigating network overhead. The experimental results reveal several key insights: (i) communication limitations can significantly degrade FL convergence in vision tasks if not properly addressed; (ii) judicious tuning of local training epochs and client participation levels enables notable improvements in both efficiency and accuracy; and (iii) communication-efficient FL strategies provide a promising pathway to balance performance with the stringent latency and reliability requirements expected in 6G. These findings highlight the synergistic role of AI and next-generation networks in enabling privacy-preserving, real-time vision applications, and they provide concrete design guidelines for researchers and practitioners working at the intersection of FL and 6G.
Keywords: Federated learning; 6G networks; edge intelligence; vision-based applications; communication-efficient learning; privacy-preserving AI
APFed: Adaptive personalized federated learning for intrusion detection in maritime meteorological sensor networks
Authors: Xin Su, Guifu Zhang. 《Digital Communications and Networks》, 2025, Issue 2, pp. 401-411 (11 pages)
With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks (MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence (AI) intrusion detection techniques turn intrusion detection into a classification problem, where AI excels. These techniques assume sufficient high-quality instances for model construction, which is often unsatisfactory for real-world operation with limited attack instances and constantly evolving characteristics. This paper proposes an Adaptive Personalized Federated learning (APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed (Non-IID) data are mitigated, enabling the intrusion detection model to possess personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks with an effective adaptation to the MMSN environment. Finally, extensive experiments on a classical network dataset show that the attack classification accuracy is improved by about 5% compared to most baselines in the global scenarios.
Keywords: Intrusion detection; maritime meteorological sensor network; federated learning; personalized model; deep learning
Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. 《Digital Communications and Networks》, 2025, Issue 1, pp. 60-70 (11 pages)
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). These CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for various resources occupied by providing services as revenue, and the service efficiency and energy consumption as cost, thus formulating a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of the placement of CCs. In particular, through the introduction of GCN, the features of the association relationships between multiple containers in CCs can be effectively extracted to improve the quality of placement. The experiment results show that under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: Edge computing; network virtualization; container cluster; deep reinforcement learning; graph convolutional network
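The RL-GCN framework itself is not reproduced in the abstract, but the GCN propagation it relies on — each container aggregating features from its neighbors in the cluster graph — can be sketched in plain Python. The adjacency matrix, features, and weights below are made-up illustrations, not the paper's data:

```python
import math

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: ReLU(D^-1/2 (A+I) D^-1/2 . H . W)."""
    n, fdim, odim = len(adj), len(feats[0]), len(weight[0])
    # Add self-loops, then symmetrically normalize by node degree
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
    # Aggregate neighbor features, apply weights, then ReLU
    agg = [[sum(norm[i][k] * feats[k][f] for k in range(n)) for f in range(fdim)] for i in range(n)]
    return [[max(0.0, sum(agg[i][f] * weight[f][o] for f in range(fdim))) for o in range(odim)]
            for i in range(n)]

# Hypothetical 3-container cluster: containers 0-1 and 1-2 are interconnected
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # e.g. per-container resource demands
weight = [[1.0, 0.0], [0.0, 1.0]]              # identity weights for the sketch
emb = gcn_layer(adj, feats, weight)
```

In the paper's setting these per-container embeddings would feed the Actor-Critic policy that scores candidate placements; here they are just the normalized neighborhood averages.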
Microseismic Event Recognition and Transfer Learning Based on Convolutional Neural Network and Attention Mechanisms
17
Authors: Jin Shu, Zhang Shichao, Gao Ya, Yu Benli, Zhen Shenglai 《Applied Geophysics》 2025, No. 4, pp. 1220-1232, 1497 (14 pages)
Microseismic monitoring technology is widely used in tunnel and coal mine safety production. For signals generated by ultra-weak microseismic events, traditional sensors are limited in detection sensitivity. Given the complex engineering environment, automatic multi-class classification of microseismic data is highly desirable. In this study, we use acceleration sensors to collect signals and combine an improved Visual Geometry Group (VGG) network with a convolutional block attention module to obtain a new network structure, termed CNN_BAM, for automatic classification and identification of microseismic events. We train and validate the network on a dataset collected from the Hanjiang-to-Weihe River Diversion Project. Results show that the CNN_BAM model exhibits good feature-extraction ability, achieving a recognition accuracy of 99.29% and surpassing all its counterparts; the stability and accuracy of the classification algorithm improve remarkably. In addition, after fine-tuning and migration to the Pan Ⅱ Mine Project, the network demonstrates reliable generalization performance, reflecting its adaptability across different projects and promising application prospects.
Keywords: microseismic, convolutional neural networks, multi-classification, attention mechanism, transfer learning
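The convolutional block attention module the abstract mentions gates each feature-map channel by a learned importance weight. A minimal sketch of the channel-attention half, with the shared MLP reduced to an identity and purely illustrative feature values:

```python
import math

def channel_attention(fmap):
    """CBAM-style channel attention: squeeze each channel by average- and
    max-pooling, combine through a (here trivial) shared transform, and
    gate the channel with a sigmoid weight."""
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    weights = []
    for ch in fmap:                      # ch: 2-D spatial map for one channel
        flat = [v for row in ch for v in row]
        avg, mx = sum(flat) / len(flat), max(flat)
        weights.append(sig(avg + mx))    # shared MLP omitted in this sketch
    return [[[v * w for v in row] for row in ch] for ch, w in zip(fmap, weights)]

# Toy 2-channel 2x2 feature map: channel 0 carries signal, channel 1 is flat
fmap = [[[1.0, 2.0], [3.0, 4.0]],
        [[0.1, 0.1], [0.1, 0.1]]]
out = channel_attention(fmap)
```

The high-energy channel keeps nearly its full magnitude while the flat channel is attenuated, which is the behavior that lets CNN_BAM emphasize channels carrying microseismic signatures.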
Addressing Modern Cybersecurity Challenges: A Hybrid Machine Learning and Deep Learning Approach for Network Intrusion Detection
18
Authors: Khadija Bouzaachane, El Mahdi El Guarmah, Abdullah M. Alnajim, Sheroz Khan 《Computers, Materials & Continua》 2025, No. 8, pp. 2391-2410 (20 pages)
The rapid increase in the number of Internet of Things (IoT) devices, coupled with a rise in sophisticated cyberattacks, demands robust intrusion detection systems. This study presents a holistic, intelligent intrusion detection system that integrates machine learning (ML) and deep learning (DL) techniques to improve the protection of contemporary information technology (IT) systems. Unlike traditional signature-based or single-model methods, the system combines the strengths of ensemble learning for binary classification and deep learning for multi-class classification, providing a more nuanced and adaptable defense. The research utilizes the NF-UQ-NIDS-v2 dataset, a recent, comprehensive benchmark for evaluating network intrusion detection systems (NIDS). Specifically, we use ensemble learning algorithms (Random Forest, Gradient Boosting, AdaBoost, and XGBoost) for binary classification, while deep learning architectures address the complexities of multi-class classification, allowing fine-grained identification of intrusion types. To mitigate class imbalance, a common problem in multi-class intrusion detection that biases model performance, we use oversampling and data augmentation to ensure equitable class representation. The results demonstrate the efficacy of the proposed hybrid ML-DL system, which achieves significant improvements in intrusion detection accuracy and reliability, contributing a more robust and adaptable intrusion detection solution to cybersecurity.
Keywords: network intrusion detection systems (NIDS), NF-UQ-NIDS-v2 dataset, ensemble learning, decision tree, K-means, SMOTE, deep learning
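The oversampling step the keywords attribute to SMOTE synthesizes new minority-class points by interpolating between a minority sample and one of its nearest minority neighbours. A minimal sketch, with hypothetical two-feature flow records rather than the paper's NF-UQ-NIDS-v2 features:

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Minimal SMOTE: pick a random minority point, find its k nearest
    minority neighbours, and interpolate a synthetic point between the
    chosen point and one random neighbour."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbours = sorted((q for q in minority if q is not p),
                            key=lambda q: dist(p, q))[:k]
        q = rng.choice(neighbours)
        lam = rng.random()   # interpolation factor in [0, 1)
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(p, q)))
    return synthetic

# Hypothetical minority-class flows (e.g. scaled packet count, duration)
minority = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25)]
new_points = smote_sample(minority)
```

Because every synthetic point is a convex combination of two real minority samples, it stays inside the minority region rather than being a naive duplicate, which is what reduces the classifier's bias toward majority classes.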
Deep Q-Learning Driven Protocol for Enhanced Border Surveillance with Extended Wireless Sensor Network Lifespan
19
Authors: Nimisha Rajput, Amit Kumar, Raghavendra Pal, Nishu Gupta, Mikko Uitto, Jukka Mäkelä 《Computer Modeling in Engineering & Sciences》 2025, No. 6, pp. 3839-3859 (21 pages)
Wireless Sensor Networks (WSNs) play a critical role in automated border surveillance systems, where continuous monitoring is essential. However, limited energy resources in sensor nodes lead to frequent network failures and reduced coverage over time. To address this issue, this paper presents an energy-efficient protocol based on deep Q-learning (DQN), developed specifically to prolong the operational lifespan of WSNs used in border surveillance. By harnessing the adaptive power of DQN, the proposed protocol dynamically adjusts node activity and communication patterns, ensuring optimal energy usage while maintaining high coverage, connectivity, and data accuracy. The system is modeled with 100 sensor nodes deployed over a 1000 m×1000 m area, featuring a strategically positioned sink node. Extensive simulations show that, compared with similar recent protocols, network lifetime increases by 9.75%, throughput increases by 8.85%, and average delay decreases by 9.45%, demonstrating the robustness and efficiency of the protocol in realistic scenarios and its potential to improve border surveillance operations.
Keywords: wireless sensor networks (WSNs), energy efficiency, reinforcement learning, network lifetime, dynamic node management, autonomous surveillance
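At the core of any Q-learning-driven scheduler is the Bellman update that trades immediate reward against discounted future value. A tabular sketch of that update for a hypothetical sleep/transmit decision (a DQN, as used in the paper, replaces the table with a neural network trained on the same target; the states, actions, and reward below are illustrative only):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + gamma * max(q[next_state].values())
    q[state][action] += alpha * (target - q[state][action])

# Hypothetical 2-state model: 'low'/'high' residual energy; actions sleep/transmit
q = {s: {"sleep": 0.0, "transmit": 0.0} for s in ("low", "high")}
# Transmitting from a high-energy state delivered data (reward 1.0) but
# drained the node into the low-energy state
q_update(q, "high", "transmit", reward=1.0, next_state="low")
```

Repeated updates of this kind let the policy learn, for example, that transmitting from low-energy states shortens network lifetime, which is the behavior the protocol exploits to keep coverage high.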
Graph Neural Networks Empowered Origin-Destination Learning for Urban Traffic Prediction
20
Authors: Chuanting Zhang, Guoqing Ma, Liang Zhang, Basem Shihada 《CAAI Transactions on Intelligence Technology》 2025, No. 4, pp. 1062-1076 (15 pages)
Urban traffic prediction with high precision is the unremitting pursuit of intelligent transportation systems and is instrumental in bringing smart cities into reality. The fundamental challenges for traffic prediction lie in accurately modelling spatial and temporal traffic dynamics. Existing approaches mainly focus on modelling the traffic data itself but do not explore the traffic correlations implicit in origin-destination (OD) data. In this paper, we propose STOD-Net, a dynamic spatial-temporal OD feature-enhanced deep network, to simultaneously predict the in-traffic and out-traffic of every region of a city. We model the OD data as dynamic graphs and adopt graph neural networks in STOD-Net to learn a low-dimensional representation for each region. Based on the region feature, we design a gating mechanism and apply it to traffic feature learning to explicitly capture spatial correlations. To further capture the complicated spatial and temporal dependencies among regions, we propose a novel joint feature-learning block in STOD-Net and feed the hybrid OD features to each block to make the learning process spatiotemporal-aware. We evaluate the effectiveness of STOD-Net on two benchmark datasets; experimental results demonstrate that it outperforms the state-of-the-art by approximately 5% in prediction accuracy and improves prediction stability by up to 80% in terms of standard deviation.
Keywords: deep neural networks, origin-destination learning, spatial-temporal modeling, traffic prediction
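The gating mechanism the abstract describes modulates each traffic-feature channel by a sigmoid of the region's OD-derived embedding, so OD context decides how much of each traffic signal passes through. A minimal elementwise sketch (the feature vectors are illustrative placeholders, not learned embeddings):

```python
import math

def od_gate(traffic_feat, od_feat):
    """Gate traffic features with an OD embedding:
    out_i = traffic_i * sigmoid(od_i), channel by channel."""
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [t * sig(o) for t, o in zip(traffic_feat, od_feat)]

# Illustrative per-region features: strong, neutral, and weak OD evidence
traffic_feat = [2.0, -1.0, 0.5]
od_feat = [4.0, 0.0, -4.0]
gated = od_gate(traffic_feat, od_feat)
```

Channels backed by strong OD correlations pass almost unchanged, a neutral channel is halved, and a channel the OD data does not support is nearly suppressed — the spatial-correlation filtering STOD-Net applies before its joint feature-learning blocks.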