Journal Articles
541,835 articles found
Multi-dimension and multi-modal rolling mill vibration prediction model based on multi-level network fusion
1
Authors: CHEN Shu-zong, LIU Yun-xiao, WANG Yun-long, QIAN Cheng, HUA Chang-chun, SUN Jie. 《Journal of Central South University》 SCIE EI CAS CSCD, 2024, Issue 9, pp. 3329-3348 (20 pages)
Mill vibration is a common problem in rolling production, which directly affects the thickness accuracy of the strip and may even lead to strip fracture accidents in serious cases. The existing vibration prediction models do not consider the features contained in the data, resulting in limited improvement of model accuracy. To address these challenges, this paper proposes a multi-dimensional multi-modal cold rolling vibration time series prediction model (MDMMVPM) based on the deep fusion of multi-level networks. In the model, the long-term and short-term modal features of multi-dimensional data are considered, and appropriate prediction algorithms are selected for different data features. Based on the established prediction model, the effects of tension and rolling force on mill vibration are analyzed. Taking the 5th stand of a cold mill in a steel mill as the research object, the innovative model is applied to predict the mill vibration for the first time. The experimental results show that the correlation coefficient (R²) of the proposed model is 92.5%, and the root-mean-square error (RMSE) is 0.0011, which significantly improves the modeling accuracy compared with existing models. The proposed model is also suitable for the hot rolling process, which provides a new method for the prediction of strip rolling vibration.
Keywords: rolling mill vibration; multi-dimension data; multi-modal data; convolutional neural network; time series prediction
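The long-term/short-term modal split the abstract describes can be illustrated with a minimal moving-average decomposition: a slow trend stands in for the long-term mode and the residual for the short-term mode, each of which could then be handed to a different predictor. This is a generic sketch, not the authors' MDMMVPM.

```python
import numpy as np

def decompose(series, window=5):
    """Split a vibration series into a slow trend (long-term mode)
    and a fast residual (short-term mode) via a moving average."""
    kernel = np.ones(window) / window
    # 'same' mode keeps the trend aligned with the original series
    trend = np.convolve(series, kernel, mode="same")
    residual = series - trend
    return trend, residual

# toy vibration signal: slow drift plus fast oscillation
t = np.linspace(0, 1, 200)
series = 0.5 * t + 0.05 * np.sin(40 * np.pi * t)
trend, residual = decompose(series)
# the two modes recombine into the original signal
recombined = trend + residual
```

The decomposition is lossless by construction, so separate models for `trend` and `residual` can be summed back into a full-signal forecast.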
Multi-relation spatiotemporal graph residual network model with multi-level feature attention:A novel approach for landslide displacement prediction
2
Authors: Ziqian Wang, Xiangwei Fang, Wengang Zhang, Xuanming Ding, Luqi Wang, Chao Chen. 《Journal of Rock Mechanics and Geotechnical Engineering》 2025, Issue 7, pp. 4211-4226 (16 pages)
Accurate prediction of landslide displacement is crucial for effective early warning of landslide disasters. While most existing prediction methods focus on time-series forecasting for individual monitoring points, there is limited research on the spatiotemporal characteristics of landslide deformation. This paper proposes a novel Multi-Relation Spatiotemporal Graph Residual Network with Multi-Level Feature Attention (MFA-MRSTGRN) that effectively improves the prediction performance of landslide displacement through spatiotemporal fusion. The model integrates internal seepage factors as data feature enhancements with external triggering factors, allowing accurate capture of the complex spatiotemporal characteristics of landslide displacement and the construction of a multi-source heterogeneous dataset. The MFA-MRSTGRN model incorporates dynamic graph theory and four key modules: multi-level feature attention, temporal-residual decomposition, spatial multi-relational graph convolution, and spatiotemporal fusion prediction. This comprehensive approach enables efficient analysis of multi-source heterogeneous datasets, facilitating adaptive exploration of the evolving multi-relational, multi-dimensional spatiotemporal complexities in landslides. When applying this model to predict the displacement of the Liangshuijing landslide, the MFA-MRSTGRN model surpasses traditional models, such as random forest (RF), long short-term memory (LSTM), and spatial-temporal graph convolutional network (ST-GCN) models, in terms of various evaluation metrics, including mean absolute error (MAE = 1.27 mm), root mean square error (RMSE = 1.49 mm), mean absolute percentage error (MAPE = 0.026), and R-squared (R² = 0.88). Furthermore, feature ablation experiments indicate that incorporating internal seepage factors improves the predictive performance of landslide displacement models. This research provides an advanced and reliable method for landslide displacement prediction.
Keywords: landslide displacement prediction; spatiotemporal fusion; dynamic graph; data feature enhancement; multi-level feature attention
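The spatial multi-relational graph convolution module builds on the standard GCN propagation rule H' = σ(D̂^{-1/2}(A+I)D̂^{-1/2} H W). A minimal single-relation numpy sketch follows (hypothetical sizes; this is the generic building block, not the MFA-MRSTGRN itself):

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step with self-loops and symmetric
    degree normalization: H' = relu(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features
    return np.maximum(propagated @ weights, 0.0)    # ReLU

# 4 monitoring points in a chain, 3 input features, 2 output features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
weights = rng.normal(size=(3, 2))
out = gcn_layer(adj, features, weights)
```

A multi-relational variant would run one such layer per edge type (e.g. spatial adjacency, hydrological connectivity) and fuse the outputs.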
A Balanced Traffic Signal Scheduling Algorithm Based on Improved Deep Q Networks
3
Author: HE Daokun. 《机械设计与制造》 (Machinery Design & Manufacture), Peking University Core, 2025, Issue 4, pp. 135-140 (6 pages)
To further alleviate traffic congestion at intersections during urban peak hours and achieve balanced vehicle throughput across all approaches, a balanced traffic signal scheduling algorithm based on improved Deep Q Networks is proposed. The features most relevant to traffic signal scheduling at intersections are extracted, and traffic signal models are established for a single one-way intersection and for linearly linked two-way intersections, on which a traffic signal scheduling optimization model is built. To address the shortcomings of the Deep Q Networks algorithm when applied to traffic signal scheduling, such as poor convergence and overestimation, dueling-network, double-network, and gradient-update improvements are made to Deep Q Networks, and a matching balanced scheduling algorithm is proposed. Simulation comparison with classical Deep Q Networks verifies the applicability and superiority of the proposed algorithm for traffic signal scheduling. Based on urban road data, simulations are carried out for the two scenarios; the results show that the algorithm can effectively shorten vehicle queue lengths at intersections, balance traffic flow through each intersection, alleviate congestion in peak travel directions, and improve the overall benefit of intersection traffic signal scheduling.
Keywords: traffic signal scheduling; intersection; Deep Q Networks; deep reinforcement learning; intelligent transportation
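The double-network improvement mentioned above decouples action selection from action evaluation to curb overestimation: the online network picks the greedy next action, while the target network evaluates it. A minimal numpy sketch of the double-DQN bootstrap target (illustrative values; not the paper's traffic-signal implementation):

```python
import numpy as np

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double-DQN bootstrap target: the online network selects the
    greedy next action, the target network evaluates its value."""
    best_action = np.argmax(q_online_next, axis=1)
    evaluated = q_target_next[np.arange(len(best_action)), best_action]
    return reward + gamma * (1.0 - done) * evaluated

# two transitions, three phase-switch actions at the intersection
reward = np.array([1.0, 0.5])        # e.g. reward for reduced queue length
done = np.array([0.0, 1.0])          # second episode terminates
q_online_next = np.array([[0.2, 0.9, 0.1],
                          [0.4, 0.3, 0.8]])
q_target_next = np.array([[0.1, 0.7, 0.3],
                          [0.2, 0.6, 0.5]])
targets = double_dqn_target(reward, done, q_online_next, q_target_next)
```

For the first transition the online network selects action 1 but the target network's value 0.7 is used, giving 1.0 + 0.99 × 0.7; a plain DQN would instead bootstrap from its own maximum, which biases values upward.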
LATITUDES Network: A Repository of Validity (Risk-of-Bias) Assessment Tools for Improving the Robustness of Evidence Synthesis
4
Authors: LIAO Mingyu, XIONG Yiquan, ZHAO Peng, GUO Jin, CHEN Jingwen, LIU Chunrong, JIA Yulong, REN Yan, SUN Xin, TAN Jing. 《中国循证医学杂志》 (Chinese Journal of Evidence-Based Medicine), Peking University Core, 2025, Issue 5, pp. 614-620 (7 pages)
Evidence synthesis is the process of systematically collecting, analyzing, and integrating existing research evidence; its results depend on the quality of the included primary studies, and validity assessment (also known as risk-of-bias assessment) is an important means of evaluating the quality of those primary studies. Many validity assessment tools exist, but some lack a rigorous development and evaluation process, and applying inappropriate validity assessment tools during evidence synthesis may compromise the accuracy of research conclusions and mislead clinical practice. To address this problem, in September 2023 researchers at the University of Bristol in the UK launched the LATITUDES Network, a one-stop resource site for validity assessment tools. The site is dedicated to collecting, organizing, and promoting validity assessment tools in order to improve the accuracy of validity assessment of primary studies and enhance the robustness and reliability of evidence synthesis. This article introduces in detail the background of the LATITUDES Network, the validity assessment tools it includes, and the training resources for using these tools, so that domestic researchers can learn more about the LATITUDES Network, better apply appropriate validity assessment tools to literature quality appraisal, and draw on it as a reference when developing validity assessment tools.
Keywords: validity assessment; risk of bias; evidence synthesis; LATITUDES Network
Application of virtual reality technology improves the functionality of brain networks in individuals experiencing pain (cited 3 times)
5
Author: Takahiko Nagamine. 《World Journal of Clinical Cases》 SCIE, 2025, Issue 3, pp. 66-68 (3 pages)
Medical procedures are inherently invasive and carry the risk of inducing pain to the mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks: the salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
Keywords: virtual reality; pain; anxiety; salience network; default mode network
EGSNet:An Efficient Glass Segmentation Network Based on Multi-Level Heterogeneous Architecture and Boundary Awareness
6
Authors: Guojun Chen, Tao Cui, Yongjie Hou, Huihui Li. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 12, pp. 3969-3987 (19 pages)
Existing glass segmentation networks have high computational complexity and large memory occupation, leading to high hardware requirements and time overheads for model inference, which is not conducive to efficiency-seeking real-time tasks such as autonomous driving. The inefficiency of these models is mainly due to employing homogeneous modules to process features of different layers. These modules require computationally intensive convolutions and weight calculation branches with numerous parameters to accommodate the differences in information across layers. We propose an efficient glass segmentation network (EGSNet) based on multi-level heterogeneous architecture and boundary awareness to balance model performance and efficiency. EGSNet divides the feature layers from different stages into low-level understanding, semantic-level understanding, and global understanding with boundary guidance. Based on the information differences among the different layers, we further propose the multi-angle collaborative enhancement (MCE) module, which extracts detailed information from shallow features, and the large-scale contextual feature extraction (LCFE) module, which captures semantic logic through deep features. The models are trained and evaluated on the glass segmentation datasets HSO (Home-Scene-Oriented) and Trans10k-stuff, respectively, and EGSNet achieves the best efficiency and performance compared to advanced methods. On the HSO test set, the IoU, Fβ, MAE (mean absolute error), and BER (balance error rate) of EGSNet are 0.804, 0.847, 0.084, and 0.085, and the computational cost is only 27.15 GFLOPs (giga floating-point operations). Experimental results show that EGSNet significantly improves the efficiency of the glass segmentation task with better performance.
Keywords: image segmentation; multi-level heterogeneous architecture; feature differences
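BER, one of the metrics reported above, averages the error rates on glass and background pixels separately, so class imbalance cannot hide mistakes on the rarer class. A minimal sketch of the metric on toy masks (not EGSNet itself):

```python
import numpy as np

def balance_error_rate(pred, gt):
    """Balance Error Rate: mean of the error rates on positive
    (glass) and negative (background) pixels."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    pos, neg = gt.sum(), (~gt).sum()
    return 1.0 - 0.5 * (tp / pos + tn / neg)

gt = np.array([[1, 1, 0, 0]])
perfect = balance_error_rate(np.array([[1, 1, 0, 0]]), gt)
half = balance_error_rate(np.array([[1, 0, 0, 0]]), gt)  # misses one positive
```

Missing one of two glass pixels costs a full 0.25 here even though 3 of 4 pixels are correct, which is exactly the imbalance-resistance the metric is designed for.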
Deep neural network based on multi-level wavelet and attention for structured illumination microscopy
7
Authors: Yanwei Zhang, Song Lang, Xuan Cao, Hanqing Zheng, Yan Gong. 《Journal of Innovative Optical Health Sciences》 SCIE EI CSCD, 2024, Issue 2, pp. 12-23 (12 pages)
Structured illumination microscopy (SIM) is a popular and powerful super-resolution (SR) technique in biomedical research. However, the conventional reconstruction algorithm for SIM heavily relies on accurate prior knowledge of the illumination patterns and the signal-to-noise ratio (SNR) of raw images. To obtain high-quality SR images, several raw images need to be captured under high fluorescence levels, which further restricts SIM's temporal resolution and its applications. Deep learning (DL) is a data-driven technology that has been used to expand the limits of optical microscopy. In this study, we propose a deep neural network based on multi-level wavelet and attention mechanism (MWAM) for SIM. Our results show that the MWAM network can extract high-frequency information contained in SIM raw images and accurately integrate it into the output image, resulting in superior SR images compared to those generated using wide-field images as input data. We also demonstrate that the number of SIM raw images can be reduced to three, with one image in each illumination orientation, to achieve the optimal tradeoff between temporal and spatial resolution. Furthermore, our MWAM network exhibits superior reconstruction ability on low-SNR images compared to conventional SIM algorithms. We have also analyzed the adaptability of this network to other biological samples and successfully applied the pretrained model to other SIM systems.
Keywords: super-resolution reconstruction; multi-level wavelet packet transform; residual channel attention; selective kernel attention
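The multi-level wavelet component rests on standard wavelet decomposition, which separates a signal into low-frequency (approximation) and high-frequency (detail) bands. A one-level 1D Haar sketch with perfect reconstruction (illustrative only; the MWAM network applies multi-level wavelet packet transforms to 2D images):

```python
import numpy as np

def haar_level(x):
    """One level of the 1D Haar wavelet transform: an approximation
    (low-frequency) band and a detail (high-frequency) band."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_level_inv(approx, detail):
    """Invert one Haar level (perfect reconstruction)."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx, detail = haar_level(signal)
restored = haar_level_inv(approx, detail)
```

Because the transform is invertible, a network can enhance the detail bands (where the super-resolution information lives) and still reconstruct a full-resolution image.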
Multi-Level Parallel Network for Brain Tumor Segmentation
8
Authors: Juhong Tie, Hui Peng. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, Issue 4, pp. 741-757 (17 pages)
Accurate automatic segmentation of gliomas in their various sub-regions, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from 3D multimodal MRI images is challenging because of their highly heterogeneous appearance and shape. Deep convolutional neural networks (CNNs) have recently improved glioma segmentation performance. However, extensive down-sampling such as pooling or strided convolution in CNNs significantly decreases the initial image resolution, resulting in the loss of accurate spatial and object-part information, especially for the small sub-region tumors, affecting segmentation performance. Hence, this paper proposes a novel multi-level parallel network comprising three different-level parallel subnetworks to fully use low-level, mid-level, and high-level information and improve the performance of brain tumor segmentation. We also introduce the Combo loss function to address input class imbalance and the imbalance between false positives and false negatives in deep learning. The proposed method is trained and validated on the BraTS 2020 training and validation dataset. On the validation dataset, our method achieved mean Dice scores of 0.907, 0.830, and 0.787 for the whole tumor, tumor core, and enhancing tumor core, respectively. Compared with state-of-the-art methods, the multi-level parallel network has achieved competitive results on the validation dataset.
Keywords: convolutional neural network; brain tumor segmentation; parallel network
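The Dice score used for evaluation (and, inside the Combo loss, for training) measures mask overlap as 2|A∩B| / (|A| + |B|). A sketch on hypothetical 2D masks (the paper's implementation is 3D and multi-class):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks;
    eps avoids division by zero on empty masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1                  # 16-pixel "tumor core"
pred = np.zeros((8, 8), dtype=int)
pred[3:6, 2:6] = 1                # 12 predicted pixels, all inside gt
score = dice_score(pred, gt)      # 2*12 / (12 + 16)
```

Unlike plain pixel accuracy, Dice stays informative when the tumor occupies a tiny fraction of the volume, which is why it dominates BraTS-style evaluation.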
Robustness Optimization Algorithm with Multi-Granularity Integration for Scale-Free Networks Against Malicious Attacks (cited 1 time)
9
Authors: ZHANG Yiheng, LI Jinhai. 《昆明理工大学学报(自然科学版)》 (Journal of Kunming University of Science and Technology, Natural Science Edition), Peking University Core, 2025, Issue 1, pp. 54-71 (18 pages)
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at the various granular layers of this structure, and finally realizes information exchange between the different granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%. This improvement is approximately 17.2% higher than the optimization effects achieved by eight existing complex network robustness optimization algorithms.
Keywords: complex network model; multi-granularity; scale-free networks; robustness; algorithm integration
Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (cited 1 time)
10
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. 《Computers, Materials & Continua》 SCIE EI, 2025, Issue 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computing offloading
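The priority-based execution ordering described above can be sketched with a standard binary heap; task names and priority levels here are hypothetical, and the paper's scheduler additionally couples this ordering with Dueling-DDQN offloading and DDPG resource allocation:

```python
import heapq

def schedule(tasks):
    """Pop tasks in priority order (smaller number = more urgent),
    so latency-sensitive tasks are dispatched first."""
    heap = [(priority, name) for name, priority in tasks.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

# hypothetical sub-tasks arriving at a satellite edge server:
# latency-sensitive tasks get priority 0, bulk tasks priority 2
tasks = {"telemetry": 0, "image_batch": 2, "voice": 0, "log_sync": 1}
order = schedule(tasks)
```

Ties at the same priority level fall back to lexicographic order of the tuple; a production scheduler would break ties by arrival time or deadline instead.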
A Multi-Level Semantic Constraint Approach for Highway Tunnel Scene Twin Modeling (cited 1 time)
11
Authors: LI Yufei, XIE Yakun, CHEN Mingzhen, ZHAO Yaoji, TU Jiaxing, HU Ya. 《Journal of Geodesy and Geoinformation Science》 2025, Issue 2, pp. 37-56 (20 pages)
As a key node of modern transportation networks, the informatized management of road tunnels is crucial to ensuring operational safety and traffic efficiency. However, existing tunnel vehicle modeling methods generally suffer from problems such as insufficient 3D scene description capability and low dynamic update efficiency, making it difficult to meet the demand for real-time, accurate management. For this reason, this paper proposes a vehicle twin modeling method for road tunnels. The approach starts from actual management needs and supports multi-level dynamic modeling, from vehicle type and size to color, by constructing a vehicle model library that can be flexibly invoked; at the same time, semantic constraint rules covering geometric layout, behavioral attributes, and spatial relationships are designed to ensure that the virtual model matches the real one with a high degree of similarity. Finally, a prototype system is constructed and case experiments are conducted in selected case areas, integrating real-time monitoring data with the semantic constraints to achieve precise virtual-real mapping, dynamic updating, and three-dimensional visualization of vehicle states in the tunnel. The experiments show that the proposed method runs smoothly with an average rendering efficiency of 17.70 ms while guaranteeing modeling accuracy (composite similarity of 0.867), significantly improving the real-time performance and intuitiveness of tunnel management. The research results provide reliable technical support for intelligent operation and emergency response of road tunnels, and offer new ideas for digital twin modeling of complex scenes.
Keywords: highway tunnel; twin modeling; multi-level semantic constraints; tunnel vehicles; multidimensional modeling
A Novel Self-Supervised Learning Network for Binocular Disparity Estimation (cited 1 time)
12
Authors: Jiawei Tian, Yu Zhou, Xiaobing Chen, Salman A. AlQahtani, Hongrong Chen, Bo Yang, Siyu Lu, Wenfeng Zheng. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2025, Issue 1, pp. 209-229 (21 pages)
Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interferences. The study introduces a Pseudo-Siamese structure-based disparity regression model that simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate significant improvement in the network's resistance to lighting interference without a substantial increase in parameters. Moreover, the model exhibited faster convergence during training, contributing to overall performance enhancement. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
Keywords: parallax estimation; parallax regression model; self-supervised learning; Pseudo-Siamese neural network; pyramid dilated convolution; binocular disparity estimation
DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS (cited 1 time)
13
Authors: WANG Chen-yao, SHI Feng. 《数学杂志》 (Journal of Mathematics), 2025, Issue 1, pp. 13-38 (26 pages)
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to the low regularity of solutions at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, the method is applied to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: delay integro-differential equation; multi-task learning; parameter sharing structure; deep neural network; sequential training scheme
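The delay-induced breaking points the abstract refers to can be made concrete. For a DIDE with constant delay τ, a plausible general form (an illustrative sketch consistent with the abstract, not the authors' exact formulation) is:

```latex
u'(t) = f\!\left(t,\; u(t),\; u(t-\tau),\; \int_{t-\tau}^{t} K(t,s)\,u(s)\,\mathrm{d}s\right),
\qquad t \in [0,T], \qquad u(t) = \phi(t) \ \ \text{for } t \in [-\tau, 0].
```

Because the history function φ joins the solution with limited smoothness at t = 0, the solution u has low regularity at the breaking points t_k = kτ. A task split "based on the delay" then assigns one subnetwork to each subinterval [t_{k-1}, t_k], represents the integral term by an auxiliary network output, and adds continuity conditions at each t_k to the multi-task loss.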
Multi-Stage-Based Siamese Neural Network for Seal Image Recognition
14
Authors: Jianfeng Lu, Xiangye Huang, Caijin Li, Renlin Xin, Shanqing Zhang, Mahmoud Emam. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2025, Issue 1, pp. 405-423 (19 pages)
Seal authentication is an important task for verifying the authenticity of stamped seals used in various domains to protect legal documents from tampering and counterfeiting. Stamped seal inspection is commonly audited manually to ensure document authenticity. However, manual assessment of seal images is tedious and labor-intensive due to human errors, inconsistent placement, and incompleteness of the seal. Traditional image recognition systems are inadequate for identifying seal types accurately, necessitating a neural network-based method for seal image recognition. However, neural network-based classification algorithms, such as Residual Networks (ResNet) and Visual Geometry Group with 16 layers (VGG16), yield suboptimal recognition rates on stamp datasets. Additionally, fixed training data categories make handling new categories a challenging task. This paper proposes a multi-stage seal recognition algorithm based on a Siamese network to overcome these limitations. Firstly, the seal image is pre-processed by applying an image rotation correction module based on the Histogram of Oriented Gradients (HOG). Secondly, the similarity between input seal image pairs is measured by utilizing a similarity comparison module based on the Siamese network. Finally, we compare the results with the pre-stored standard seal template images in the database to obtain the seal type. To evaluate the performance of the proposed method, we further create a new seal image dataset that contains two subsets with 210,000 valid labeled pairs in total. The proposed work has practical significance in industries where automatic seal authentication is essential, such as the legal, financial, and governmental sectors, where automatic seal recognition can enhance document security and streamline validation processes. Furthermore, the experimental results show that the proposed multi-stage method for seal image recognition outperforms state-of-the-art methods on the two established datasets.
Keywords: seal recognition; seal authentication; document tampering; Siamese network; spatial transformer network; similarity comparison network
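The similarity comparison module can be illustrated with a toy shared-weight embedding: both images of a pair pass through the same mapping, and the query is assigned the seal type of the nearest stored template. A fixed linear map stands in for the trained CNN, and all names are hypothetical:

```python
import numpy as np

def embed(image, projection):
    """Shared embedding branch: both images of a pair pass through
    the SAME weights (a fixed linear map standing in for a CNN)."""
    return projection @ image.ravel()

def nearest_template(query, templates, projection):
    """Compare the query embedding against each stored template
    embedding and return the closest seal type."""
    q = embed(query, projection)
    distances = {name: np.linalg.norm(q - embed(t, projection))
                 for name, t in templates.items()}
    return min(distances, key=distances.get)

rng = np.random.default_rng(1)
projection = rng.normal(size=(8, 16))       # shared "network" weights
templates = {"company_seal": rng.normal(size=(4, 4)),
             "personal_seal": rng.normal(size=(4, 4))}
# a slightly noisy scan of the company seal should still match
query = templates["company_seal"] + 0.01 * rng.normal(size=(4, 4))
match = nearest_template(query, templates, projection)
```

Because matching is against a template database rather than a fixed softmax head, a new seal type is added by storing one template image, which is the open-set advantage the abstract claims over ResNet/VGG16 classifiers.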
Enhanced electrode-level diagnostics for lithium-ion battery degradation using physics-informed neural networks (cited 1 time)
15
Authors: Rui Xiong, Yinghao He, Yue Sun, Yanbo Jia, Weixiang Shen. 《Journal of Energy Chemistry》 2025, Issue 5, pp. 618-627 (10 pages)
For the diagnostics and health management of lithium-ion batteries, numerous models have been developed to understand their degradation characteristics. These models typically fall into two categories, data-driven models and physical models, each offering unique advantages but also facing limitations. Physics-informed neural networks (PINNs) provide a robust framework to integrate data-driven models with physical principles, ensuring consistency with the underlying physics while enabling generalization across diverse operational conditions. This study introduces a PINN-based approach to reconstruct open circuit voltage (OCV) curves and estimate key ageing parameters at both the cell and electrode levels. These parameters include available capacity, electrode capacities, and lithium inventory capacity. The proposed method integrates OCV reconstruction models as functional components into convolutional neural networks (CNNs) and is validated using a public dataset. The results reveal that the estimated ageing parameters closely align with those obtained through offline OCV tests, with errors in the reconstructed OCV curves remaining within 15 mV. This demonstrates the ability of the method to deliver fast and accurate degradation diagnostics at the electrode level, advancing the potential for precise and efficient battery health management.
Keywords: lithium-ion batteries; electrode level; ageing diagnosis; physics-informed neural network; convolutional neural networks
TMC-GCN: Encrypted Traffic Mapping Classification Method Based on Graph Convolutional Networks (cited 1 time)
16
Authors: Baoquan Liu, Xi Chen, Qingjun Yuan, Degang Li, Chunxiang Gu. 《Computers, Materials & Continua》 2025, Issue 2, pp. 3179-3201 (23 pages)
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. The classification method of encrypted traffic based on GNN can deal with encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called Flow Mapping Graph (FMG). FMG establishes sequential edges between vertexes by the arrival order of packets and establishes jump-order edges between vertexes by connecting packets in different bursts with the same direction. It not only reflects the time characteristics of the packet but also strengthens the relationship between the client or server packets. According to FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic. The encryption stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models in four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the accuracy rate of the TMC-GCN model is 96.13%, the recall rate is 95.04%, and the F1 rate is 94.54%.
Keywords: encrypted traffic classification; deep learning; graph neural networks; multi-layer perceptron; graph convolutional networks
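A simplified reading of the FMG construction (sequential edges in arrival order; jump-order edges reconnecting same-direction packets across burst boundaries) can be sketched as follows. This is an illustrative approximation, not the authors' exact algorithm:

```python
def build_fmg(directions):
    """Build Flow Mapping Graph edges from a packet direction list
    ('c' = client->server, 's' = server->client).
    Sequential edges follow arrival order; jump-order edges link a
    packet at a burst boundary to the next packet with the same
    direction, preserving the client-side (or server-side) flow."""
    n = len(directions)
    sequential = [(i, i + 1) for i in range(n - 1)]
    jump = []
    for i in range(n - 1):
        # burst boundary: direction changes between packet i and i+1
        if directions[i] != directions[i + 1]:
            for j in range(i + 1, n):
                if directions[j] == directions[i]:
                    jump.append((i, j))   # reconnect same-direction flow
                    break
    return sequential, jump

# packets: two client packets, two server packets, one client packet
sequential, jump = build_fmg(["c", "c", "s", "s", "c"])
```

The sequential edges alone would lose the fact that packets 1 and 4 belong to the same client-side conversation; the jump edge (1, 4) restores that relationship, which is the gap the abstract says earlier GNN approaches left open.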
Traffic safety helmet wear detection based on improved YOLOv5 network (cited 1 time)
17
Authors: GUI Dongdong, SUN Bo. 《Optoelectronics Letters》 2025, Issue 1, pp. 35-42 (8 pages)
Aiming at the problem that current traffic safety helmet detection models cannot balance detection accuracy against model size and generalize poorly, a method based on an improved You Only Look Once version 5 (YOLOv5) is proposed. By incorporating the lightweight GhostNet module into the YOLOv5 backbone network, we effectively reduce the model size. The addition of the receptive fields block (RFB) module enhances feature extraction and improves the feature acquisition capability of the lightweight model. Subsequently, the high-performance lightweight convolution, GSConv, is integrated into the neck structure for further model size compression. Moreover, the baseline model's loss function is replaced with efficient intersection over union (EIoU), accelerating network convergence and enhancing detection precision. Experimental results corroborate the effectiveness of this improved algorithm in real-world traffic scenarios.
Keywords: network; union; backbone
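EIoU, used above to replace the baseline loss, augments the IoU term with penalties for center distance and for width/height mismatch, each normalised by the smallest enclosing box. Below is a minimal sketch of the published EIoU formula; the `(x1, y1, x2, y2)` box convention is an assumption, and this is independent of the paper's actual implementation.

```python
def eiou_loss(pred, gt):
    """EIoU loss for two axis-aligned boxes in (x1, y1, x2, y2) form.

    EIoU = 1 - IoU + center-distance penalty + width penalty + height
    penalty, the latter three normalised by the enclosing box.
    """
    # Intersection area and IoU.
    ix = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    iy = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = ix * iy
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)

    # Smallest enclosing box.
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])

    # Normalised squared distance between box centers.
    dx = (pred[0] + pred[2] - gt[0] - gt[2]) / 2.0
    dy = (pred[1] + pred[3] - gt[1] - gt[3]) / 2.0
    center_pen = (dx * dx + dy * dy) / (cw * cw + ch * ch)

    # Separate width and height penalties (the EIoU refinement over CIoU).
    w_pen = ((pred[2] - pred[0]) - (gt[2] - gt[0])) ** 2 / (cw * cw)
    h_pen = ((pred[3] - pred[1]) - (gt[3] - gt[1])) ** 2 / (ch * ch)
    return 1.0 - iou + center_pen + w_pen + h_pen
```

Identical boxes give a loss of 0; fully disjoint diagonal unit boxes give 1 (no overlap) plus the center-distance term.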
Atmospheric scattering model and dark channel prior constraint network for environmental monitoring under hazy conditions (Cited: 2)
18
Authors: Lintao Han, Hengyi Lv, Chengshan Han, Yuchen Zhao, Qing Han, Hailong Liu. Journal of Environmental Sciences, 2025, Issue 6, pp. 203-218
Environmental monitoring systems based on remote sensing technology have a wider monitoring range and longer timeliness, which makes them widely used in the detection and management of pollution sources. However, haze weather conditions degrade image quality and reduce the precision of environmental monitoring systems. To address this problem, this research proposes a remote sensing image dehazing method based on the atmospheric scattering model and a dark channel prior constrained network. The method consists of a dehazing network, a dark channel information injection network (DCIIN), and a transmission map network. Within the dehazing network, the branch fusion module optimizes feature weights to enhance the dehazing effect. By leveraging dark channel information, the DCIIN enables high-quality estimation of the atmospheric veil. To ensure the output of the deep learning model aligns with physical laws, we reconstruct the haze image using the prediction results from the three networks. Subsequently, we apply the traditional loss function and dark channel loss function between the reconstructed haze image and the original haze image. This approach enhances interpretability and reliability while maintaining adherence to physical principles. Furthermore, the network is trained on a synthesized non-homogeneous haze remote sensing dataset using dark channel information from cloud maps. The experimental results show that the proposed network can achieve better image dehazing on both synthetic and real remote sensing images with non-homogeneous haze distribution. This research provides a new idea for solving the problem of decreased accuracy of environmental monitoring systems under haze weather conditions and has strong practicability.
Keywords: remote sensing; image dehazing; environmental monitoring; neural network; interpretability
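The physical constraint referred to above is the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where J is the clear scene, t the transmission map, and A the atmospheric light; the dark channel prior takes a per-pixel minimum over colour channels and then over a local patch. A toy pure-Python sketch of both operations follows, using nested-list images — an illustrative layout only, not the paper's code.

```python
def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over RGB channels, then a
    minimum filter over a local patch.  `img` is a nested list with
    img[y][x] = [r, g, b]."""
    h, w, r = len(img), len(img[0]), patch // 2
    chan_min = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    return [[min(chan_min[yy][xx]
                 for yy in range(max(0, y - r), min(h, y + r + 1))
                 for xx in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]


def reconstruct_haze(J, t, A):
    """Atmospheric scattering model: I = J*t + A*(1 - t), applied per
    pixel and channel; t is the transmission map, A the atmospheric light."""
    return [[[c * t[y][x] + A * (1.0 - t[y][x]) for c in J[y][x]]
             for x in range(len(J[0]))] for y in range(len(J))]
```

With t ≡ 1 (no haze) the reconstruction returns the clear scene unchanged, which is the consistency the paper's reconstruction loss enforces.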
Dynamic Multi-Graph Spatio-Temporal Graph Traffic Flow Prediction in Bangkok:An Application of a Continuous Convolutional Neural Network
19
Authors: Pongsakon Promsawat, Weerapan Sae-dan, Marisa Kaewsuwan, Weerawat Sudsutad, Aphirak Aphithana. Computer Modeling in Engineering & Sciences, 2025, Issue 1, pp. 579-607
The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. The comparative analysis of DMST-GNODE and baseline models indicates that the DMST-GNODE model demonstrates superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values, alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend across 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, consistently maintaining superiority across all time intervals. These numerical highlights indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
Keywords: graph neural networks; convolutional neural network; deep learning; dynamic multi-graph; spatio-temporal
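The ODE formulation behind DMST-GNODE replaces a fixed stack of graph-convolution layers with continuous dynamics over the graph. A minimal illustration, not the paper's architecture: integrate dh/dt = Âh − h (node states drifting toward their neighbourhood average) with explicit Euler steps. The scalar node features and this particular right-hand side are assumptions for illustration.

```python
def euler_graph_ode(h, adj_norm, steps=4, dt=0.25):
    """Explicit-Euler integration of dh/dt = A_norm @ h - h, a toy
    continuous analogue of stacked graph convolutions: each step is one
    message-passing evaluation, and node states drift toward the
    weighted average of their neighbours."""
    n = len(h)
    for _ in range(steps):
        # Evaluate the ODE right-hand side via one round of propagation.
        prop = [sum(adj_norm[i][j] * h[j] for j in range(n)) for i in range(n)]
        h = [h[i] + dt * (prop[i] - h[i]) for i in range(n)]
    return h
```

On a two-node graph with a symmetric averaging adjacency, the states converge toward their common mean while the total is conserved, mirroring the smoothing behaviour of repeated graph convolutions.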
DIGNN-A:Real-Time Network Intrusion Detection with Integrated Neural Networks Based on Dynamic Graph
20
Authors: Jizhao Liu, Minghao Guo. Computers, Materials & Continua, 2025, Issue 1, pp. 817-842
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address the structural and topological changes of dynamic graphs at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line-graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and results are compared with previous studies in this field. The experimental results demonstrate that the proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
Keywords: intrusion detection; graph neural networks; attention mechanisms; line graphs; dynamic graph neural networks
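The edge-to-node transformation mentioned above is the classical line-graph construction: each original edge (e.g. a network flow) becomes a node, and two such nodes are adjacent whenever their underlying edges share an endpoint, so edge classification becomes node classification. A minimal sketch (undirected and unweighted here, which is an assumption; the paper's construction may differ):

```python
def line_graph(edges):
    """Line-graph transform: every original edge becomes a node, and two
    such nodes are adjacent when their edges share an endpoint."""
    nodes = list(range(len(edges)))          # one node per original edge
    lg_edges = [(i, j)
                for i in range(len(edges))
                for j in range(i + 1, len(edges))
                if set(edges[i]) & set(edges[j])]  # shared endpoint
    return nodes, lg_edges
```

For a path 0-1-2-3, the three edges become three nodes, and consecutive edges (which share a vertex) become adjacent line-graph nodes.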