Journal articles: 542,033 results found
1. Impacts of lateral boundary conditions from numerical models and data-driven networks on convective-scale ensemble forecasts
Authors: Junjie Deng, Jin Zhang, Haoyan Liu, Hongqi Li, Feng Chen, Jing Chen. Atmospheric and Oceanic Science Letters, 2025, No. 2, pp. 78-85 (8 pages)
The impacts of lateral boundary conditions (LBCs) provided by numerical models and data-driven networks on convective-scale ensemble forecasts are investigated in this study. Four experiments are conducted on the Hangzhou RDP (19th Hangzhou Asian Games Research Development Project on Convective-scale Ensemble Prediction and Application) testbed, with the LBCs respectively sourced from National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) forecasts with 33 vertical levels (Exp_GFS), Pangu forecasts with 13 vertical levels (Exp_Pangu), Fuxi forecasts with 13 vertical levels (Exp_Fuxi), and NCEP GFS forecasts with the vertical levels reduced to 13, the same as those of Exp_Pangu and Exp_Fuxi (Exp_GFSRDV). In general, Exp_Pangu performs comparably to Exp_GFS, while Exp_Fuxi shows slightly inferior performance compared to Exp_Pangu, possibly due to its less accurate large-scale predictions. The ability of data-driven networks to efficiently provide LBCs for convective-scale ensemble forecasts is therefore demonstrated. Moreover, Exp_GFSRDV produces the worst convective-scale forecasts among the four experiments, which indicates that data-driven LBCs could be further improved by increasing the number of vertical levels in the networks. However, the ensemble spread of the four experiments barely increases with lead time, so each experiment has insufficient ensemble spread to represent realistic forecast uncertainties; this will be investigated in a future study.
Keywords: Ensemble forecast; Convective scale; Lateral boundary conditions; Data-driven network
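A note on the spread diagnostic mentioned in entry 1: ensemble spread is conventionally the standard deviation across ensemble members at each lead time, and a curve that stays flat with lead time indicates under-dispersion. A minimal numpy sketch (array shapes and data are placeholders, not the Hangzhou RDP output):

import numpy as np

# forecasts: hypothetical array of shape (n_members, n_lead_times, n_grid_points)
rng = np.random.default_rng(0)
forecasts = rng.normal(size=(10, 24, 500))

# Ensemble spread per lead time: standard deviation across members, averaged over grid points.
spread = forecasts.std(axis=0, ddof=1).mean(axis=1)   # shape (n_lead_times,)

# A nearly flat spread curve with lead time signals an under-dispersive ensemble,
# i.e., the ensemble underestimates the true forecast uncertainty.
print(spread)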
2. Data-Driven Method for Predicting Remaining Useful Life of Bearings Based on Multi-Layer Perception Neural Network and Bidirectional Long Short-Term Memory Network
Authors: Yongfeng Tai, Xingyu Yan, Xiangyi Geng, Lin Mu, Mingshun Jiang, Faye Zhang. Structural Durability & Health Monitoring, 2025, No. 2, pp. 365-383 (19 pages)
The remaining useful life prediction of rolling bearings is vital for safety and reliability assurance. In engineering scenarios, only a small amount of bearing performance degradation data can be obtained through accelerated life testing. In the absence of lifetime data, the hidden long-term correlation within performance degradation data is challenging to mine effectively, which is the main factor restricting the prediction precision and engineering application of residual life prediction methods. To address this problem, a novel method based on a multi-layer perceptron neural network and a bidirectional long short-term memory network is proposed. Firstly, a nonlinear health indicator (HI) calculation method based on kernel principal component analysis (KPCA) and the exponentially weighted moving average (EWMA) is designed. Then, using the raw vibration data and the HI, a multi-layer perceptron (MLP) neural network is trained to further calculate the HI of the online bearing in real time. Furthermore, a bidirectional long short-term memory model (BiLSTM) optimized by particle swarm optimization (PSO) is used to mine the time-series features of the HI and predict the remaining service life. Performance verification experiments and comparative experiments are carried out on the XJTU-SY open bearing dataset. The results indicate that this method has an excellent ability to predict the future HI and remaining life.
Keywords: Remaining useful life prediction; rolling bearing; health indicator construction; multilayer perceptron; bidirectional long short-term memory network
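A minimal sketch of two building blocks described in entry 2: EWMA smoothing of a health indicator and a bidirectional LSTM regressor in PyTorch. Layer sizes, the window length, and the toy HI series are assumptions for illustration; the paper's KPCA step and PSO tuning are omitted:

import numpy as np
import torch
import torch.nn as nn

def ewma(x, alpha=0.2):
    # Exponentially weighted moving average used to smooth a raw health-indicator series.
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
    return out

class BiLSTMRegressor(nn.Module):
    # Bidirectional LSTM mapping a window of HI values to a remaining-life estimate.
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)
    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # last time step of both directions

# Hypothetical usage: smooth a synthetic HI, then feed a sliding window to the regressor.
hi = ewma(np.abs(np.random.randn(200)).cumsum())
model = BiLSTMRegressor()
window = torch.tensor(hi[-30:], dtype=torch.float32).view(1, 30, 1)
rul_estimate = model(window)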
3. A hybrid data-driven approach for rainfall-induced landslide susceptibility mapping: Physically-based probabilistic model with convolutional neural network
Authors: Hong-Zhi Cui, Bin Tong, Tao Wang, Jie Dou, Jian Ji. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 8, pp. 4933-4951 (19 pages)
Landslide susceptibility mapping (LSM) plays a crucial role in assessing geological risks. Current LSM techniques face a significant challenge in achieving accurate results due to uncertainties associated with regional-scale geotechnical parameters. To explore rainfall-induced LSM, this study proposes a hybrid model that combines a physically-based probabilistic model (PPM) with a convolutional neural network (CNN). The PPM effectively captures the spatial distribution of landslides by incorporating the probability of failure (POF), which accounts for the slope stability mechanism under rainfall conditions and characterizes the variation of the POF caused by parameter uncertainties. The CNN is used as a binary classifier to capture the spatial and channel correlation between landslide conditioning factors and the probability of landslide occurrence. An OpenCV image enhancement technique is utilized to extract non-landslide points based on the POF of landslides. The proposed model comprehensively considers physical mechanics when selecting non-landslide samples, effectively filtering out samples that do not adhere to physical principles and reducing the risk of overfitting. The results indicate that the proposed PPM-CNN hybrid model achieves higher prediction accuracy, with an area under the curve (AUC) value of 0.85 for the landslide case of the Niangniangba area of Gansu Province, China, compared with the individual CNN model (AUC = 0.61) and the PPM (AUC = 0.74). The model can also account for the statistical correlation and non-normal probability distributions of model parameters. These results offer practical guidance for future research on rainfall-induced LSM at the regional scale.
Keywords: Rainfall landslides; Landslide susceptibility mapping; Hybrid model; Physically-based model; Convolutional neural network (CNN); Probability of failure (POF)
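Entry 3 selects non-landslide training samples from cells whose physically-based probability of failure (POF) is low. A numpy sketch of that sampling idea (thresholds, raster, and inventory are hypothetical; the OpenCV enhancement step is not reproduced):

import numpy as np

rng = np.random.default_rng(42)
pof = rng.random((200, 200))          # hypothetical probability-of-failure raster from the PPM
landslide_mask = pof > 0.95           # hypothetical inventory of observed landslide cells

# Candidate non-landslide cells: low POF and not in the inventory.
candidates = np.argwhere((pof < 0.2) & (~landslide_mask))

# Sample as many negatives as positives, weighting toward the most stable (lowest-POF) cells.
n_pos = int(landslide_mask.sum())
weights = 1.0 - pof[candidates[:, 0], candidates[:, 1]]
weights /= weights.sum()
idx = rng.choice(len(candidates), size=min(n_pos, len(candidates)), replace=False, p=weights)
non_landslide_points = candidates[idx]   # (row, col) indices used as negative samples for the CNN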
4. A balanced traffic signal scheduling algorithm for intersections based on improved Deep Q Networks
Author: He Daokun. 机械设计与制造 (Machinery Design & Manufacture), PKU core journal, 2025, No. 4, pp. 135-140 (6 pages)
To further relieve traffic congestion at urban intersections during peak hours and achieve balanced traffic flow across the roads at an intersection, a balanced traffic signal scheduling algorithm based on improved Deep Q Networks is proposed. The features most relevant to traffic signal scheduling at intersections are extracted, traffic signal models are built for a one-way intersection and a linear two-way intersection, and a traffic signal scheduling optimization model is constructed on this basis. To address the shortcomings of the Deep Q Networks algorithm in traffic signal scheduling, such as poor convergence and over-estimation, the algorithm is improved with a dueling network, a double network, and an improved gradient update strategy, and a corresponding balanced scheduling algorithm is proposed. Simulation comparisons with classical Deep Q Networks verify the applicability and superiority of the proposed algorithm for traffic signal scheduling. Based on urban road data, simulations are carried out for two scenarios. The results show that the algorithm can effectively shorten vehicle queue lengths at intersections, balance traffic throughput across approaches, relieve congestion in peak travel directions, and improve the efficiency of traffic signal scheduling at intersections.
Keywords: traffic signal scheduling; intersection; Deep Q Networks; deep reinforcement learning; intelligent transportation
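The dueling-network and double-network modifications named in entry 4 follow the standard Dueling DQN and Double DQN constructions. A PyTorch sketch of those two pieces (state and action dimensions are placeholders, not the paper's traffic-signal model):

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    # Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
    def __init__(self, state_dim=12, n_actions=4, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.adv = nn.Linear(hidden, n_actions)
    def forward(self, s):
        h = self.trunk(s)
        a = self.adv(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, r, s_next, done, gamma=0.99):
    # Double-DQN target: the online network selects the next action, the target network
    # evaluates it, which counteracts the over-estimation the paper mentions.
    with torch.no_grad():
        best_a = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, best_a).squeeze(1)
        return r + gamma * (1.0 - done) * q_next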
5. A hybrid physics-informed data-driven neural network for CO2 storage in depleted shale reservoirs (Citations: 1)
Authors: Yan-Wei Wang, Zhen-Xue Dai, Gui-Sheng Wang, Li Chen, Yu-Zhou Xia, Yu-Hao Zhou. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 286-301 (16 pages)
To reduce CO2 emissions in response to global climate change, shale reservoirs could be ideal candidates for long-term carbon geo-sequestration involving multi-scale transport processes. However, most current CO2 sequestration models do not adequately consider multiple transport mechanisms. Moreover, the evaluation of CO2 storage processes usually involves laborious and time-consuming numerical simulations unsuitable for practical prediction and decision-making. In this paper, an integrated model involving gas diffusion, adsorption, dissolution, slip flow, and Darcy flow is proposed to accurately characterize CO2 storage in depleted shale reservoirs, supporting the establishment of a training database. On this basis, a hybrid physics-informed data-driven neural network (HPDNN) is developed as a deep learning surrogate for prediction and inversion. By incorporating multiple sources of scientific knowledge, the HPDNN can be configured with limited simulation resources, significantly accelerating the forward and inversion processes. Furthermore, the HPDNN can more intelligently predict injection performance, precisely perform reservoir parameter inversion, and reasonably evaluate the CO2 storage capacity under complicated scenarios. The validation and test results demonstrate that the HPDNN can ensure high accuracy and strong robustness across an extensive applicability range when dealing with field data with multiple noise sources. This study has tremendous potential to replace traditional modeling tools for predicting and making decisions about CO2 storage projects in depleted shale reservoirs.
Keywords: Deep learning; Physics-informed data-driven neural network; Depleted shale reservoirs; CO2 storage; Transport mechanisms
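Entry 5's HPDNN combines data-fit and physics terms; the generic pattern is a composite loss in which a physics residual, obtained by automatic differentiation, penalizes outputs that violate the governing equations. A PyTorch sketch with a toy decay law standing in for the multi-mechanism transport physics (weights, network size, and the toy equation are assumptions, not the authors' model):

import torch
import torch.nn as nn

def hybrid_loss(pred, target, physics_residual, w_data=1.0, w_phys=0.1):
    # Data-fit term from labeled simulation samples plus a penalty that pushes the
    # network output toward satisfying the governing equations.
    return w_data * torch.mean((pred - target) ** 2) + w_phys * torch.mean(physics_residual ** 2)

t = torch.linspace(0.0, 1.0, 50).view(-1, 1).requires_grad_(True)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
y = net(t)

# Physics residual of a toy law dy/dt + 0.5*y = 0, computed by automatic differentiation.
dy_dt = torch.autograd.grad(y, t, grad_outputs=torch.ones_like(y), create_graph=True)[0]
residual = dy_dt + 0.5 * y

loss = hybrid_loss(y, torch.exp(-0.5 * t.detach()), residual)
loss.backward()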
6. An efficient data-driven global sensitivity analysis method of shale gas production through convolutional neural network
Authors: Liang Xue, Shuai Xu, Jie Nie, Ji Qin, Jiang-Xia Han, Yue-Tian Liu, Qin-Zhuo Liao. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 2475-2484 (10 pages)
The shale gas development process is complex in terms of its flow mechanisms, and the accuracy of production forecasting is influenced by geological and engineering parameters. Therefore, to quantitatively evaluate the relative importance of model parameters for production forecasting performance, sensitivity analysis of the parameters is required. The parameters are ranked according to their sensitivity coefficients for the subsequent optimization scheme design. A data-driven global sensitivity analysis (GSA) method using convolutional neural networks (CNN) is proposed to identify the influential parameters in shale gas production. The CNN is trained on a large dataset, validated against numerical simulations, and utilized as a surrogate model for efficient sensitivity analysis. Our approach integrates the CNN with the Sobol' global sensitivity analysis method, presenting three key scenarios for sensitivity analysis: analysis of the production stage as a whole, analysis by fixed time intervals, and analysis by decline rate. The findings underscore the predominant influence of reservoir thickness and well length on shale gas production. Furthermore, the temporal sensitivity analysis reveals dynamic shifts in parameter importance across the distinct production stages.
Keywords: Shale gas; Global sensitivity; Convolutional neural network; Data-driven
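Entry 6 couples a CNN surrogate with Sobol' global sensitivity analysis. A numpy sketch of first-order Sobol indices via the pick-freeze (Jansen) estimator, driven here by a stand-in surrogate function rather than the trained CNN:

import numpy as np

def sobol_first_order(surrogate, d, n=4096, rng=None):
    # Jansen's pick-freeze estimator for first-order Sobol indices, driven by a cheap
    # surrogate model instead of the full reservoir simulator.
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = surrogate(A), surrogate(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    s1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # replace the i-th column of A with that of B
        fABi = surrogate(ABi)
        s1[i] = (var - 0.5 * np.mean((fB - fABi) ** 2)) / var
    return s1

# Stand-in surrogate: "production" rises with x0 (thickness-like) and x1 (well-length-like).
toy = lambda X: 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]
print(sobol_first_order(toy, d=3))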
7. LATITUDES Network: a repository of validity (risk-of-bias) assessment tools for improving the robustness of evidence synthesis
Authors: Liao Mingyu, Xiong Yiquan, Zhao Peng, Guo Jin, Chen Jingwen, Liu Chunrong, Jia Yulong, Ren Yan, Sun Xin, Tan Jing. 中国循证医学杂志 (Chinese Journal of Evidence-Based Medicine), PKU core journal, 2025, No. 5, pp. 614-620 (7 pages)
Evidence synthesis is the process of systematically collecting, analyzing, and integrating existing research evidence; its results depend on the quality of the included primary studies, and validity assessment (also called risk-of-bias assessment) is the key means of evaluating that quality. Many validity assessment tools exist, but some lack a rigorous development process and evaluation, and applying an inappropriate validity assessment tool during evidence synthesis may compromise the accuracy of study conclusions and mislead clinical practice. To address this problem, in September 2023 researchers at the University of Bristol in the UK led the establishment of the LATITUDES Network, a one-stop resource site for validity assessment tools. The site collects, curates, and promotes research validity assessment tools in order to improve the accuracy of validity assessment of primary studies and enhance the robustness and reliability of evidence synthesis. This article introduces the background of the LATITUDES Network, the validity assessment tools it includes, and the training resources available for using these tools, so that domestic researchers can learn more about the LATITUDES Network, better apply appropriate validity assessment tools in literature quality appraisal, and draw on it when developing new validity assessment tools.
Keywords: validity assessment; risk of bias; evidence synthesis; LATITUDES Network
8. A data-driven model of drop size prediction based on artificial neural networks using small-scale data sets (Citations: 1)
Authors: Bo Wang, Han Zhou, Shan Jing, Qiang Zheng, Wenjie Lan, Shaowei Li. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 71-83 (13 pages)
An artificial neural network (ANN) method is introduced to predict drop size in two kinds of pulsed columns using small-scale data sets. After training, the deviations between calculated and experimental results are 3.8% and 9.3%, respectively. Through the ANN model, the influence of interfacial tension and pulsation intensity on droplet diameter is characterized: droplet size gradually increases with increasing interfacial tension and decreases with increasing pulsation intensity. The accuracy of the ANN model in predicting droplet size outside the training set range reaches the same level as that of correlations obtained from experiments within the range. For the two kinds of columns, the drop size prediction deviations of the ANN model are 9.6% and 18.5%, while the deviations of the correlations are 11% and 15%.
Keywords: Artificial neural network; Drop size; Solvent extraction; Pulsed column; Two-phase flow; Hydrodynamics
9. Application of virtual reality technology improves the functionality of brain networks in individuals experiencing pain (Citations: 3)
Author: Takahiko Nagamine. World Journal of Clinical Cases (SCIE), 2025, No. 3, pp. 66-68 (3 pages)
Medical procedures are inherently invasive and carry the risk of inducing pain to the mind and body. Recently, efforts have been made to alleviate the discomfort associated with invasive medical procedures through the use of virtual reality (VR) technology. VR has been demonstrated to be an effective treatment for pain associated with medical procedures, as well as for chronic pain conditions for which no effective treatment has been established. The precise mechanism by which the diversion from reality facilitated by VR contributes to the diminution of pain and anxiety has yet to be elucidated. However, the provision of positive images through VR-based visual stimulation may enhance the functionality of brain networks: the salience network is diminished, while the default mode network is enhanced. Additionally, the medial prefrontal cortex may establish a stronger connection with the default mode network, which could result in a reduction of pain and anxiety. Further research into the potential of VR technology to alleviate pain could lead to a reduction in the number of individuals who overdose on painkillers and contribute to positive change in the medical field.
Keywords: Virtual reality; Pain; Anxiety; Salience network; Default mode network
10. Big Model Strategy for Bridge Structural Health Monitoring Based on Data-Driven, Adaptive Method and Convolutional Neural Network (CNN) Group (Citations: 1)
Authors: Yadong Xu, Weixing Hong, Mohammad Noori, Wael A. Altabey, Ahmed Silik, Nabeel S. D. Farhan. Structural Durability & Health Monitoring (EI), 2024, No. 6, pp. 763-783 (21 pages)
This study introduces an innovative "Big Model" strategy to enhance bridge structural health monitoring (SHM) using a convolutional neural network (CNN), time-frequency analysis, and finite element analysis. Leveraging ensemble methods, collaborative learning, and distributed computing, the approach effectively manages the complexity and scale of large-scale bridge data. The CNN employs transfer learning, fine-tuning, and continuous monitoring to optimize models for adaptive and accurate structural health assessments, focusing on extracting meaningful features through time-frequency analysis. By integrating finite element analysis, time-frequency analysis, and CNNs, the strategy provides a comprehensive understanding of bridge health. Utilizing diverse sensor data, sophisticated feature extraction, and an advanced CNN architecture, the model is optimized through rigorous preprocessing and hyperparameter tuning. This approach significantly enhances the ability to make accurate predictions, monitor structural health, and support proactive maintenance practices, thereby ensuring the safety and longevity of critical infrastructure.
Keywords: Structural Health Monitoring (SHM); Bridges; Big model; Convolutional Neural Network (CNN); Finite Element Method (FEM)
11. Robustness Optimization Algorithm with Multi-Granularity Integration for Scale-Free Networks Against Malicious Attacks (Citations: 1)
Authors: ZHANG Yiheng, LI Jinhai. 昆明理工大学学报(自然科学版) (Journal of Kunming University of Science and Technology, Natural Science Edition), PKU core journal, 2025, No. 1, pp. 54-71 (18 pages)
Complex network models are frequently employed to simulate and study diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies for the networks at the various granular layers in this structure, and finally realizes information exchange between the granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%. This improvement is approximately 17.2% higher than the optimization effects achieved by eight existing complex network robustness optimization algorithms.
Keywords: Complex network model; Multi-granularity; Scale-free networks; Robustness; Algorithm integration
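Two ingredients behind entry 11 can be illustrated compactly: the degree-preserving rewiring constraint and a targeted-attack robustness measure (Schneider et al.'s R). The sketch below uses networkx and is not the MGIA itself; the MGIA additionally enforces connectivity and uses multi-granularity operators:

import networkx as nx

def robustness_R(G):
    # R: average fraction of nodes remaining in the largest connected component while
    # nodes are removed in order of highest current degree (a malicious, targeted attack).
    H = G.copy()
    n = G.number_of_nodes()
    total = 0.0
    for _ in range(n - 1):
        v = max(H.degree, key=lambda nd: nd[1])[0]   # current highest-degree node
        H.remove_node(v)
        total += len(max(nx.connected_components(H), key=len)) / n
    return total / n

G = nx.barabasi_albert_graph(200, 2, seed=1)         # scale-free test network
before = robustness_R(G)
# Degree-preserving double edge swaps: every node keeps its original degree, the same
# constraint the MGIA respects while it searches for more robust wirings.
G2 = G.copy()
nx.double_edge_swap(G2, nswap=400, max_tries=20000, seed=2)
print(round(before, 4), round(robustness_R(G2), 4))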
12. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (Citations: 1)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: Satellite network; Edge computing; Task scheduling; Computing offloading
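Entry 12 augments Dueling-DDQN with prioritized experience replay. A minimal sketch of a proportional prioritized replay buffer (capacity, alpha, and beta are placeholder hyperparameters, not the paper's settings):

import numpy as np

class PrioritizedReplay:
    # Proportional prioritized experience replay: transitions with larger TD error are
    # sampled more often, and importance-sampling weights correct the induced bias.
    def __init__(self, capacity=10000, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []
        self.rng = np.random.default_rng(seed)
    def add(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:          # drop the oldest transition
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)
    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.prios)
        p = p / p.sum()
        idx = self.rng.choice(len(self.data), size=batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights = weights / weights.max()            # normalized importance-sampling weights
        return [self.data[i] for i in idx], idx, weights
    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prios[i] = (abs(e) + 1e-6) ** self.alpha

buf = PrioritizedReplay()
for step in range(100):
    buf.add(("state", "action", 0.0, "next_state", False), td_error=step % 5)
batch, idx, w = buf.sample(16)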
13. Data-Driven Modeling for Wind Turbine Blade Loads Based on Deep Neural Network
Authors: Jianyong Ao, Yanping Li, Shengqing Hu, Songyu Gao, Qi Yao. Energy Engineering (EI), 2024, No. 12, pp. 3825-3841 (17 pages)
Blades are essential components of wind turbines. Reducing their fatigue loads during operation helps to extend their lifespan, but it is difficult to calculate blade fatigue loads quickly and accurately. To solve this problem, this paper designs a data-driven blade load modeling method based on a deep learning framework through mechanism analysis, feature selection, and model construction. In the mechanism analysis part, the generation mechanism of blade loads and the theoretical load calculation method based on material damage theory are analyzed, and four measurable operating-state parameters related to blade loads are screened. In the feature extraction part, 15 characteristic indicators of each screened parameter are extracted in the time and frequency domains, and feature selection is completed through correlation analysis with the blade loads to determine the input parameters for data-driven modeling. In the model construction part, a deep neural network based on feedforward and feedback propagation is designed to construct the nonlinear coupling relationship between the unit operating-parameter characteristics and the blade loads. The results show that the proposed method mines the wind turbine operating-state characteristics highly correlated with the blade load, such as the standard deviation of wind speed. The model built using these characteristics has reasonable calculation and fitting capabilities for the blade load and fits untrained out-of-sample data better than the traditional scheme. Based on the mean absolute percentage error, the modeling accuracy for the two blade loads reaches more than 90% and 80%, respectively, providing a good foundation for subsequent optimization control to suppress the blade load.
Keywords: Wind turbine; Blade; Fatigue load modeling; Deep neural network
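Entry 13 extracts time- and frequency-domain indicators from operating-state signals and keeps those most correlated with the blade load. A numpy sketch of that feature extraction and correlation screening (the paper's 15 specific indicators and threshold are not reproduced; the data here are synthetic):

import numpy as np

def features(x, fs=50.0):
    # A few representative time- and frequency-domain indicators of an operating-state signal.
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)
    return {
        "mean": x.mean(), "std": x.std(), "rms": np.sqrt(np.mean(x ** 2)),
        "peak2peak": x.max() - x.min(), "spectral_centroid": centroid,
    }

def select(feature_table, load, threshold=0.5):
    # Correlation-based screening: keep indicators whose |Pearson r| with the load is high.
    keep = {}
    for name, values in feature_table.items():
        r = np.corrcoef(values, load)[0, 1]
        if abs(r) >= threshold:
            keep[name] = r
    return keep

# Hypothetical usage: per-window wind-speed standard deviation vs. a measured blade load.
rng = np.random.default_rng(3)
wind_std = rng.random(100)
load = 5.0 * wind_std + 0.2 * rng.standard_normal(100)
print(select({"wind_speed_std": wind_std, "noise": rng.random(100)}, load))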
14. A Novel Self-Supervised Learning Network for Binocular Disparity Estimation (Citations: 1)
Authors: Jiawei Tian, Yu Zhou, Xiaobing Chen, Salman A. AlQahtani, Hongrong Chen, Bo Yang, Siyu Lu, Wenfeng Zheng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, No. 1, pp. 209-229 (21 pages)
Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interferences. This study introduces a Pseudo-Siamese-structure-based disparity regression model that simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate a significant improvement in the network's resistance to lighting interference without substantially increasing the number of parameters. Moreover, the model exhibited faster convergence during training, contributing to overall performance enhancement. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
Keywords: Parallax estimation; Parallax regression model; Self-supervised learning; Pseudo-Siamese neural network; Pyramid dilated convolution; Binocular disparity estimation
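A PyTorch sketch of the structural ideas in entry 14: a pyramid of dilated convolutions for multi-scale context and a pseudo-Siamese pair of towers that share architecture but not weights. Channel counts and dilation rates are assumptions, and the disparity regression head is omitted:

import torch
import torch.nn as nn

class PyramidDilatedBlock(nn.Module):
    # Parallel convolutions with increasing dilation rates gather context at several
    # scales, which helps in texture-poor or specular regions.
    def __init__(self, in_ch=32, out_ch=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)
    def forward(self, x):
        return self.fuse(torch.cat([torch.relu(b(x)) for b in self.branches], dim=1))

class PseudoSiamese(nn.Module):
    # "Pseudo"-Siamese: the left and right towers share the architecture but not the weights.
    def __init__(self):
        super().__init__()
        def tower():
            return nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), PyramidDilatedBlock())
        self.left, self.right = tower(), tower()
    def forward(self, img_l, img_r):
        return self.left(img_l), self.right(img_r)   # features later compared for disparity

f_l, f_r = PseudoSiamese()(torch.randn(1, 3, 64, 128), torch.randn(1, 3, 64, 128))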
15. Deep Neural Networks Combining Multi-Task Learning for Solving Delay Integro-Differential Equations (Citations: 1)
Authors: WANG Chen-yao, SHI Feng. 数学杂志 (Journal of Mathematics), 2025, No. 1, pp. 13-38 (26 pages)
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, and then use MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, the method is applied to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered from sparse or noisy data.
Keywords: Delay integro-differential equation; Multi-task learning; Parameter sharing structure; Deep neural network; Sequential training scheme
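Entry 15's multi-task treatment can be pictured as a shared trunk with one head per delay sub-interval plus an auxiliary output standing in for the integral term. A PyTorch sketch of that parameter-sharing pattern (the DIDE-specific residuals and breaking-point continuity terms are not reproduced; sizes are placeholders):

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    # Shared trunk with one output head per sub-interval (task) and an auxiliary head,
    # following the general parameter-sharing pattern described in the abstract.
    def __init__(self, n_tasks=3, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))
        self.aux = nn.Linear(hidden, 1)              # auxiliary output (e.g., the integral term)
    def forward(self, t):
        h = self.trunk(t)
        return [head(h) for head in self.heads], self.aux(h)

def total_loss(task_residuals, weights):
    # Weighted sum of per-task residual losses; breaking-point continuity penalties
    # would be added as extra terms in the same way.
    return sum(w * torch.mean(r ** 2) for w, r in zip(weights, task_residuals))

outs, aux = MultiTaskNet()(torch.rand(64, 1))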
16. Multi-Stage-Based Siamese Neural Network for Seal Image Recognition
Authors: Jianfeng Lu, Xiangye Huang, Caijin Li, Renlin Xin, Shanqing Zhang, Mahmoud Emam. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, No. 1, pp. 405-423 (19 pages)
Seal authentication is an important task for verifying the authenticity of stamped seals used in various domains to protect legal documents from tampering and counterfeiting. Stamped seal inspection is commonly audited manually to ensure document authenticity. However, manual assessment of seal images is tedious and labor-intensive due to human error, inconsistent placement, and incomplete seals. Traditional image recognition systems are inadequate for identifying seal types accurately, necessitating a neural-network-based method for seal image recognition. However, neural-network-based classification algorithms, such as Residual Networks (ResNet) and the Visual Geometry Group network with 16 layers (VGG16), yield suboptimal recognition rates on stamp datasets. Additionally, the fixed training data categories make handling new categories a challenging task. This paper proposes a multi-stage seal recognition algorithm based on a Siamese network to overcome these limitations. Firstly, the seal image is pre-processed by applying an image rotation correction module based on the Histogram of Oriented Gradients (HOG). Secondly, the similarity between input seal image pairs is measured using a similarity comparison module based on the Siamese network. Finally, we compare the results with the pre-stored standard seal template images in the database to obtain the seal type. To evaluate the performance of the proposed method, we further create a new seal image dataset that contains two subsets with 210,000 valid labeled pairs in total. The proposed work has practical significance in industries where automatic seal authentication is essential, such as the legal, financial, and governmental sectors, where automatic seal recognition can enhance document security and streamline validation processes. Furthermore, the experimental results show that the proposed multi-stage method for seal image recognition outperforms state-of-the-art methods on the two established datasets.
Keywords: Seal recognition; Seal authentication; Document tampering; Siamese network; Spatial transformer network; Similarity comparison network
17. Enhanced electrode-level diagnostics for lithium-ion battery degradation using physics-informed neural networks (Citations: 1)
Authors: Rui Xiong, Yinghao He, Yue Sun, Yanbo Jia, Weixiang Shen. Journal of Energy Chemistry, 2025, No. 5, pp. 618-627 (10 pages)
For the diagnostics and health management of lithium-ion batteries, numerous models have been developed to understand their degradation characteristics. These models typically fall into two categories, data-driven models and physical models, each offering unique advantages but also facing limitations. Physics-informed neural networks (PINNs) provide a robust framework to integrate data-driven models with physical principles, ensuring consistency with the underlying physics while enabling generalization across diverse operational conditions. This study introduces a PINN-based approach to reconstruct open-circuit voltage (OCV) curves and estimate key ageing parameters at both the cell and electrode levels. These parameters include available capacity, electrode capacities, and lithium inventory capacity. The proposed method integrates OCV reconstruction models as functional components into convolutional neural networks (CNNs) and is validated using a public dataset. The results reveal that the estimated ageing parameters closely align with those obtained through offline OCV tests, with errors in the reconstructed OCV curves remaining within 15 mV. This demonstrates the ability of the method to deliver fast and accurate degradation diagnostics at the electrode level, advancing the potential for precise and efficient battery health management.
Keywords: Lithium-ion batteries; Electrode level; Ageing diagnosis; Physics-informed neural network; Convolutional neural networks
18. An integrated method of data-driven and mechanism models for formation evaluation with logs (Citations: 1)
Authors: Meng-Lu Kang, Jun Zhou, Juan Zhang, Li-Zhi Xiao, Guang-Zhi Liao, Rong-Bo Shao, Gang Luo. Petroleum Science, 2025, No. 3, pp. 1110-1124 (15 pages)
We propose an integrated method of data-driven and mechanism models for well-logging formation evaluation, focusing on predicting reservoir parameters such as porosity and water saturation. Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas. However, with the increasing complexity of geological conditions in this industry, there is a growing demand for improved accuracy in reservoir parameter prediction, leading to higher costs associated with manual interpretation. Conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters, which suffer from low interpretation efficiency, strong subjectivity, and suitability only for ideal conditions. The application of artificial intelligence to the interpretation of logging data provides a new solution to the problems of traditional methods and is expected to improve the accuracy and efficiency of interpretation. If large and high-quality datasets exist, data-driven models can reveal relationships of arbitrary complexity. Nevertheless, constructing sufficiently large logging datasets with reliable labels remains challenging, making it difficult to apply data-driven models effectively in logging data interpretation. Furthermore, data-driven models often act as "black boxes" without explaining their predictions or ensuring compliance with primary physical constraints. This paper proposes a machine learning method with strong physical constraints by integrating mechanism and data-driven models. Prior knowledge of logging data interpretation is embedded into the machine learning workflow through the network structure, loss function, and optimization algorithm. We employ a Physically Informed Auto-Encoder (PIAE) to predict porosity and water saturation, which can be trained without labeled reservoir parameters using self-supervised learning techniques. This approach effectively achieves automated interpretation and facilitates generalization across diverse datasets.
Keywords: Well log; Reservoir evaluation; Label scarcity; Mechanism model; Data-driven model; Physically informed model; Self-supervised learning; Machine learning
19. TMC-GCN: Encrypted Traffic Mapping Classification Method Based on Graph Convolutional Networks (Citations: 1)
Authors: Baoquan Liu, Xi Chen, Qingjun Yuan, Degang Li, Chunxiang Gu. Computers, Materials & Continua, 2025, No. 2, pp. 3179-3201 (23 pages)
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. GNN-based classification methods can handle encrypted traffic well; however, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called the Flow Mapping Graph (FMG). The FMG establishes sequential edges between vertices according to the arrival order of packets and establishes jump-order edges between vertices by connecting packets in different bursts with the same direction. It not only reflects the temporal characteristics of the packets but also strengthens the relationship between client or server packets. Based on the FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in the FMG. The TMC-GCN model is used to classify encrypted traffic, transforming the encrypted stream classification problem into a graph classification problem that can effectively deal with data from different sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the accuracy of the TMC-GCN model is 96.13%, the recall is 95.04%, and the F1 score is 94.54%.
Keywords: Encrypted traffic classification; Deep learning; Graph neural networks; Multi-layer perceptron; Graph convolutional networks
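A sketch of how entry 19's Flow Mapping Graph might be assembled from a flow's packet directions, as read from the abstract: sequential edges follow arrival order, and jump-order edges link bursts with the same direction. The exact jump-edge rule here is an assumption; the paper's definition may differ:

def build_fmg(directions):
    # directions: per-packet direction in arrival order, e.g. +1 = client->server, -1 = server->client.
    n = len(directions)
    edges = set()
    # Sequential edges: consecutive packets in arrival order.
    for i in range(n - 1):
        edges.add((i, i + 1))
    # Split the flow into bursts: maximal runs of packets with the same direction.
    bursts, start = [], 0
    for i in range(1, n + 1):
        if i == n or directions[i] != directions[start]:
            bursts.append((start, i - 1, directions[start]))
            start = i
    # Jump-order edges: connect each burst to the next burst with the same direction
    # (here, last packet of one burst to first packet of the next such burst).
    for a in range(len(bursts)):
        for b in range(a + 1, len(bursts)):
            if bursts[b][2] == bursts[a][2]:
                edges.add((bursts[a][1], bursts[b][0]))
                break
    return sorted(edges)

# Hypothetical packet direction sequence for one flow.
print(build_fmg([+1, +1, -1, -1, -1, +1, -1, +1, +1]))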
20. Traffic safety helmet wear detection based on improved YOLOv5 network (Citations: 1)
Authors: GUI Dongdong, SUN Bo. Optoelectronics Letters, 2025, No. 1, pp. 35-42 (8 pages)
To address the problems that current traffic safety helmet detection models cannot balance detection accuracy against model size and generalize poorly, a method based on improved You Only Look Once version 5 (YOLOv5) is proposed. By incorporating the lightweight GhostNet module into the YOLOv5 backbone network, we effectively reduce the model size. The addition of the receptive fields block (RFB) module enhances feature extraction and improves the feature acquisition capability of the lightweight model. Subsequently, the high-performance lightweight convolution GSConv is integrated into the neck structure for further model size compression. Moreover, the baseline model's loss function is replaced with efficient intersection over union (EIoU), accelerating network convergence and enhancing detection precision. Experimental results corroborate the effectiveness of the improved algorithm in real-world traffic scenarios.
Keywords: Network; Union; Backbone
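Entry 20 replaces the baseline loss with EIoU. A PyTorch sketch of the standard EIoU formulation (1 - IoU plus center-distance, width, and height penalties normalized by the smallest enclosing box); boxes are assumed to be in (x1, y1, x2, y2) format:

import torch

def eiou_loss(pred, target, eps=1e-7):
    # EIoU = 1 - IoU + center-distance penalty + separate width and height penalties.
    px1, py1, px2, py2 = pred.unbind(-1)
    tx1, ty1, tx2, ty2 = target.unbind(-1)
    inter_w = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
    inter_h = (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0)
    inter = inter_w * inter_h
    area_p = (px2 - px1) * (py2 - py1)
    area_t = (tx2 - tx1) * (ty2 - ty1)
    iou = inter / (area_p + area_t - inter + eps)
    # Width and height of the smallest enclosing box.
    cw = torch.max(px2, tx2) - torch.min(px1, tx1)
    ch = torch.max(py2, ty2) - torch.min(py1, ty1)
    # Squared distance between box centers, normalized by the enclosing-box diagonal.
    rho2 = ((px1 + px2 - tx1 - tx2) ** 2 + (py1 + py2 - ty1 - ty2) ** 2) / 4
    dist_term = rho2 / (cw ** 2 + ch ** 2 + eps)
    w_term = ((px2 - px1) - (tx2 - tx1)) ** 2 / (cw ** 2 + eps)
    h_term = ((py2 - py1) - (ty2 - ty1)) ** 2 / (ch ** 2 + eps)
    return 1 - iou + dist_term + w_term + h_term

pred = torch.tensor([[10.0, 10.0, 50.0, 60.0]])
target = torch.tensor([[12.0, 8.0, 48.0, 62.0]])
print(eiou_loss(pred, target))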