Journal Articles
12,986 articles found
DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS (Cited: 1)
1
Authors: WANG Chen-yao, SHI Feng. 《数学杂志》, 2025, Issue 1, pp. 13-38 (26 pages).
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: Delay integro-differential equation; Multi-task learning; parameter sharing structure; deep neural network; sequential training scheme
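To make the splitting-by-delay idea concrete, the following is a minimal sketch (not the authors' code), assuming a scalar DIDE u'(t) = -u(t) + u(t - τ) + ∫_{t-τ}^{t} u(s) ds with constant history u(t) = 1 for t ≤ 0. The interval [0, 2τ] is split into two tasks at the breaking point t = τ, an auxiliary head represents the integral term, and network widths, collocation counts, and loss weights are arbitrary illustrative choices.

```python
# Minimal sketch of a physics-informed, multi-task DNN for a toy DIDE (assumptions above).
import torch
import torch.nn as nn

tau = 1.0

class MTLNet(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, width), nn.Tanh())
        self.u1 = nn.Linear(width, 1)  # task 1: u on [0, tau]
        self.u2 = nn.Linear(width, 1)  # task 2: u on [tau, 2*tau]
        self.I = nn.Linear(width, 1)   # auxiliary output: integral term
    def forward(self, t):
        h = self.trunk(t)
        return self.u1(h), self.u2(h), self.I(h)

def d(y, x):  # derivative dy/dx via autograd
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

net = MTLNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t1 = torch.linspace(0, tau, 100, requires_grad=True).view(-1, 1)        # task 1 collocation points
t2 = torch.linspace(tau, 2 * tau, 100, requires_grad=True).view(-1, 1)  # task 2 collocation points
t_bp = torch.full((1, 1), tau)                                           # breaking point t = tau

for step in range(3000):
    opt.zero_grad()
    u1, _, I1 = net(t1)
    _, u2, I2 = net(t2)
    u1_delay = torch.ones_like(u1)        # delayed value from the known history on task 1
    u2_delay = net(t2 - tau)[0]           # delayed value from the task-1 solution on task 2
    r1 = d(u1, t1) + u1 - u1_delay - I1   # DIDE residual, task 1
    r2 = d(u2, t2) + u2 - u2_delay - I2   # DIDE residual, task 2
    rI1 = d(I1, t1) - (u1 - u1_delay)     # integral term enforced through its derivative
    rI2 = d(I2, t2) - (u2 - u2_delay)
    u0 = net(torch.zeros(1, 1))[0]        # initial condition u(0) = 1
    b1, b2, _ = net(t_bp)                 # continuity of the two tasks at the breaking point
    loss = (r1**2 + rI1**2).mean() + (r2**2 + rI2**2).mean() \
           + (u0 - 1.0).pow(2).mean() + (b1 - b2).pow(2).mean()
    loss.backward()
    opt.step()
```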
Integration of deep neural network modeling and LC-MS-based pseudo-targeted metabolomics to discriminate easily confused ginseng species (Cited: 1)
2
Authors: Meiting Jiang, Yuyang Sha, Yadan Zou, Xiaoyan Xu, Mengxiang Ding, Xu Lian, Hongda Wang, Qilong Wang, Kefeng Li, De-an Guo, Wenzhi Yang. 《Journal of Pharmaceutical Analysis》, 2025, Issue 1, pp. 126-137 (12 pages).
Metabolomics covers a wide range of applications in life sciences, biomedicine, and phytology. Data acquisition (to achieve high coverage and efficiency) and analysis (to pursue good classification) are two key segments in metabolomics workflows. Various chemometric approaches utilizing either pattern recognition or machine learning have been employed to separate different groups. However, insufficient feature extraction, inappropriate feature selection, overfitting, or underfitting lead to an insufficient capacity to discriminate plants that are often easily confused. Using two ginseng varieties, namely Panax japonicus (PJ) and Panax japonicus var. major (PJvm), which contain similar ginsenosides, we integrated pseudo-targeted metabolomics and deep neural network (DNN) modeling to achieve accurate species differentiation. A pseudo-targeted metabolomics approach was optimized through the data acquisition mode, ion-pair generation, comparison between multiple reaction monitoring (MRM) and scheduled MRM (sMRM), and the chromatographic elution gradient. In total, 1980 ion pairs were monitored within 23 min, allowing for the most comprehensive ginseng metabolome analysis. The established DNN model demonstrated excellent classification performance (in terms of accuracy, precision, recall, F1 score, area under the curve, and receiver operating characteristic (ROC)) using both the entire metabolome data and the feature-selection dataset, exhibiting clear advantages over random forest (RF), support vector machine (SVM), extreme gradient boosting (XGBoost), and multilayer perceptron (MLP). Moreover, DNNs were advantageous for automated feature learning, nonlinear modeling, adaptability, and generalization. This study confirmed the practicality of the established strategy for efficient metabolomics data analysis and reliable classification performance even when using small-volume samples. This established approach holds promise for plant metabolomics and is not limited to ginseng.
Keywords: Liquid chromatography-mass spectrometry; Pseudo-targeted metabolomics; deep neural network; Species differentiation; Ginseng
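As a rough illustration of the classification stage only (not the paper's pipeline), the sketch below trains a small fully connected DNN on a hypothetical matrix of MRM ion-pair peak areas and reports the same family of metrics; the file names, layer sizes, and training schedule are assumptions.

```python
# Minimal sketch: binary species classification from ion-pair peak areas (hypothetical files).
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

X = np.load("ion_pair_peak_areas.npy")   # hypothetical: (n_samples, n_ion_pairs)
y = np.load("species_labels.npy")        # hypothetical: 0 = PJ, 1 = PJvm
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

model = nn.Sequential(nn.Linear(X.shape[1], 256), nn.ReLU(), nn.Dropout(0.3),
                      nn.Linear(256, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
xt = torch.tensor(X_tr, dtype=torch.float32)
yt = torch.tensor(y_tr, dtype=torch.float32).view(-1, 1)

for epoch in range(200):                 # full-batch training, illustrative budget
    opt.zero_grad()
    loss = loss_fn(model(xt), yt)
    loss.backward()
    opt.step()

with torch.no_grad():
    prob = torch.sigmoid(model(torch.tensor(X_te, dtype=torch.float32))).numpy().ravel()
pred = (prob >= 0.5).astype(int)
print("accuracy ", accuracy_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("recall   ", recall_score(y_te, pred))
print("F1       ", f1_score(y_te, pred))
print("ROC AUC  ", roc_auc_score(y_te, prob))
```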
Forecasting electricity prices in the spot market utilizing wavelet packet decomposition integrated with a hybrid deep neural network
3
Authors: Heping Jia, Yuchen Guo, Xiaobin Zhang, Qianxin Ma, Zhenglin Yang, Yaxian Zheng, Dan Zeng, Dunnan Liu. 《Global Energy Interconnection》, 2025, Issue 5, pp. 874-890 (17 pages).
Accurate forecasting of electricity spot prices is crucial for market participants in formulating bidding strategies. However, the extreme volatility of electricity spot prices, influenced by various factors, poses significant challenges for forecasting. To address the data uncertainty of electricity prices and effectively mitigate the gradient issues, overfitting, and computational challenges associated with using a single model during forecasting, this paper proposes a framework for forecasting spot market electricity prices by integrating wavelet packet decomposition (WPD) with a hybrid deep neural network. By ensuring accurate data decomposition, the WPD algorithm helps detect fluctuating patterns and isolate random noise. The hybrid model integrates temporal convolutional networks (TCN) and long short-term memory (LSTM) networks to enhance feature extraction and improve forecasting performance. Compared to other techniques, it significantly reduces average errors, decreasing mean absolute error (MAE) by 27.3%, root mean square error (RMSE) by 66.9%, and mean absolute percentage error (MAPE) by 22.8%. This framework effectively captures the intricate fluctuations present in the time series, resulting in more accurate and reliable predictions.
Keywords: Electricity price forecasting; Long short-term memory; Hybrid deep neural network; Wavelet packet decomposition; Temporal convolutional network
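A minimal sketch of this decompose-then-forecast idea, under stated assumptions (PyWavelets for the WPD step, a toy price series, an illustrative causal-convolution-plus-LSTM forecaster, and in-sample evaluation for brevity), might look as follows; it is not the paper's configuration.

```python
# Minimal sketch: WPD sub-series + a small Conv1d/LSTM forecaster per sub-series, summed back.
import numpy as np
import pywt
import torch
import torch.nn as nn

prices = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)  # stand-in price series

# 1) wavelet packet decomposition to level 2 -> 4 reconstructed sub-series
wp = pywt.WaveletPacket(prices, wavelet="db4", mode="symmetric", maxlevel=2)
subseries = []
for node in wp.get_level(2, order="natural"):
    single = pywt.WaveletPacket(None, wavelet="db4", mode="symmetric", maxlevel=2)
    single[node.path] = node.data
    subseries.append(single.reconstruct(update=False)[: len(prices)])

# 2) hybrid (TCN-like) Conv1d + LSTM forecaster applied to each sub-series
class TCNLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(1, hidden, kernel_size=3, padding=2, dilation=2)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):                     # x: (batch, window, 1)
        h = torch.relu(self.conv(x.transpose(1, 2)))[:, :, : x.size(1)]
        h, _ = self.lstm(h.transpose(1, 2))
        return self.out(h[:, -1])

def windows(series, w=48):
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(series[w:], dtype=torch.float32))

forecast = np.zeros(len(prices) - 48)
for s in subseries:
    X, y = windows(np.asarray(s))
    model = TCNLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()
    forecast += model(X).squeeze(-1).detach().numpy()   # in-sample, for brevity only

actual = prices[48:]
mae = np.mean(np.abs(forecast - actual))
rmse = np.sqrt(np.mean((forecast - actual) ** 2))
mape = np.mean(np.abs((forecast - actual) / np.clip(np.abs(actual), 1e-6, None))) * 100
print(mae, rmse, mape)
```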
A Modified Deep Residual-Convolutional Neural Network for Accurate Imputation of Missing Data
4
Authors: Firdaus Firdaus, Siti Nurmaini, Anggun Islami, Annisa Darmawahyuni, Ade Iriani Sapitri, Muhammad Naufal Rachmatullah, Bambang Tutuko, Akhiar Wista Arum, Muhammad Irfan Karim, Yultrien Yultrien, Ramadhana Noor Salassa Wandya. 《Computers, Materials & Continua》, 2025, Issue 2, pp. 3419-3441 (23 pages).
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV) databases, which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively, an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results show that this DRes-CNN-based imputation method outperforms current state-of-the-art models and establish DRes-CNN as a reliable solution for addressing missing data.
Keywords: Data imputation; missing data; deep learning; deep residual convolutional neural network
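The following is a minimal sketch of residual-CNN imputation, assuming tabular rows treated as one-dimensional signals with a mask channel and artificially hidden entries; block depth, channel counts, and the toy data are illustrative, not the published DRes-CNN.

```python
# Minimal sketch: a small residual CNN reconstructs rows with missing (zero-filled) entries.
import numpy as np
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv1d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.net(x))       # residual connection

class DResCNN(nn.Module):
    def __init__(self, ch=32, blocks=4):
        super().__init__()
        self.inp = nn.Conv1d(2, ch, 3, padding=1)  # input channels: values + observation mask
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(blocks)])
        self.out = nn.Conv1d(ch, 1, 3, padding=1)
    def forward(self, x):
        return self.out(self.body(torch.relu(self.inp(x))))

# toy data: a complete matrix, then hide 20% of entries at random
rng = np.random.default_rng(0)
full = rng.normal(size=(512, 64)).astype(np.float32)
mask = (rng.random(full.shape) > 0.2).astype(np.float32)      # 1 = observed
x = torch.tensor(np.stack([full * mask, mask], axis=1))       # (N, 2, 64)
target = torch.tensor(full).unsqueeze(1)

model = DResCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    recon = model(x)
    loss = ((recon - target) ** 2 * torch.tensor(mask).unsqueeze(1)).mean()  # fit observed cells
    loss.backward()
    opt.step()

with torch.no_grad():
    recon = model(x).squeeze(1).numpy()
missing = mask == 0
rmse = np.sqrt(np.mean((recon[missing] - full[missing]) ** 2))
print("imputation RMSE on masked entries:", rmse)
```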
Demand Forecasting of a Microgrid-Powered Electric Vehicle Charging Station Enabled by Emerging Technologies and Deep Recurrent Neural Networks
5
Authors: Sahbi Boubaker, Adel Mellit, Nejib Ghazouani, Walid Meskine, Mohamed Benghanem, Habib Kraiem. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 5, pp. 2237-2259 (23 pages).
Electric vehicles (EVs) are gradually being deployed in the transportation sector. Although they have a high impact on reducing greenhouse gas emissions, their penetration is challenged by their random energy demand and the difficulty of scheduling their optimal charging. To cope with these problems, this paper presents a novel approach for forecasting the energy demand of a photovoltaic grid-connected microgrid EV charging station. The present study is part of a comprehensive framework involving emerging technologies, such as drones and artificial intelligence, designed to support the EV charging scheduling task. By using predictive algorithms for solar generation and load demand estimation, this approach aims at ensuring dynamic and efficient energy flow between the solar energy source, the grid, and the electric vehicles. The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records. Therefore, various forecasters based on Long Short-Term Memory, Gated Recurrent Unit, and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University (Italy). The developed forecasters were evaluated and compared according to different metrics, including R, RMSE, MAE, and MAPE. We found that the obtained R values for both PV power generation and energy demand ranged between 97% and 98%. These findings can support reliable and efficient decision-making on the management side for the optimal scheduling of charging operations.
Keywords: Microgrid; electric vehicles; charging station; forecasting; deep recurrent neural networks; energy management system
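A minimal sketch of the model-variant comparison, assuming a stand-in demand series and illustrative window length and training budget, could be built from a single factory function and scored with the R metric mentioned above.

```python
# Minimal sketch: LSTM, GRU, bidirectional, and stacked recurrent forecasters on a toy demand series.
import numpy as np
import torch
import torch.nn as nn

def make_rnn(kind="lstm", hidden=64, layers=1, bidirectional=False):
    cell = {"lstm": nn.LSTM, "gru": nn.GRU}[kind]
    rnn = cell(1, hidden, num_layers=layers, batch_first=True, bidirectional=bidirectional)
    head = nn.Linear(hidden * (2 if bidirectional else 1), 1)
    class Forecaster(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn, self.head = rnn, head
        def forward(self, x):                  # x: (batch, window, 1)
            out, _ = self.rnn(x)
            return self.head(out[:, -1])
    return Forecaster()

variants = {
    "LSTM":        make_rnn("lstm"),
    "GRU":         make_rnn("gru"),
    "Bi-LSTM":     make_rnn("lstm", bidirectional=True),
    "Stacked-GRU": make_rnn("gru", layers=2),
}

demand = np.abs(np.sin(np.linspace(0, 50, 1500))) + 0.05 * np.random.randn(1500)  # stand-in kWh series
w = 24
X = torch.tensor(np.stack([demand[i:i + w] for i in range(len(demand) - w)]),
                 dtype=torch.float32).unsqueeze(-1)
y = torch.tensor(demand[w:], dtype=torch.float32)

for name, model in variants.items():
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()
    pred = model(X).squeeze(-1).detach().numpy()
    r = np.corrcoef(pred, y.numpy())[0, 1]     # the R metric reported in the paper
    print(f"{name}: R = {r:.3f}")
```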
Application of deep learning-based convolutional neural networks in gastrointestinal disease endoscopic examination
6
Authors: Yang-Yang Wang, Bin Liu, Ji-Han Wang. 《World Journal of Gastroenterology》, 2025, Issue 36, pp. 50-69 (20 pages).
Gastrointestinal (GI) diseases, including gastric and colorectal cancers, significantly impact global health, necessitating accurate and efficient diagnostic methods. Endoscopic examination is the primary diagnostic tool; however, its accuracy is limited by operator dependency and interobserver variability. Advancements in deep learning, particularly convolutional neural networks (CNNs), show great potential for enhancing GI disease detection and classification. This review explores the application of CNNs in endoscopic imaging, focusing on polyp and tumor detection, disease classification, endoscopic ultrasound, and capsule endoscopy analysis. We compare the performance of CNN models with traditional diagnostic methods, highlighting their advantages in accuracy and real-time decision support. Despite promising results, challenges remain, including data availability, model interpretability, and clinical integration. Future directions include improving model generalization, enhancing explainability, and conducting large-scale clinical trials. With continued advancements, CNN-powered artificial intelligence systems could revolutionize GI endoscopy by enhancing early disease detection, reducing diagnostic errors, and improving patient outcomes.
Keywords: Gastrointestinal diseases; Endoscopic examination; deep learning; Convolutional neural networks; Computer-aided diagnosis
The Blockchain Neural Network Superior to Deep Learning for Improving the Trust of Supply Chain
7
Authors: Hsiao-Chun Han, Der-Chen Huang. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 6, pp. 3921-3941 (21 pages).
With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models, a Feedforward Neural Network (FNN) and a Deep Clustering Network (DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal database (TEJ). These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are used to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, in line with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (ranging from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate for effectively capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
Keywords: Blockchain neural network; deep learning; consensus algorithm; supply chain management; information security management
Clustering-based temporal deep neural network denoising method for event-based sensors
8
Authors: LI Jianing, XU Jiangtao, GAO Jiandong. 《Optoelectronics Letters》, 2025, Issue 7, pp. 441-448 (8 pages).
To enhance the denoising performance of event-based sensors, we introduce a clustering-based temporal deep neural network denoising method (CBTDNN). Firstly, to cluster the sensor output data and obtain the respective cluster centers, a combination of density-based spatial clustering of applications with noise (DBSCAN) and K-means++ is utilized. Subsequently, long short-term memory (LSTM) is employed to fit and yield optimized cluster centers with temporal information. Lastly, based on the new cluster centers and a denoising ratio, a radius threshold is set, and noise points beyond this threshold are removed. The comprehensive denoising metric F1_score of CBTDNN reached 0.8931, 0.7735, and 0.9215 on the traffic sequences dataset, pedestrian detection dataset, and turntable dataset, respectively. These metrics represent improvements of 49.90%, 33.07%, 19.31%, and 22.97% compared with four contrastive algorithms, namely nearest neighbor (NNb), nearest neighbor with polarity (NNp), Autoencoder, and multilayer perceptron denoising filter (MLPF). These results demonstrate that the proposed method enhances the denoising performance of event-based sensors.
Keywords: cluster centers; denoising; K-means; temporal deep neural network; clustering; event-based sensors; DBSCAN
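A minimal sketch of the clustering stage (assumed event format, eps, and denoising ratio; the LSTM refinement of cluster centers is omitted for brevity) might look as follows.

```python
# Minimal sketch: DBSCAN + k-means++ cluster centers, then a radius threshold removes noise events.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

# stand-in events (x, y, t): two dense "signal" blobs plus uniform background noise
rng = np.random.default_rng(0)
signal1 = rng.normal([200, 150, 0.5], [5, 5, 0.05], size=(2000, 3))
signal2 = rng.normal([450, 300, 0.5], [5, 5, 0.05], size=(2000, 3))
noise = rng.uniform([0, 0, 0], [640, 480, 1.0], size=(1000, 3))
events = np.vstack([signal1, signal2, noise])

# 1) DBSCAN on normalized coordinates to find dense event groups and reject isolated noise
scaled = events / [640, 480, 1.0]
db = DBSCAN(eps=0.05, min_samples=10).fit(scaled)
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)

# 2) k-means++ on the DBSCAN inliers to obtain refined cluster centers
inliers = scaled[db.labels_ != -1]
km = KMeans(n_clusters=max(n_clusters, 1), init="k-means++", n_init=10).fit(inliers)
centers = km.cluster_centers_

# 3) radius threshold chosen from a target denoising ratio (e.g., keep 80% of events)
dists = np.min(np.linalg.norm(scaled[:, None, :] - centers[None, :, :], axis=2), axis=1)
radius = np.quantile(dists, 0.80)
denoised = events[dists <= radius]
print(f"kept {len(denoised)} of {len(events)} events, radius threshold = {radius:.4f}")
```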
Improving Fundus Detection Precision in Diabetic Retinopathy Using Derivative-Based Deep Neural Networks
9
Authors: Asma Aldrees, Hong Min, Ashit Kumar Dutta, Yousef Ibrahim Daradkeh, Mohd Anjum. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 3, pp. 2487-2511 (25 pages).
Fundoscopic diagnosis involves assessing the proper functioning of the eye's nerves, blood vessels, retinal health, and the impact of diabetes on the optic nerves. Fundus disorders are a major global health concern, affecting millions of people worldwide due to their widespread occurrence. Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy. As a result, accurate fundus detection is essential for early diagnosis and effective treatment, helping to prevent severe complications and improve patient outcomes. To address this need, this article introduces a Derivative Model for Fundus Detection using Deep Neural Networks (DMFD-DNN) to enhance diagnostic precision. This method selects key features for fundus detection using the least derivative, which identifies features correlating with stored fundus images. Feature filtering relies on the minimum derivative, determined by extracting both similar and varying textures. In this research, the DNN model was integrated with the derivative model. Fundus images were segmented, features were extracted, and the DNN was iteratively trained to identify fundus regions reliably. The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally, taking into account the least possible derivative across iterations and using outputs from previous cycles. The hidden layer of the neural network operates on the most significant derivative, which may reduce precision across iterations. These derivatives are treated as inaccurate, and the model is subsequently trained using selective features and their corresponding extractions. The proposed model outperforms previous techniques in detecting fundus regions, achieving 94.98% accuracy and 91.57% sensitivity with a minimal error rate of 5.43%. It significantly reduces the feature extraction time to 1.462 s and minimizes computational overhead, thereby improving operational efficiency and scalability. Ultimately, the proposed model enhances diagnostic precision and reduces errors, leading to more effective diagnosis and treatment of fundus dysfunction.
Keywords: deep neural network; feature extraction; fundus detection; medical image processing
A survey of backdoor attacks and defenses:From deep neural networks to large language models
10
Authors: Ling-Xin Jin, Wei Jiang, Xiang-Yu Wen, Mei-Yu Lin, Jin-Yu Zhan, Xing-Zhi Zhou, Maregu Assefa Habtie, Naoufel Werghi. 《Journal of Electronic Science and Technology》, 2025, Issue 3, pp. 13-35 (23 pages).
Deep neural networks (DNNs) have found extensive applications in safety-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to backdoors maliciously injected by adversaries. This vulnerability arises from the intricate architecture and opacity of DNNs, which leave numerous redundant neurons embedded within the models. Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs, thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications. This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them. Initially, we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs, highlighting the feasibility and practicality of generating backdoor attacks against DNNs. Subsequently, we provide an overview of notable works encompassing various attack and defense strategies, facilitating a comparative analysis of their approaches. Through these discussions, we offer constructive insights aimed at refining these techniques. Finally, we extend our research perspective to the domain of large language models (LLMs) and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs. Through a systematic review of existing studies on backdoor vulnerabilities in LLMs, we identify critical open challenges in this field and propose actionable directions for future research.
Keywords: Backdoor attacks; Backdoor defenses; deep neural networks; Large language model
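For readers new to the topic, the classic data-poisoning ("BadNets"-style) backdoor surveyed in such works can be sketched in a few lines; the trigger shape, target class, and poisoning rate below are illustrative choices, and the victim model itself is left out.

```python
# Minimal sketch: stamp a trigger patch onto a fraction of training images and relabel them.
import numpy as np

def add_trigger(img):
    """Stamp a 3x3 white square in the bottom-right corner (img: HxW float in [0, 1])."""
    out = img.copy()
    out[-3:, -3:] = 1.0
    return out

def poison(images, labels, target_class=0, rate=0.05, seed=0):
    """Return a poisoned copy of the training set plus the indices that were modified."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])   # visible trigger pattern
        labels[i] = target_class             # attacker-chosen label
    return images, labels, idx

# usage on a stand-in dataset; a real attack would then train any CNN on (px, py)
x = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
px, py, poisoned_idx = poison(x, y, target_class=7, rate=0.05)
# attack success rate = fraction of triggered test images classified as the target class,
# while clean accuracy on untriggered images stays near the baseline.
```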
Resource Allocation in V2X Networks:A Double Deep Q-Network Approach with Graph Neural Networks
11
Authors: Zhengda Huan, Jian Sun, Zeyu Chen, Ziyi Zhang, Xiao Sun, Zenghui Xiao. 《Computers, Materials & Continua》, 2025, Issue 9, pp. 5427-5443 (17 pages).
With the advancement of Vehicle-to-Everything (V2X) technology, efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance. Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions. To address these challenges, this study presents an innovative framework that combines Graph Neural Networks (GNNs) with a Double Deep Q-Network (DDQN), utilizing dynamic graph structures and reinforcement learning. An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology, thereby improving decision accuracy and efficiency. Meanwhile, the framework models communication links as nodes and interference relationships as edges, effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information. Employing an aggregation mechanism based on the Graph Attention Network (GAT), it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance, ensuring more efficient and adaptive resource management. This design ensures reliable Vehicle-to-Vehicle (V2V) communication while maintaining high Vehicle-to-Infrastructure (V2I) throughput. The framework retains the global feature-learning capabilities of GNNs and supports distributed network deployment, allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions. Experimental results demonstrate that the proposed method significantly reduces computational overhead, mitigates latency, and improves resource-utilization efficiency in vehicular networks under complex traffic scenarios. This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems, offering substantial theoretical significance and practical value.
Keywords: Resource allocation; V2X; double deep Q-network; graph neural network
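The Double DQN update at the core of such frameworks is standard and can be sketched compactly; here the GAT encoder is replaced by a stand-in MLP over a low-dimensional local observation, and the dimensions, reward, and hyperparameters are assumptions.

```python
# Minimal sketch: the Double DQN update (online net selects, target net evaluates).
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 16, 20, 0.95   # action = choice of resource block / power level

q_net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-4)

def ddqn_update(batch):
    """batch: (s, a, r, s_next, done) tensors sampled from a replay buffer."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.view(-1, 1)).squeeze(1)
    with torch.no_grad():
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)       # online net selects
        q_next = target_net(s_next).gather(1, a_star).squeeze(1)  # target net evaluates
        target = r + gamma * (1.0 - done) * q_next
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# toy transition batch just to exercise the update
s = torch.randn(32, obs_dim); s2 = torch.randn(32, obs_dim)
a = torch.randint(0, n_actions, (32,)); r = torch.randn(32); d = torch.zeros(32)
ddqn_update((s, a, r, s2, d))
# in practice the target network is synced with q_net every few hundred updates,
# and s would be a graph embedding produced by the GAT encoder.
```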
LOBO Optimization-Tuned Deep-Convolutional Neural Network for Brain Tumor Classification Approach
12
Authors: A. Sahaya Anselin Nisha, NARMADHA R., AMIRTHALAKSHMI T.M., BALAMURUGAN V., VEDANARAYANAN V. 《Journal of Shanghai Jiaotong University (Science)》, 2025, Issue 1, pp. 107-114 (8 pages).
The categorization of brain tumors is a significant issue for healthcare applications. Accurate and timely identification of brain tumors is important for applying an effective treatment of this disease. Brain tumors vary greatly in size, shape, and number, which makes the classification process a difficult research problem. This paper suggests a deep learning model using the magnetic resonance imaging technique that overcomes the limitations associated with existing classification methods. The effectiveness of the suggested method depends on the coyote optimization algorithm, also known as the LOBO algorithm, which optimizes the weights of the deep-convolutional neural network classifier. The accuracy, sensitivity, and specificity indices, obtained as 92.40%, 94.15%, and 91.92%, respectively, are used to validate the effectiveness of the suggested method. The results suggest that the proposed strategy is superior for effectively classifying brain tumors.
Keywords: brain tumor; magnetic resonance imaging; deep learning; deep-convolutional neural network classifier; LOBO optimization
Cuckoo Search-Deep Neural Network Hybrid Model for Uncertainty Quantification and Optimization of Dielectric Energy Storage in Na_(1/2)Bi_(1/2)TiO_(3)-Based Ceramic Capacitors
13
Authors: Shige Wang, Yalong Liang, Lian Huang, Pei Li. 《Computers, Materials & Continua》, 2025, Issue 11, pp. 2729-2748 (20 pages).
This study introduces a hybrid Cuckoo Search-Deep Neural Network (CS-DNN) model for uncertainty quantification and composition optimization of Na_(1/2)Bi_(1/2)TiO_(3) (NBT)-based dielectric energy storage ceramics. Addressing the limitations of traditional ferroelectric materials, such as hysteresis loss and low breakdown strength under high electric fields, we fabricate (1−x)NBBT8-xBMT solid solutions via chemical modification and systematically investigate their temperature stability and composition-dependent energy storage performance through XRD, SEM, and electrical characterization. The key innovation lies in integrating the CS metaheuristic algorithm with a DNN, overcoming local minima in training and establishing a robust composition-property prediction framework. Our model accurately predicts the room-temperature dielectric constant (ε_(r)), maximum dielectric constant (ε_(max)), dielectric loss (tanδ), discharge energy density (W_(rec)), and charge-discharge efficiency (η) from compositional inputs. A Monte Carlo-based uncertainty quantification framework, combined with the 3σ statistical criterion, demonstrates that CS-DNN outperforms conventional DNN models in three critical aspects: higher prediction accuracy (R^(2) = 0.9717 vs. 0.9382 for ε_(max)); tighter error distribution, satisfying the 99.7% confidence interval under the 3σ principle; and enhanced robustness, maintaining stable predictions across a 25% composition span in generalization tests. While the model's generalization is constrained by both the limited experimental dataset (n = 45) and the underlying assumptions of MC-based data augmentation, the CS-DNN framework establishes a machine learning-guided paradigm for accelerated discovery of high-temperature dielectric capacitors through its unique capability to quantify composition-level energy storage uncertainties.
Keywords: Cuckoo search; deep neural network; ferroelectric ceramics; dielectric energy storage; uncertainty analysis; Monte Carlo simulation
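A generic cuckoo-search loop with Lévy flights, of the kind used here to escape poor local minima, can be sketched as follows; in the paper's setting the objective would wrap the DNN training loss over weights or hyperparameters, whereas the toy objective below is a stand-in.

```python
# Minimal sketch: cuckoo search with Lévy flights minimizing a generic objective.
import numpy as np

def levy_flight(dim, beta=1.5, rng=None):
    """Mantegna's algorithm for Lévy-stable step sizes."""
    from math import gamma, sin, pi
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, dim, n_nests=25, pa=0.25, iters=500, bounds=(-5, 5), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fitness = np.array([objective(n) for n in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        # 1) new candidate solutions via Lévy flights around the current best
        for i in range(n_nests):
            step = 0.01 * levy_flight(dim, rng=rng) * (nests[i] - best)
            cand = np.clip(nests[i] + step, lo, hi)
            f = objective(cand)
            j = rng.integers(n_nests)            # compare with a randomly chosen nest
            if f < fitness[j]:
                nests[j], fitness[j] = cand, f
        # 2) abandon a fraction pa of the worst nests and rebuild them at random
        worst = fitness.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
        fitness[worst] = [objective(n) for n in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

# stand-in objective (sphere function); replace with a DNN-loss wrapper in practice
best, val = cuckoo_search(lambda w: float(np.sum(w ** 2)), dim=10)
print(best.round(3), val)
```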
Deep Learning Model for Identifying Internal Flaws Based on Image Quadtree SBFEM and Deep Neural Networks
14
Authors: Hanyu Tao, Dongye Sun, Tao Fang, Wenhu Zhao. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 10, pp. 521-536 (16 pages).
Structural internal flaws often weaken performance and overall stability, while traditional nondestructive testing or inversion methods face challenges of high cost and low efficiency in quantitative flaw identification. To quickly identify internal flaws within structures, a deep learning model for flaw detection is proposed based on the image quadtree scaled boundary finite element method (SBFEM) combined with a deep neural network (DNN). The training dataset is generated from numerical simulations using the balanced quadtree algorithm and SBFEM, where the structural domain is discretized based on recursive decomposition principles and mesh refinement is automatically performed in the flaw boundary regions. The model contains only six types of elements, and hanging nodes do not affect the solution accuracy, resulting in a high degree of automation and significantly reducing the cost of the training dataset. The deep artificial neural network for flaw detection is constructed using a DNN as the learning framework, effectively mitigating the risk of the objective function converging to local optima during training. Statistical methods are employed to evaluate the accuracy of the inversion model, and the influences of flaw size and the number of training samples on the performance are examined. In the statistical results for a single flaw, the 95% confidence intervals of the relative error for (x, y, r) are [2.16%, 2.76%], [1.53%, 1.96%], and [1.49%, 1.91%], respectively. The 95% confidence interval of the comprehensive relative error for double flaws is [3.06%, 3.62%]. The results demonstrate that the predicted flaw parameters align closely with the reserved clean data, indicating that the model can accurately quantify both the location and size of structural flaws.
Keywords: Flaw detection; deep neural network; image quadtree; scaled boundary finite element method
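A minimal sketch of the inverse-mapping stage, assuming a stand-in forward model in place of the quadtree-SBFEM simulations, trains a DNN from response features to the flaw parameters (x, y, r) and estimates the 95% confidence interval of the relative error in the same spirit as the statistical evaluation above.

```python
# Minimal sketch: DNN surrogate for the inverse problem with a stand-in forward model.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n = 2000
flaws = rng.uniform([0.1, 0.1, 0.02], [0.9, 0.9, 0.10], size=(n, 3))     # (x, y, r)
# stand-in forward model: 32 "sensor" responses as a smooth function of the flaw parameters
sensors = np.stack([np.sin(3 * flaws @ rng.normal(size=3)) for _ in range(32)], axis=1)
sensors += 0.01 * rng.normal(size=sensors.shape)

X = torch.tensor(sensors, dtype=torch.float32)
Y = torch.tensor(flaws, dtype=torch.float32)
model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(2000):                       # train on the first 1600 simulated samples
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X[:1600]), Y[:1600])
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = model(X[1600:]).numpy()
rel_err = np.abs(pred - flaws[1600:]) / flaws[1600:]        # per-parameter relative error
mean, se = rel_err.mean(axis=0), rel_err.std(axis=0) / np.sqrt(len(rel_err))
for name, m, s in zip(("x", "y", "r"), mean, se):
    print(f"{name}: 95% CI of relative error = [{m - 1.96*s:.4f}, {m + 1.96*s:.4f}]")
```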
Big Texture Dataset Synthesized Based on Gradient and Convolution Kernels Using Pre-Trained Deep Neural Networks
15
Authors: Farhan A. Alenizi, Faten Khalid Karim, Alaa R. Al-Shamasneh, Mohammad Hossein Shakoor. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 8, pp. 1793-1829 (37 pages).
Deep neural networks provide accurate results for most applications; however, they need a big dataset to train properly, and providing such a dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images; therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates new classes of texture. It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures is increased through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and overly similar textures that are created. The proposed method is around 4 to 10 times faster than some well-known generative networks, and the quality of the generated textures surpasses that of these networks, exceeding some GANs and parametric models in certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset, called BigTex, was created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels, organized into 600 classes; it has been uploaded to Kaggle and Google Drive. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
Keywords: Big texture dataset; data generation; pre-trained deep neural network
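One simple gradient-based way to obtain a new texture class from a pre-trained network, in the spirit of (but not identical to) the method above, is gradient ascent on the input so as to maximize one convolution kernel's response; the layer and kernel indices below are arbitrary, and different (layer, kernel) pairs yield different texture classes.

```python
# Minimal sketch: gradient ascent on noise to maximize one VGG16 kernel's activation.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer_idx, kernel_idx = 17, 42            # illustrative choice of layer and kernel

img = torch.rand(1, 3, 150, 150, requires_grad=True)   # start from noise, 150x150 texture
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    x = img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == layer_idx:
            break
    loss = -x[0, kernel_idx].mean()       # ascend the mean activation of the chosen kernel
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)                  # keep a valid image

texture = (img.detach().squeeze().permute(1, 2, 0).numpy() * 255).astype("uint8")
# saving many (layer, kernel) combinations, then augmenting each, builds a large texture set
```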
Comment on "Integration of deep neural network modeling and LC-MS-based pseudo-targeted metabolomics as a practical strategy to differentiate ginseng species"
16
Authors: Li Ping. 《Journal of Pharmaceutical Analysis》, 2025, Issue 2, pp. 289-290 (2 pages).
Traditional Chinese medicine (TCM), especially plant-based TCM, represents a complex chemical system containing various primary and secondary metabolites. These botanical metabolites are structurally diversified and exhibit significant differences in acidity, alkalinity, molecular weight, polarity, content, etc., which poses great challenges in assessing the quality of TCM [1].
Keywords: chemical system; pseudo-targeted metabolomics; quality assessment; LC-MS; traditional Chinese medicine (TCM); primary and secondary metabolites; ginseng species differentiation; deep neural network
Deep Convolution Neural Networks for Image-Based Android Malware Classification
17
Authors: Amel Ksibi, Mohammed Zakariah, Latifah Almuqren, Ala Saleh Alluhaidan. 《Computers, Materials & Continua》, 2025, Issue 3, pp. 4093-4116 (24 pages).
The analysis of Android malware shows that this threat is constantly increasing and poses a real risk to mobile devices, since traditional approaches, such as signature-based detection, are no longer effective due to the continuously advancing level of sophistication. To resolve this problem, efficient and flexible malware detection tools are needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. These network traffic features are converted to image formats for deep learning, which is applied in a CNN framework, including the pre-trained VGG16 model. Our approach yielded high performance, with an accuracy of 0.92, an accuracy of 99.1%, a precision of 98.2%, a recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework raised the classification rate to 99.25%. These results make clear that CNNs are a very effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also shows the applicability of deep learning to mobile security, along with directions for the future advancement of real-time detection systems and deeper learning techniques to counter the increasing number of emerging threats.
Keywords: Android malware detection; deep convolutional neural network (DCNN); image processing; CIC-AndMal2017 dataset; exploratory data analysis; VGG16 model
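A minimal sketch of the traffic-to-image plus transfer-learning idea, with assumed feature counts, image size, and training details (not the paper's exact setup), might look as follows.

```python
# Minimal sketch: reshape per-flow traffic features into images, then fine-tune VGG16 for 5 classes.
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models

n_classes = 5                                          # Trojan, Adware, Ransomware, Spyware, Worm
feats = np.random.rand(64, 784).astype(np.float32)     # stand-in: 784 traffic features per flow
labels = np.random.randint(0, n_classes, size=64)

# 1) feature vector -> 28x28 "image", tiled to 3 channels and resized for VGG input
imgs = torch.tensor(feats.reshape(-1, 1, 28, 28).repeat(3, axis=1))
imgs = nn.functional.interpolate(imgs, size=(224, 224))
y = torch.tensor(labels)

# 2) pre-trained VGG16 with a new 5-way classification head; backbone frozen for brevity
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in vgg.features.parameters():
    p.requires_grad = False
vgg.classifier[6] = nn.Linear(4096, n_classes)

opt = torch.optim.Adam(vgg.classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
vgg.train()
for epoch in range(2):
    for i in range(0, len(imgs), 16):                  # simple mini-batching
        xb, yb = imgs[i:i + 16], y[i:i + 16]
        opt.zero_grad()
        loss = loss_fn(vgg(xb), yb)
        loss.backward()
        opt.step()

vgg.eval()
with torch.no_grad():
    acc = (vgg(imgs).argmax(1) == y).float().mean().item()
print("training accuracy on the stand-in data:", acc)
```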
A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson’s Disease Detection with Small-Scale and Imbalanced Datasets
18
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 1410-1432 (23 pages).
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. Using deep learning algorithms is believed to further enhance performance; nevertheless, this is challenging due to the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using a CNN and to extend the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of classification bias towards the majority class (healthy candidates in our consideration). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. The results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
Keywords: Convolutional neural network; data generation; deep support vector machine; feature extraction; generative artificial intelligence; imbalanced dataset; medical diagnosis; Parkinson's disease; small-scale dataset
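Since the paper's deep SVM with its customized kernel is not reproduced here, the sketch below substitutes scikit-learn's SVC with class weighting on top of CNN embeddings, just to illustrate the feature-extraction-plus-SVM split and the sensitivity/specificity reporting; the cohort sizes and feature dimensions are toy values.

```python
# Minimal sketch: CNN feature extractor + class-weighted SVM on an imbalanced toy cohort.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_pd, n_healthy = 60, 240                               # imbalanced toy cohort
X = np.vstack([rng.normal(0.5, 1, (n_pd, 64)),
               rng.normal(0.0, 1, (n_healthy, 64))]).astype(np.float32)
y = np.array([1] * n_pd + [0] * n_healthy)              # 1 = PD, 0 = healthy

class CNNExtractor(nn.Module):
    def __init__(self, emb=32):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
                                  nn.AdaptiveAvgPool1d(8))
        self.fc = nn.Linear(16 * 8, emb)
    def forward(self, x):                               # x: (batch, 64) voice-feature vectors
        h = self.conv(x.unsqueeze(1)).flatten(1)
        return self.fc(h)

extractor = CNNExtractor()
with torch.no_grad():                                   # untrained features for brevity;
    emb = extractor(torch.tensor(X)).numpy()            # in practice the CNN is trained end-to-end

X_tr, X_te, y_tr, y_te = train_test_split(emb, y, test_size=0.3, stratify=y, random_state=0)
svm = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)   # upweight the minority (PD) class
tn, fp, fn, tp = confusion_matrix(y_te, svm.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```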
Prediction of three-dimensional ocean temperature in the South China Sea based on time series gridded data and a dynamic spatiotemporal graph neural network
19
Authors: Feng Nan, Zhuolin Li, Jie Yu, Suixiang Shi, Xinrong Wu, Lingyu Xu. 《Acta Oceanologica Sinica》 (SCIE/CAS/CSCD), 2024, Issue 7, pp. 26-39 (14 pages).
Ocean temperature is an important physical variable in marine ecosystems, and ocean temperature prediction is an important research objective in ocean-related fields. Currently, one of the commonly used methods for ocean temperature prediction is data-driven modeling, but research on this method is mostly limited to the sea surface, with few studies on the prediction of internal ocean temperature. Existing graph neural network-based methods usually use predefined graphs or learned static graphs, which cannot capture the dynamic associations among data. In this study, we propose a novel dynamic spatiotemporal graph neural network (DSTGN) to predict three-dimensional ocean temperature (3D-OT), which combines static graph learning and dynamic graph learning to automatically mine two kinds of unknown dependencies between sequences based on the original 3D-OT data without prior knowledge. Temporal and spatial dependencies in the time series are then captured using temporal and graph convolutions. We also integrated dynamic graph learning, static graph learning, graph convolution, and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series gridded data. In this study, we conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis, with data covering the vertical variation of temperature from the sea surface to 1000 m below the sea surface. We compared five mainstream models that are commonly used for ocean temperature prediction, and the results show that the proposed method achieved the best prediction results at all prediction scales.
Keywords: dynamic associations; three-dimensional ocean temperature prediction; graph neural network; time series gridded data
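A minimal sketch of combining a learned static adjacency with an input-dependent dynamic adjacency (an illustration only, not the DSTGN implementation) is shown below; node count, feature size, and the fusion weight are assumptions.

```python
# Minimal sketch: fuse a learned static adjacency with a dynamic, input-dependent adjacency.
import torch
import torch.nn as nn

n_nodes, feat = 50, 16                                    # grid points and feature size (stand-ins)

class DualGraphLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb1 = nn.Parameter(torch.randn(n_nodes, 8))  # static graph node embeddings
        self.emb2 = nn.Parameter(torch.randn(n_nodes, 8))
        self.proj = nn.Linear(feat, 8)                     # projection used for the dynamic graph
        self.lin = nn.Linear(feat, feat)
    def forward(self, x):                                  # x: (batch, n_nodes, feat)
        a_static = torch.softmax(torch.relu(self.emb1 @ self.emb2.T), dim=1)
        h = self.proj(x)                                   # (batch, n_nodes, 8)
        a_dynamic = torch.softmax(torch.relu(h @ h.transpose(1, 2)), dim=2)
        a = 0.5 * a_static + 0.5 * a_dynamic               # fuse the two kinds of dependencies
        return torch.relu(a @ self.lin(x))                 # one graph-propagation step

x = torch.randn(4, n_nodes, feat)
print(DualGraphLayer()(x).shape)                           # torch.Size([4, 50, 16])
```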
Hybrid model for BOF oxygen blowing time prediction based on oxygen balance mechanism and deep neural network (Cited: 11)
20
Authors: Xin Shao, Qing Liu, Zicheng Xin, Jiangshan Zhang, Tao Zhou, Shaoshuai Li. 《International Journal of Minerals, Metallurgy and Materials》 (SCIE/EI/CSCD), 2024, Issue 1, pp. 106-117 (12 pages).
The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process, which directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in the converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM model and the DNN model. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including the extreme learning machine, the back propagation neural network, and the DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of the oxygen consumption volume within an error of ±300 m^(3) is 96.67%; the determination coefficient (R^(2)) and root mean square error (RMSE) are 0.6984 and 150.03 m^(3), respectively. The hit ratio of the oxygen blowing time prediction within an error of ±0.6 min is 89.50%; R^(2) and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
Keywords: basic oxygen furnace; oxygen consumption; oxygen blowing time; oxygen balance mechanism; deep neural network; hybrid model
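A minimal sketch of the three-step hybrid idea, with a toy oxygen-balance term, assumed variable names and coefficients, and the 32-16-8 hidden structure mentioned in the abstract, might look as follows; it is an illustration, not the plant model.

```python
# Minimal sketch: OBM estimate + DNN estimate of oxygen volume, fused, then time = volume / supply.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n = 800
hot_metal = rng.uniform(90, 110, n)        # hot metal mass, t
carbon = rng.uniform(3.8, 4.6, n)          # carbon content, %
silicon = rng.uniform(0.3, 0.7, n)         # silicon content, %
supply = rng.uniform(550, 650, n)          # oxygen supply intensity, m^3/min

def obm_volume(mass, c, si):
    """Toy oxygen-balance term for burning C and Si (illustrative coefficients only)."""
    return mass * (9.3 * c + 8.0 * si)

true_vol = obm_volume(hot_metal, carbon, silicon) + rng.normal(0, 80, n)   # "measured" volume

# DNN with the 32-16-8 hidden structure, trained on standardized inputs/targets
X = np.stack([hot_metal, carbon, silicon], axis=1)
Xt = torch.tensor((X - X.mean(0)) / X.std(0), dtype=torch.float32)
yt = torch.tensor((true_vol - true_vol.mean()) / true_vol.std(), dtype=torch.float32).view(-1, 1)
dnn = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU(),
                    nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(dnn.parameters(), lr=1e-3)
for epoch in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(dnn(Xt), yt)
    loss.backward()
    opt.step()

v_obm = obm_volume(hot_metal, carbon, silicon)
v_dnn = dnn(Xt).detach().numpy().ravel() * true_vol.std() + true_vol.mean()
v_hybrid = 0.5 * v_obm + 0.5 * v_dnn               # simple integration of the two estimates
blow_time = v_hybrid / supply                       # predicted blowing time, min
hit = np.mean(np.abs(v_hybrid - true_vol) <= 300)   # hit ratio within +/- 300 m^3
print(f"hit ratio: {hit:.2%}, mean predicted blowing time: {blow_time.mean():.2f} min")
```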