Journal Articles
3,449 articles found
1. Deep Learning Mixed Hyper-Parameter Optimization Based on Improved Cuckoo Search Algorithm
Authors: TONG Yu, CHEN Rong, HU Biling. Wuhan University Journal of Natural Sciences, 2025, No. 2, pp. 195-204.
Abstract: Deep learning is an effective data mining method and has been used in many fields to solve practical problems. However, deep learning algorithms often contain hyper-parameters which may be continuous, integer, or mixed; these are usually set from experience yet strongly affect the effectiveness of activity recognition. In order to adapt to different hyper-parameter optimization problems, an improved Cuckoo Search (CS) algorithm is proposed to optimize the mixed hyper-parameters in deep learning algorithms. The algorithm optimizes the hyper-parameters of the deep learning model robustly and intelligently selects the combination of integer-type and continuous hyper-parameters that makes the model optimal. The mixed hyper-parameters in Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) and CNN-LSTM models are then optimized with this methodology on smart home activity recognition datasets. Results show that the methodology improves the performance of the deep learning model and that, experienced or not, a better deep learning model can be obtained with this method.
Keywords: improved Cuckoo Search algorithm; mixed hyper-parameter; optimization; deep learning
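A minimal sketch of the kind of mixed-variable cuckoo-search loop this abstract describes, with a toy objective standing in for actual model training; the search space, Lévy-flight scaling, bounds, and rounding of integer hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

# Hypothetical mixed search space: (learning rate: float, number of filters: int, dropout: float).
LOWER = np.array([1e-4, 8.0, 0.0])
UPPER = np.array([1e-1, 128.0, 0.6])
INT_DIMS = [1]                                   # dimensions treated as integers

def decode(x):
    """Clip to bounds and round the integer-valued dimensions."""
    x = np.clip(x, LOWER, UPPER)
    x[INT_DIMS] = np.round(x[INT_DIMS])
    return x

def objective(x):
    """Stand-in for the validation loss of a trained model (toy quadratic bowl)."""
    target = np.array([1e-2, 64.0, 0.3])
    return float(np.sum(((x - target) / (UPPER - LOWER)) ** 2))

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-distributed flight steps."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(n_nests=15, n_iter=200, pa=0.25):
    nests = np.array([decode(v) for v in rng.uniform(LOWER, UPPER, (n_nests, LOWER.size))])
    fitness = np.array([objective(v) for v in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(n_iter):
        # New candidate solutions via Levy flights around the current best nest.
        for i in range(n_nests):
            cand = decode(nests[i] + 0.01 * levy_step(LOWER.size) * (nests[i] - best))
            j = rng.integers(n_nests)
            if objective(cand) < fitness[j]:     # replace a random nest if the new egg is better
                nests[j], fitness[j] = cand, objective(cand)
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        worst = fitness.argsort()[-int(pa * n_nests):]
        nests[worst] = [decode(rng.uniform(LOWER, UPPER)) for _ in worst]
        fitness[worst] = [objective(v) for v in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, float(fitness.min())

if __name__ == "__main__":
    best, score = cuckoo_search()
    print("best mixed hyper-parameters:", best, "objective:", round(score, 4))
```

In a real run, `objective` would train and validate the CNN/LSTM with the candidate hyper-parameters, which is exactly where the cost of such a search comes from.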
2. Rapid pathologic grading-based diagnosis of esophageal squamous cell carcinoma via Raman spectroscopy and a deep learning algorithm
Authors: Xin-Ying Yu, Jian Chen, Lian-Yu Li, Feng-En Chen, Qiang He. World Journal of Gastroenterology, 2025, No. 14, pp. 32-46.
Abstract: BACKGROUND: Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer, and many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM: To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS: Different grades of esophageal lesions were collected, and a total of 360 groups of Raman spectrum data were acquired. A 1D-transformer network model was proposed to classify the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS: A comparison of Raman spectral data across pathological grades and a visual analysis revealed that the Raman peaks with significant differences were concentrated mainly at 1095 cm⁻¹ (DNA, symmetric PO stretching vibration), 1132 cm⁻¹ (cytochrome c), 1171 cm⁻¹ (acetoacetate), 1216 cm⁻¹ (amide III), and 1315 cm⁻¹ (glycerol). Among the models compared, the 1D-transformer network performed best, achieving 93.30% accuracy, 96.65% specificity, 93.30% sensitivity, and a 93.17% F1 score. CONCLUSION: Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia, and combining Raman spectroscopy with deep learning methods can significantly improve classification accuracy.
Keywords: Raman spectroscopy; esophageal neoplasia; early diagnosis; deep learning algorithm; rapid pathologic grading
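A minimal PyTorch sketch of a 1D-transformer-style spectrum classifier in the spirit of the METHODS paragraph, assuming 1000-point spectra split into patches and four output grades; the patching scheme, layer sizes, and class count are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SpectrumTransformer(nn.Module):
    """Toy 1D-transformer classifier for Raman spectra (illustrative only)."""
    def __init__(self, spectrum_len=1000, patch=20, d_model=64, n_heads=4,
                 n_layers=2, n_classes=4):
        super().__init__()
        assert spectrum_len % patch == 0
        self.n_tokens = spectrum_len // patch
        self.embed = nn.Linear(patch, d_model)            # each patch of the spectrum -> token
        self.pos = nn.Parameter(torch.zeros(1, self.n_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, spectrum_len)
        b = x.shape[0]
        tokens = x.view(b, self.n_tokens, -1)             # split the spectrum into patches
        h = self.encoder(self.embed(tokens) + self.pos)   # self-attention over the patches
        return self.head(h.mean(dim=1))                   # mean-pool tokens -> grade logits

if __name__ == "__main__":
    model = SpectrumTransformer()
    spectra = torch.randn(8, 1000)         # 8 fake spectra with 1000 wavenumber bins each
    print(model(spectra).shape)            # torch.Size([8, 4])
```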
3. Multi-Robot Task Allocation Using Multimodal Multi-Objective Evolutionary Algorithm Based on Deep Reinforcement Learning (cited by 4)
Authors: 苗镇华, 黄文焘, 张依恋, 范勤勤. Journal of Shanghai Jiaotong University (Science) (EI), 2024, No. 3, pp. 377-387.
Abstract: The overall performance of multi-robot collaborative systems is significantly affected by multi-robot task allocation. To improve the effectiveness, robustness, and safety of multi-robot collaborative systems, a multimodal multi-objective evolutionary algorithm based on deep reinforcement learning is proposed in this paper. The improved multimodal multi-objective evolutionary algorithm is used to solve multi-robot task allocation problems. Moreover, a deep reinforcement learning strategy is used in the last generation to provide a high-quality path for each assigned robot in an end-to-end manner. Comparisons with three popular multimodal multi-objective evolutionary algorithms on three different scenarios of multi-robot task allocation problems are carried out to verify the performance of the proposed algorithm. The experimental results show that the proposed algorithm can generate sufficient equivalent schemes to improve the availability and robustness of multi-robot collaborative systems in uncertain environments, and can also produce the best scheme to improve the overall task execution efficiency of multi-robot collaborative systems.
Keywords: multi-robot task allocation; multi-robot cooperation; path planning; multimodal multi-objective evolutionary algorithm; deep reinforcement learning
4. Optimizing Deep Learning for Computer-Aided Diagnosis of Lung Diseases: An Automated Method Combining Evolutionary Algorithm, Transfer Learning, and Model Compression
Authors: Hassen Louati, Ali Louati, Elham Kariri, Slim Bechikh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 2519-2547.
Abstract: Recent developments in computer vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
Keywords: computer-aided diagnosis; deep learning; evolutionary algorithms; deep compression; transfer learning
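As a hedged illustration of the transfer-learning step mentioned in this abstract (not the authors' evolutionary design and compression pipeline), the sketch below fine-tunes a generic ImageNet-pretrained ResNet-18 from torchvision for a hypothetical three-class chest X-ray task; the class count, learning rate, and layer-freezing policy are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # hypothetical classes: normal / pneumonia / COVID-19

# Start from a generic ImageNet-pretrained backbone (stand-in for a chest X-ray pretrained CNN).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the feature extractor so only the new head is trained at first.
for p in model.parameters():
    p.requires_grad = False

# Replace the classification head for the target task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step; images: (B, 3, 224, 224), labels: (B,)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    fake_images = torch.randn(4, 3, 224, 224)
    fake_labels = torch.randint(0, NUM_CLASSES, (4,))
    print("loss:", train_step(fake_images, fake_labels))
```

Unfreezing deeper layers with a smaller learning rate is a common second stage once the new head has converged.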
5. DeepSurNet-NSGA II: Deep Surrogate Model-Assisted Multi-Objective Evolutionary Algorithm for Enhancing Leg Linkage in Walking Robots
Authors: Sayat Ibrayev, Batyrkhan Omarov, Arman Ibrayeva, Zeinel Momynkulov. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 229-249.
Abstract: This paper presents a comprehensive investigation into the effectiveness of the DeepSurNet-NSGA II (Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II) for solving complex multi-objective optimization problems, with a particular focus on robotic leg-linkage design. The study introduces an approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II, aiming to enhance the efficiency and precision of the optimization process. Through a series of empirical experiments and algorithmic analyses, the paper demonstrates a high degree of correlation between solutions generated by DeepSurNet-NSGA II and those obtained from direct experimental methods, underscoring the algorithm's capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands. The methodology covers the algorithm's configuration, the experimental setup, and the criteria for performance evaluation, ensuring the reproducibility of results and facilitating future work in the field. The findings confirm the practical applicability and theoretical soundness of DeepSurNet-NSGA II for multi-objective optimization and highlight its potential as a tool in engineering and design optimization. By bridging the gap between complex optimization challenges and achievable solutions, this research contributes valuable insights to the optimization domain and offers a promising direction for future inquiries and technological innovations.
Keywords: multi-objective optimization; genetic algorithm; surrogate model; deep learning; walking robots
6. Surface wave inversion with unknown number of soil layers based on a hybrid learning procedure of deep learning and genetic algorithm
Authors: Zan Zhou, Thomas Man-Hoi Lok, Wan-Huan Zhou. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2024, No. 2, pp. 345-358.
Abstract: Surface wave inversion is a key step in the application of surface waves to soil velocity profiling. Currently, a common practice is that the number of soil layers is assumed to be known before heuristic search algorithms are used to compute the shear-wave velocity profile, or the number of soil layers is treated as an optimization variable. However, an improper selection of the number of layers may lead to an incorrect shear-wave velocity profile. In this study, a hybrid learning procedure combining deep learning and a genetic algorithm is proposed to perform surface wave inversion without the need to assume the number of soil layers. First, a deep neural network is adapted to learn from a large number of synthetic dispersion curves to infer the layer number. Then, the shear-wave velocity profile is determined by a genetic algorithm with the layer number known. Applying this procedure to both simulated and real-world cases indicates that the proposed method is reliable and efficient for surface wave inversion.
Keywords: surface wave inversion analysis; shear-wave velocity profile; deep neural network; genetic algorithm
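A hedged sketch of the second stage described in this abstract (a genetic algorithm fitting layer velocities once the layer count is known), using a toy placeholder in place of a real surface-wave forward solver; the forward model, bounds, and GA settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_dispersion(vs, freqs):
    """Placeholder forward model mapping layer velocities to a 'dispersion curve'.
    A real inversion would call a surface-wave forward solver here."""
    weights = np.exp(-np.outer(freqs / freqs.max(), np.arange(len(vs))))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ vs                       # frequency-dependent mix of layer velocities

FREQS = np.linspace(5, 50, 30)
N_LAYERS = 3                                  # assumed output of the layer-number DNN
TRUE_VS = np.array([180.0, 320.0, 550.0])     # synthetic "true" shear-wave velocities (m/s)
OBSERVED = forward_dispersion(TRUE_VS, FREQS)

def misfit(vs):
    return float(np.mean((forward_dispersion(vs, FREQS) - OBSERVED) ** 2))

def genetic_inversion(pop_size=60, n_gen=200, vs_min=100.0, vs_max=800.0,
                      mut_rate=0.2, mut_scale=30.0):
    pop = rng.uniform(vs_min, vs_max, size=(pop_size, N_LAYERS))
    for _ in range(n_gen):
        fit = np.array([misfit(ind) for ind in pop])
        parents = pop[fit.argsort()[: pop_size // 2]]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random(N_LAYERS)
            child = alpha * a + (1 - alpha) * b            # blend crossover
            mask = rng.random(N_LAYERS) < mut_rate
            child = child + mask * rng.normal(0, mut_scale, N_LAYERS)
            children.append(np.clip(child, vs_min, vs_max))
        pop = np.vstack([parents, children])
    fit = np.array([misfit(ind) for ind in pop])
    return pop[fit.argmin()]

if __name__ == "__main__":
    print("recovered Vs profile (m/s):", np.round(genetic_inversion(), 1))
```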
7. Marine Predators Algorithm with Deep Learning-Based Leukemia Cancer Classification on Medical Images
Authors: Sonali Das, Saroja Kumar Rout, Sujit Kumar Panda, Pradyumna Kumar Mohapatra, Abdulaziz S. Almazyad, Muhammed Basheer Jasser, Guojiang Xiong, Ali Wagdy Mohamed. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 893-916.
Abstract: Leukemia is a form of cancer of the blood or bone marrow in which white blood cells (WBCs) proliferate. It primarily affects children and rarely affects adults, and treatment depends on the type of leukemia and how far the cancer has spread throughout the body. Identifying leukemia at an early stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions that avoid the difficulties of invasive procedures. Computer vision (CV) and image-processing techniques are easy to generalize and help eliminate human error. Many researchers have applied computer-aided diagnostic methods and machine learning (ML) to laboratory image analysis, aiming to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm for medical images. The proposed MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images and uses Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, a denoising autoencoder (DAE) methodology is executed to accurately detect and classify leukemia. The hyperparameter tuning process using MPA helps enhance classification performance. Simulation results are compared with other recent approaches on various measurements, and the MPADL-LCC algorithm exhibits the best results.
Keywords: leukemia cancer; medical imaging; image classification; deep learning; marine predators algorithm
8. A Deep-Learning and Transfer-Learning Hybrid Aerosol Retrieval Algorithm for FY4-AGRI: Development and Verification over Asia
Authors: Disong Fu, Hongrong Shi, Christian A. Gueymard, Dazhi Yang, Yu Zheng, Huizheng Che, Xuehua Fan, Xinlei Han, Lin Gao, Jianchun Bian, Minzheng Duan, Xiangao Xia. Engineering (SCIE, EI, CAS, CSCD), 2024, No. 7, pp. 164-174.
Abstract: The Advanced Geosynchronous Radiation Imager (AGRI) is a mission-critical instrument for the Fengyun series of satellites. AGRI acquires full-disk images every 15 min and views East Asia every 5 min through 14 spectral bands, enabling the detection of highly variable aerosol optical depth (AOD). Quantitative retrieval of AOD has hitherto been challenging, especially over land. In this study, an AOD retrieval algorithm is proposed that combines deep learning and transfer learning. The algorithm uses core concepts from both the Dark Target (DT) and Deep Blue (DB) algorithms to select features for the machine-learning (ML) algorithm, allowing AOD retrieval at 550 nm over both dark and bright surfaces. The algorithm consists of two steps: (1) a baseline deep neural network (DNN) with skip connections is developed using 10 min Advanced Himawari Imager (AHI) AODs as the target variable, and (2) sun-photometer AODs from 89 ground-based stations are used to fine-tune the DNN parameters. Out-of-station validation shows that the retrieved AOD attains high accuracy, characterized by a coefficient of determination (R²) of 0.70, a mean bias error (MBE) of 0.03, and a percentage of data within the expected error (EE) of 70.7%. A sensitivity study reveals that the top-of-atmosphere reflectance at 650 and 470 nm, as well as the surface reflectance at 650 nm, are the two largest sources of uncertainty impacting the retrieval. In a case study of monitoring an extreme aerosol event, the AGRI AOD is found to capture the detailed temporal evolution of the event. This work demonstrates the superiority of the transfer-learning technique in satellite AOD retrievals and the applicability of the retrieved AGRI AOD in monitoring extreme pollution events.
Keywords: aerosol optical depth; retrieval algorithm; deep learning; transfer learning; Advanced Geosynchronous Radiation Imager
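A hedged PyTorch sketch of the two-step scheme this abstract outlines: pretrain a skip-connected regressor on one AOD source, then fine-tune on a smaller ground-truth set. The feature dimension, network width, and training settings are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Fully connected block with a residual (skip) connection."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))

class AODNet(nn.Module):
    def __init__(self, n_features=16, width=64, n_blocks=3):
        super().__init__()
        self.inp = nn.Linear(n_features, width)
        self.blocks = nn.Sequential(*[SkipBlock(width) for _ in range(n_blocks)])
        self.out = nn.Linear(width, 1)          # AOD at 550 nm
    def forward(self, x):
        return self.out(self.blocks(torch.relu(self.inp(x)))).squeeze(-1)

def fit(model, x, y, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

if __name__ == "__main__":
    model = AODNet()
    # Step 1: pretrain on a large proxy target (stand-in for AHI AOD matchups).
    x_big, y_big = torch.randn(5000, 16), torch.rand(5000)
    fit(model, x_big, y_big, lr=1e-3, epochs=50)
    # Step 2: fine-tune on a small, higher-quality set (stand-in for sun-photometer AOD).
    x_small, y_small = torch.randn(300, 16), torch.rand(300)
    print("fine-tune loss:", fit(model, x_small, y_small, lr=1e-4, epochs=50))
```

The smaller learning rate in step 2 is the usual way to keep the fine-tuning stage from erasing what was learned in pretraining.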
9. Gradient Optimizer Algorithm with Hybrid Deep Learning Based Failure Detection and Classification in the Industrial Environment
Authors: Mohamed Zarouan, Ibrahim M. Mehedi, Shaikh Abdul Latif, Md. Masud Rana. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1341-1364.
Abstract: Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring seamless operation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0; various modernized industrial processes have been equipped with numerous sensors that collect process-based data to find faults arising or prevailing in processes and to monitor their status. Fault diagnosis of rotating machines plays a main role in the engineering field and industrial production. Because existing fault-diagnosis approaches depend greatly on professional experience and human knowledge, intelligent fault diagnosis based on deep learning (DL) has attracted researchers' interest, as DL achieves the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) technique for the industrial environment. The presented GOAHDL-FDC technique initially applies the continuous wavelet transform (CWT) to pre-process the vibration signals of the rotating machinery. Next, the residual network (ResNet18) model is exploited to extract features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning is performed to adjust the parameter values of the HDL model accurately. The experimental analysis of the GOAHDL-FDC algorithm uses a series of simulations, and the outcomes highlight the better results of the GOAHDL-FDC technique under different aspects.
Keywords: fault detection; Industry 4.0; gradient optimizer algorithm; deep learning; rotating machineries; artificial intelligence
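A small sketch of the continuous-wavelet-transform preprocessing step described in this abstract, using the PyWavelets library on a synthetic vibration signal; the Morlet wavelet, scale range, sampling rate, and fault frequencies are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt

fs = 2000                                  # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)

# Synthetic vibration signal: a 50 Hz shaft component plus a late burst imitating a bearing fault.
signal = np.sin(2 * np.pi * 50 * t)
signal += 0.5 * np.sin(2 * np.pi * 300 * t) * (t > 0.6)
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Continuous wavelet transform -> 2-D time-scale map, the usual input to a CNN such as ResNet18.
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Normalize the scalogram to [0, 1] so it can be fed to an image model.
scalogram = np.abs(coeffs)
scalogram = (scalogram - scalogram.min()) / (scalogram.max() - scalogram.min())
print(scalogram.shape)                     # (127, 2000): scales x time samples
```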
10. Internet of Things Enabled DDoS Attack Detection Using Pigeon Inspired Optimization Algorithm with Deep Learning Approach
Authors: Turki Ali Alghamdi, Saud S. Alotaibi. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4047-4064.
Abstract: The Internet of Things (IoT) provides better solutions in various fields, namely healthcare, smart transportation, the home, etc. Recognizing Denial of Service (DoS) outbreaks in IoT platforms is significant for certifying the accessibility and integrity of IoT systems. Deep learning (DL) models excel at detecting complex, non-linear relationships, allowing them to effectively discern slight deviations from normal IoT activity that may indicate a DoS outbreak. The uninterrupted observation and real-time detection of DL contribute to accurate and rapid detection, permitting proactive mitigation actions to be executed, hence securing the IoT network's safety and functionality. Accordingly, this study presents a pigeon-inspired optimization with DL-based attack detection and classification (PIODL-ADC) approach in an IoT environment. The PIODL-ADC approach implements a hyperparameter-tuned DL method for Distributed Denial-of-Service (DDoS) attack detection on an IoT platform. Initially, the PIODL-ADC model utilizes Z-score normalization to scale input data into a uniform format. To handle the convolutional and adaptive behaviors of IoT, the PIODL-ADC model employs the pigeon-inspired optimization (PIO) method for feature selection, identifying the relevant features and considerably enhancing recognition accuracy. The Elman Recurrent Neural Network (ERNN) model is then utilized to recognize and classify DDoS attacks. Moreover, reptile search algorithm (RSA)-based hyperparameter tuning is employed to improve the precision and robustness of the ERNN method. A series of experimental validations is made to ensure the performance of the PIODL-ADC method; the results show that the PIODL-ADC method performs better than existing models, with a maximum accuracy of 99.81%.
Keywords: Internet of Things; denial of service; deep learning; reptile search algorithm; feature selection
11. Inversion of Seabed Geotechnical Properties in the Arctic Chukchi Deep Sea Basin Based on Time Domain Adaptive Search Matching Algorithm
Authors: AN Long, XU Chong, XING Junhui, GONG Wei, JIANG Xiaodian, XU Haowei, LIU Chuang, YANG Boxue. Journal of Ocean University of China (SCIE, CAS, CSCD), 2024, No. 4, pp. 933-942.
Abstract: The chirp sub-bottom profiler, owing to its high resolution, easy accessibility and cost-effectiveness, has been widely used in acoustic detection. In this paper, the acoustic impedance and grain size compositions were obtained from chirp sub-bottom profiler data collected in the Chukchi Plateau area during the 11th Arctic Expedition of China. A time-domain adaptive search matching algorithm was used and validated on an established theoretical model; the misfit between the inversion result and the theoretical model is less than 0.067%. The grain size was calculated from the empirical relationship between the acoustic impedance and the grain size of the sediment. The average acoustic impedance of the sub-seafloor strata is 2.5026 × 10⁶ kg (s·m²)⁻¹ and the average grain size (θ value) of the seafloor surface sediment is 7.1498, indicating the predominance of very fine silt in the study area. Comparison of the inversion results with laboratory measurements of nearby borehole samples shows that they are in general agreement.
Keywords: time domain adaptive search matching algorithm; acoustic impedance inversion; sedimentary grain size; Arctic Ocean; Chukchi Deep Sea Basin
12. Extended Deep Learning Algorithm for Improved Brain Tumor Diagnosis System
Authors: M. Adimoolam, K. Maithili, N. M. Balamurugan, R. Rajkumar, S. Leelavathy, Raju Kannadasan, Mohd Anul Haq, Ilyas Khan, ElSayed M. Tag El Din, Arfat Ahmad Khan. Intelligent Automation & Soft Computing, 2024, No. 1, pp. 33-55.
Abstract: At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms can predict brain tumors to some degree, several measures still need improvement, particularly accuracy, sensitivity, false positives and false negatives. Therefore, this work proposes an Extended Deep Learning Algorithm (EDLA) and evaluates performance parameters such as accuracy, sensitivity, and false positive and false negative rates. These iterated measures were analyzed by comparing the EDLA method with a Convolutional Neural Network (CNN) using the SPSS tool, and the respective graphical illustrations were shown. Over ten iterations, the mean performance measures of the proposed EDLA algorithm were accuracy 97.665%, sensitivity 97.939%, false positive 3.012%, and false negative 3.182%, whereas the CNN achieved mean accuracy 94.287%, mean sensitivity 95.612%, mean false positive 5.328%, and mean false negative 4.756%. These results show that the proposed EDLA method outperforms existing algorithms, including CNN, and its novelty lies in its performance and its particular activation function. The method can be applied to brain tumor detection in a precise and accurate manner and, after modification, to other medical diagnoses. If the quantity of dataset records is enormous, the method's computational resources have to be scaled accordingly.
Keywords: brain tumor; extended deep learning algorithm; convolution neural network; tumor detection; deep learning
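A brief sketch of how the performance measures quoted in this abstract (accuracy, sensitivity, false-positive and false-negative rates) are typically computed from a binary confusion matrix; the example labels below are fabricated placeholders, not the paper's data.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), false-positive rate, and false-negative rate."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

if __name__ == "__main__":
    # Placeholder predictions for 10 scans (1 = tumor, 0 = no tumor).
    y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
    y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]
    for name, value in binary_metrics(y_true, y_pred).items():
        print(f"{name}: {value:.3f}")
```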
13. A low-complexity AMP detection algorithm with deep neural network for massive MIMO systems
Authors: Zufan Zhang, Yang Li, Xiaoqin Yan, Zonghua Ouyang. Digital Communications and Networks (CSCD), 2024, No. 5, pp. 1375-1386.
Abstract: Signal detection plays an essential role in massive Multiple-Input Multiple-Output (MIMO) systems. However, existing detection methods have not yet made a good tradeoff between Bit Error Rate (BER) and computational complexity, resulting in slow convergence or high complexity. To address this issue, a low-complexity Approximate Message Passing (AMP) detection algorithm with a Deep Neural Network (denoted AMP-DNN) is investigated in this paper. Firstly, an efficient AMP detection algorithm is derived by scalarizing a simplification of the Belief Propagation (BP) algorithm. Secondly, by unfolding the obtained AMP detection algorithm, a DNN is specifically designed for optimal performance gain. For the proposed AMP-DNN, the number of trainable parameters is related only to the number of layers, regardless of the modulation scheme, antenna number and matrix calculation, thus facilitating fast and stable training of the network. In addition, the AMP-DNN can detect different channels under the same distribution with only one training run. The superior performance of the AMP-DNN is also verified by theoretical analysis and experiments. The proposed algorithm reduces BER without signal prior information, especially in spatially correlated channels, and has lower computational complexity than existing state-of-the-art methods.
Keywords: massive MIMO system; approximate message passing (AMP) detection algorithm; deep neural network (DNN); bit error rate (BER); low complexity
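To illustrate the deep-unfolding idea this abstract relies on — turn each iteration of an iterative detector into a network layer with only a handful of trainable scalars — here is a generic PyTorch sketch. It unfolds a plain gradient-style detector rather than the paper's AMP-DNN, and the antenna counts, noise level, and training setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    """Generic deep-unfolded iterative MIMO detector (an illustration of unfolding,
    not the AMP-DNN of the paper): each 'layer' is one iteration of
    x <- x + step * H^T (y - H x), with a learnable scalar step size per layer."""
    def __init__(self, n_layers=8):
        super().__init__()
        self.steps = nn.Parameter(0.1 * torch.ones(n_layers))   # few trainable parameters

    def forward(self, y, H):
        # y: (batch, n_rx), H: (batch, n_rx, n_tx)
        x = torch.zeros(y.shape[0], H.shape[2])
        for step in self.steps:
            residual = y - torch.bmm(H, x.unsqueeze(-1)).squeeze(-1)
            x = x + step * torch.bmm(H.transpose(1, 2), residual.unsqueeze(-1)).squeeze(-1)
        return x

if __name__ == "__main__":
    torch.manual_seed(0)
    n_rx, n_tx, batch = 16, 8, 32
    H = torch.randn(batch, n_rx, n_tx) / n_rx ** 0.5
    x_true = torch.randint(0, 2, (batch, n_tx)).float() * 2 - 1   # BPSK symbols
    y = torch.bmm(H, x_true.unsqueeze(-1)).squeeze(-1) + 0.05 * torch.randn(batch, n_rx)

    model = UnfoldedDetector()
    loss_fn = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):                       # train the per-layer step sizes end to end
        opt.zero_grad()
        loss = loss_fn(model(y, H), x_true)
        loss.backward()
        opt.step()
    ser = (torch.sign(model(y, H)) != x_true).float().mean()
    print("symbol error rate on the training batch:", ser.item())
```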
14. A Novel Optimization Algorithm for Calibrating Pollutant Degradation Coefficient in Deep Tunnel Based on Storm Water Management Model
Authors: Kaiyuan Zheng, Ying Zhang. Journal of Geoscience and Environment Protection, 2024, No. 12, pp. 207-217.
Abstract: Aiming at working out a more accurate pollutant degradation coefficient for a deep tunnel system, this work puts forward a novel optimized algorithm to calibrate such a coefficient and compares it with the ordinary fitting method. The algorithm incorporates an outlier filtration mechanism and a gradient descent mechanism to improve its performance, and the calibration result is substituted into the Storm Water Management Model (SWMM) source code to validate its effectiveness against simulated and observed data. COD, NH3-N, TN and TP are chosen as pollutant indicators of the observed data, and RMSE, MSE and ME are selected as efficiency indicators. The results show that the outlier filtration mechanism performs better than the fitting method, and the gradient descent mechanism reduces the number of iterations by nearly 92.42% and improves computation efficiency by roughly 55 times compared with the ordinary iterative method; the algorithm is expected to perform even better with a substantial amount of observed data.
Keywords: SWMM; pollutant degradation coefficient; deep tunnel system; optimized algorithm
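A hedged sketch of the two mechanisms named in this abstract — filtering outliers from the observations and calibrating a first-order degradation coefficient by gradient descent on a squared-error objective. The exponential decay model, the z-score filter, and all numbers are illustrative assumptions, not the SWMM formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations of a pollutant decaying along travel time in a tunnel:
# C(t) = C0 * exp(-k * t), with noise and a couple of injected outliers.
TRUE_K, C0 = 0.35, 20.0                      # 1/h and mg/L, assumed values
t = np.linspace(0, 10, 40)                   # hours
obs = C0 * np.exp(-TRUE_K * t) + rng.normal(0, 0.3, t.size)
obs[[5, 25]] += 8.0                          # injected outliers

def filter_outliers(t, obs, k_guess, z_max=2.5):
    """Drop points whose residual against a rough fit exceeds z_max standard deviations."""
    resid = obs - C0 * np.exp(-k_guess * t)
    z = (resid - resid.mean()) / resid.std()
    keep = np.abs(z) < z_max
    return t[keep], obs[keep]

def calibrate_k(t, obs, k0=0.1, lr=2e-4, n_iter=3000):
    """Gradient descent on the mean squared error between simulated and observed values."""
    k = k0
    for _ in range(n_iter):
        sim = C0 * np.exp(-k * t)
        grad = np.mean(2 * (sim - obs) * (-t * sim))   # d(MSE)/dk with d(sim)/dk = -t * sim
        k -= lr * grad
    rmse = np.sqrt(np.mean((C0 * np.exp(-k * t) - obs) ** 2))
    return k, rmse

if __name__ == "__main__":
    t_f, obs_f = filter_outliers(t, obs, k_guess=0.3)
    k_hat, rmse = calibrate_k(t_f, obs_f)
    print(f"calibrated k = {k_hat:.3f} 1/h (true {TRUE_K}), RMSE = {rmse:.3f} mg/L")
```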
15. A Semantic Segmentation Method for Urban Underground Pipeline Defects Based on FCC-Deeplabv3+
Authors: 田淙文, 李波, 蓝雯飞, 潘禹欣, 姚为. 中南民族大学学报(自然科学版) (Journal of South-Central Minzu University, Natural Science Edition) (CAS), 2025, No. 1, pp. 107-117.
Abstract: Defects in urban underground pipeline images involve many defect types, complex backgrounds, heavy noise, and large variations in defect scale, so current segmentation algorithms for such defects are not accurate enough. This study proposes FCC-Deeplabv3+, an improved segmentation model based on Deeplabv3+, and applies it to urban underground pipeline defect segmentation for the first time. A criss-cross attention mechanism is incorporated so that the model obtains richer contextual information during prediction; an improved decoder upsampling strategy introduces multi-scale information and reduces the loss of intermediate-layer information; and an augmentation-based contrastive learning strategy supervises the model, improving its segmentation ability. In addition, since no public dataset exists for urban underground pipeline defect segmentation, data annotation was carried out on the basis of the public Sewer-ML dataset to build a 900-image dataset for the defect segmentation task. Experiments verify the effectiveness and real-time performance of the proposed model: compared with the original Deeplabv3+, mIoU improves by 3.73% and mPA by 1.67%, and the model also shows advantages over other deep-learning-based semantic segmentation algorithms.
Keywords: FCC-Deeplabv3+ algorithm; defect segmentation; urban underground pipelines; criss-cross attention; contrastive learning; deep supervision
16. DDoS Attack Autonomous Detection Model Based on Multi-Strategy Integrate Zebra Optimization Algorithm
Authors: Chunhui Li, Xiaoying Wang, Qingjie Zhang, Jiaye Liang, Aijing Zhang. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 645-674.
Abstract: Previous studies have shown that deep learning is very effective in detecting known attacks. However, when facing unknown attacks, models such as Deep Neural Networks (DNN) combined with Long Short-Term Memory (LSTM), or Convolutional Neural Networks (CNN) combined with LSTM, are built by simple stacking, which suffers from feature loss, low efficiency, and low accuracy. Therefore, this paper proposes an autonomous detection model for Distributed Denial of Service attacks, a Multi-Scale Convolutional Neural Network-Bidirectional Gated Recurrent Units-Single Headed Attention (MSCNN-BiGRU-SHA) model based on a Multi-strategy Integrated Zebra Optimization Algorithm (MI-ZOA). The model is trained and tested with the CICDDoS2019 dataset, and its performance is evaluated on a new GINKS2023 dataset. The hyperparameters for Conv_filter and GRU_unit are optimized using the MI-ZOA. The experimental results show that the test accuracy of the MI-ZOA-based MSCNN-BiGRU-SHA model proposed in this paper is as high as 0.9971 on the CICDDoS2019 dataset, and its evaluation accuracy on the new GINKS2023 dataset created in this paper is 0.9386. Compared to the MSCNN-BiGRU-SHA model based on the plain Zebra Optimization Algorithm (ZOA), the detection accuracy on the GINKS2023 dataset has improved by 5.81%, precision has increased by 1.35%, recall has improved by 9%, and the F1 score has increased by 5.55%. Compared to MSCNN-BiGRU-SHA models developed using Grid Search, Random Search, and Bayesian Optimization, the MI-ZOA-optimized model exhibits better performance in terms of accuracy, precision, recall, and F1 score.
Keywords: distributed denial of service attack; intrusion detection; deep learning; zebra optimization algorithm; multi-strategy integrated zebra optimization algorithm
17. Metro Track Area Recognition Based on an Improved DeeplabV3+ Algorithm
Authors: 刘嘉宁, 赵才友, 张银喜. 铁道建筑 (Railway Engineering), 2025, No. 2, pp. 139-145.
Abstract: To address the imprecise target segmentation, large computational and storage requirements, and slow detection speed of existing deep-learning-based algorithms for metro track area recognition, a metro track area recognition algorithm based on an improved DeeplabV3+ is proposed. The model replaces the backbone with MobileNetV2, a lightweight convolutional neural network with lower model size and computational complexity, introduces the CBAM attention mechanism to improve the network's perception of features, and improves the ASPP (Atrous Spatial Pyramid Pooling) module so that it can encode multi-scale information. A self-built dataset is used to verify the effectiveness of the method, with comparisons against the classical DeeplabV3+, U-Net and Mask R-CNN algorithms. The results show that the precision, accuracy, recall, and mean intersection over union of the proposed algorithm are 94.57%, 94.43%, 93.49%, and 90.24%, respectively, with a training time of 6.5 h, a single-image prediction time of 51.78 ms, and a model size of 23 MB, all better than the other three algorithms. While improving segmentation performance for track area images, the proposed algorithm also improves training and detection efficiency, showing feasibility and practicality for metro track area recognition.
Keywords: metro; track area recognition; deep learning; semantic segmentation; DeeplabV3+ algorithm
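This entry and the next both modify the ASPP module of DeepLabv3+. For reference, here is a minimal PyTorch sketch of a standard ASPP block (parallel atrous convolutions at several dilation rates plus image-level pooling); the channel counts and dilation rates are common defaults, not the modified variants proposed in these papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Standard Atrous Spatial Pyramid Pooling block (baseline DeepLabv3+ style)."""
    def __init__(self, in_ch=320, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        def conv(k, d):
            p = 0 if k == 1 else d
            return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=p, dilation=d, bias=False),
                                 nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # One 1x1 branch plus atrous 3x3 branches at increasing dilation rates.
        self.branches = nn.ModuleList([conv(1, 1)] + [conv(3, r) for r in rates])
        self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Conv2d(in_ch, out_ch, 1, bias=False),
                                        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.project = nn.Sequential(nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False),
                                     nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w), mode="bilinear",
                               align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

if __name__ == "__main__":
    x = torch.randn(2, 320, 32, 32)   # e.g. MobileNetV2 backbone features (assumed shape)
    print(ASPP()(x).shape)            # torch.Size([2, 256, 32, 32])
```

The papers above insert attention after this block or swap some of the 3x3 atrous branches for other convolutions; those modifications are not reproduced here.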
18. An Intelligent Scrap Steel Grading Algorithm Based on an Improved DeepLabv3+ Convolutional Neural Network
Authors: 吉孟扬, 施凯旋, 郭宇, 杨博晟. 轧钢 (Steel Rolling), 2025, No. 5, pp. 150-158.
Abstract: Scrap steel grading is a key link in the rational recycling of steel. To address the insufficient detection accuracy and low efficiency of existing scrap grading methods, this paper proposes an intelligent scrap steel grading algorithm based on an improved DeepLabv3+ convolutional neural network. The algorithm adds a hybrid attention mechanism after the Atrous Spatial Pyramid Pooling (ASPP) layer and replaces part of the atrous convolutions in the ASPP layer with depthwise strip atrous convolutions. An image dataset of scrap piles in real scenarios covering different material types, viewing angles and time periods was built to train the intelligent grading model. The improved algorithm effectively raises the detection accuracy of the network: in the control group with ResNet as the backbone, the mean intersection over union (mIoU) increases by about 2.54%, and in the control group with Xception as the backbone, mIoU increases by about 4.42%, effectively improving scrap semantic segmentation accuracy. A conversion model based on the two factors of thickness and distance converts the pixel proportion occupied by each scrap class in the image into its actual mass proportion, and a fully connected network fits the algorithm's results to the workers' actual grading results. Experiments on a large amount of data show that the grading accuracy of the proposed model reaches 93.75%, clearly outperforming existing methods and meeting the needs of actual production.
Keywords: scrap steel; deep learning; semantic segmentation; DeepLabv3+ convolutional neural network; intelligent algorithm
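A hedged sketch of the pixel-to-mass conversion idea described in this abstract: each class's pixel share is weighted by an assumed thickness, density, and a distance correction before normalizing to mass proportions. The classes, thicknesses, density values, and the quadratic distance correction are illustrative placeholders, not the paper's calibrated model.

```python
import numpy as np

# Hypothetical scrap classes with assumed mean thickness (mm) and steel density (t/m^3).
CLASSES = ["heavy", "medium", "light"]
THICKNESS_MM = np.array([20.0, 8.0, 2.0])
DENSITY = np.array([7.85, 7.85, 7.85])

def pixel_to_mass_ratio(pixel_counts, distances_m, ref_distance_m=5.0):
    """Convert per-class pixel counts from a segmentation mask into mass proportions.
    Pixel area shrinks with the square of the camera distance, so each class region is
    corrected relative to a reference distance (illustrative assumption)."""
    pixel_counts = np.asarray(pixel_counts, dtype=float)
    distances_m = np.asarray(distances_m, dtype=float)
    area_share = pixel_counts / pixel_counts.sum()
    distance_corr = (distances_m / ref_distance_m) ** 2
    # Mass per class ~ visible area * thickness * density.
    mass = area_share * distance_corr * THICKNESS_MM * DENSITY
    return mass / mass.sum()

if __name__ == "__main__":
    # Example: pixel counts per class from one segmented image of a scrap pile,
    # with each class region at a slightly different distance from the camera.
    counts = [120_000, 300_000, 180_000]
    ratios = pixel_to_mass_ratio(counts, distances_m=[5.0, 6.0, 7.0])
    for name, r in zip(CLASSES, ratios):
        print(f"{name}: {r:.1%}")
```

In the paper this raw conversion is further fitted to workers' grading results with a fully connected network, which is not reproduced here.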
19. Heart Disease Prediction Model Using Feature Selection and Ensemble Deep Learning with Optimized Weight
Authors: Iman S. Al-Mahdi, Saad M. Darwish, Magda M. Madbouly. Computer Modeling in Engineering & Sciences, 2025, No. 4, pp. 875-909.
Abstract: Heart disease prediction is a critical issue in healthcare, where accurate early diagnosis can save lives and reduce healthcare costs. The problem is inherently complex due to the high dimensionality of medical data, irrelevant or redundant features, and the variability in risk factors such as age, lifestyle, and medical history. These challenges often lead to inefficient and less accurate models. Traditional prediction methodologies face limitations in handling large feature sets and optimizing classification performance, which can result in overfitting, poor generalization, and high computational cost. This work proposes a classification model for heart disease prediction that addresses these challenges by integrating feature selection through a Genetic Algorithm (GA) with an ensemble deep learning approach optimized using the Tunicate Swarm Algorithm (TSA). The GA selects the most relevant features, reducing dimensionality and improving model efficiency. The selected features are then used to train an ensemble of deep learning models, and the TSA optimizes the weight of each model in the ensemble to enhance prediction accuracy. This hybrid approach tackles high dimensionality, redundant features, and classification performance by introducing an efficient feature selection mechanism and optimizing the weighting of the deep learning models in the ensemble, resulting in superior accuracy, generalization, and efficiency compared to traditional methods. Specifically, the model achieved an accuracy of 97.5%, a sensitivity of 97.2%, and a specificity of 97.8%. Additionally, with a 60-40 data split and 5-fold cross-validation, the model showed significant reductions in training time (90 s), memory consumption (950 MB), and CPU usage (80%), highlighting its effectiveness in processing large, complex medical datasets for heart disease prediction.
Keywords: heart disease prediction; feature selection; ensemble deep learning; optimization; genetic algorithm (GA); tunicate swarm algorithm (TSA)
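A hedged sketch of GA-based wrapper feature selection in the spirit of this abstract's first stage, using a logistic-regression fitness on synthetic data instead of the paper's deep ensemble; the population size, rates, and scoring model are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           n_redundant=10, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a simple classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_feature_selection(n_pop=20, n_gen=15, mut_rate=0.05):
    pop = rng.integers(0, 2, size=(n_pop, X.shape[1]))   # binary masks over features
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[scores.argsort()[::-1][: n_pop // 2]]   # keep the best half
        children = []
        while len(children) < n_pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(X.shape[1]) < mut_rate          # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

if __name__ == "__main__":
    mask, score = ga_feature_selection()
    print(f"selected {mask.sum()} of {X.shape[1]} features, CV accuracy {score:.3f}")
```

In the paper, the surviving feature mask would feed the ensemble of deep models whose combination weights are then tuned by the tunicate swarm algorithm.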
20. Combining deep reinforcement learning with heuristics to solve the traveling salesman problem
Authors: Li Hong, Yu Liu, Mengqiao Xu, Wenhui Deng. Chinese Physics B, 2025, No. 1, pp. 96-106.
Abstract: Recent studies employing deep learning to solve the traveling salesman problem (TSP) have mainly focused on learning construction heuristics. Such methods can improve TSP solutions but still depend on additional programs, while methods that learn improvement heuristics to iteratively refine solutions remain insufficient. Traditional improvement heuristics are guided by a manually designed search strategy and may achieve only limited improvements. This paper proposes a novel framework for learning improvement heuristics, which automatically discovers better improvement policies for iteratively solving the TSP. The framework first designs a new architecture based on a transformer model to parameterize the policy network, introducing an action-dropout layer to prevent action selection from overfitting. It then proposes a deep reinforcement learning approach integrating a simulated annealing mechanism (RL-SA) to learn the pairwise selection policy, aiming to improve the performance of the 2-opt algorithm. RL-SA leverages the whale optimization algorithm to generate initial solutions for better sampling efficiency and uses a Gaussian perturbation strategy to tackle the sparse reward problem of reinforcement learning. Experiments show that the proposed approach is significantly superior to state-of-the-art learning-based methods and further narrows the gap between learning-based methods and highly optimized solvers on benchmark datasets. Moreover, the pre-trained model M can be applied to guide the SA algorithm (named M-SA), which performs better than existing deep models on small-, medium-, and large-scale TSPLIB datasets. Additionally, M-SA achieves excellent generalization on a real-world dataset of global liner shipping routes, with distance reductions ranging from 3.52% to 17.99%.
Keywords: traveling salesman problem; deep reinforcement learning; simulated annealing algorithm; transformer model; whale optimization algorithm
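As a reference point for the classical components this paper builds on (not the RL-SA method itself), the sketch below runs a plain 2-opt improvement heuristic under a simulated-annealing acceptance rule on random cities; the temperature, cooling rate, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def tour_length(tour, dist):
    return dist[tour, np.roll(tour, -1)].sum()

def two_opt_sa(dist, t0=1.0, cooling=0.999, n_iter=20000):
    """2-opt segment reversals accepted with a simulated-annealing (Metropolis) criterion."""
    n = dist.shape[0]
    tour = rng.permutation(n)
    best = tour.copy()
    cur_len = best_len = tour_length(tour, dist)
    temp = t0
    for _ in range(n_iter):
        i, j = sorted(rng.integers(0, n, size=2))
        if j - i < 1:
            continue
        cand = tour.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]            # reverse a segment (2-opt move)
        cand_len = tour_length(cand, dist)
        delta = cand_len - cur_len
        if delta < 0 or rng.random() < np.exp(-delta / temp):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour.copy(), cur_len
        temp *= cooling                                 # geometric cooling schedule
    return best, best_len

if __name__ == "__main__":
    cities = rng.random((60, 2))                        # 60 random cities in the unit square
    dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)
    tour, length = two_opt_sa(dist)
    print("tour length:", round(length, 3))
```

The paper replaces the hand-designed move selection in such a loop with a learned policy and seeds the search with whale-optimization initial tours.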