Journal Articles
533,959 articles found
1. Research Review of Deep Learning Algorithms for Agricultural Disease Image Classification
Authors: Shengjiu JIANG, Qian WANG. Plant Diseases and Pests, 2026, No. 1, pp. 30-34.
In the context of rural revitalization and the development of smart agriculture, image classification technology based on deep learning has emerged as a crucial tool for digital monitoring and intelligent prevention and control of agricultural diseases. This paper provides a systematic review of the evolutionary development of algorithms within this field. Addressing challenges such as domain drift and limited global awareness in classical convolutional neural networks (CNNs) applied to complex agricultural environments, the paper focuses on the latest advancements in vision transformers (ViT) and their hybrid architectures to enhance cross-domain robustness and fine-grained recognition capabilities. In response to the challenges posed by scarce long-tail data and limited edge computing power in real-world scenarios, the paper explores solutions related to few-shot learning and ultra-lightweight network deployment. Finally, a forward-looking analysis is presented on the application paradigms of multimodal feature fusion, vision-based large models, and explainable artificial intelligence (AI) within smart plant protection. This analysis aims to offer theoretical insights for the development of efficient and transparent intelligent diagnostic systems for agricultural diseases, thereby supporting the advancement of digital agriculture and the construction of a robust agricultural nation.
Keywords: Agricultural disease image; Classification algorithm; Deep learning; Research review
2. Flood predictions from metrics to classes by multiple machine learning algorithms coupling with clustering-deduced membership degree
Authors: ZHAI Xiaoyan, ZHANG Yongyong, XIA Jun, ZHANG Yongqiang, TANG Qiuhong, SHAO Quanxi, CHEN Junxu, ZHANG Fan. Journal of Geographical Sciences, 2026, No. 1, pp. 149-176.
Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies have mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood predictions, which can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies, have rarely been investigated. This study proposed an approach for predicting flood regime metrics and event classes that couples machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates of up to 100% against the four classes clustered from nine regime metrics. The nonlinear algorithms (Multiple Linear Regression, Random Forest, and least squares-Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%-85.4% and 47.2%-76.0% in the calibration and validation periods, respectively, particularly for the slow and late flood events. The predictive capability of the proposed approach for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
Keywords: flood regime metrics; class prediction; machine learning algorithms; hydrological model
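The clustering-deduced membership degree used in the study above can be illustrated with a minimal NumPy sketch. Everything concrete here is hypothetical (the centroids, the two-metric space, and the inverse-distance weighting), and the paper's own membership formulation may differ:

```python
import numpy as np

def membership_degrees(x, centroids, eps=1e-12):
    """Normalized inverse-distance membership of sample x in each class.

    A common fuzzy-clustering heuristic: closer centroids receive higher
    degrees, and the degrees sum to 1 across classes.
    """
    dist = np.linalg.norm(centroids - x, axis=1) + eps
    inv = 1.0 / dist
    return inv / inv.sum()

# Hypothetical centroids of four flood classes in a standardized
# two-metric space (e.g., flood peak and flood duration).
centroids = np.array([[0.0, 0.0],
                      [1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0]])

event = np.array([0.9, 0.1])                 # an event near the second centroid
degrees = membership_degrees(event, centroids)
predicted_class = int(np.argmax(degrees))    # class with the highest membership
```

A metric vector predicted by any of the five regression algorithms could be passed through `membership_degrees` in the same way, turning metric predictions into a class label.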
3. Research on UAV-MEC Cooperative Scheduling Algorithms Based on Multi-Agent Deep Reinforcement Learning
Authors: Yonghua Huo, Ying Liu, Anni Jiang, Yang Yang. Computers, Materials & Continua, 2026, No. 3, pp. 1823-1850.
With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include Alternating Layer Normalization (ALN) in the actor to suppress gradient variance, Residual Orthogonalization (RO) in the critic to reduce the correlation between the twin Q-value estimates, and a dynamic-temperature reward to enable adaptive trade-offs during training. On a multi-user, dual-link simulation platform, we conduct ablation and baseline comparisons. The results reveal that the proposed method has better convergence and stability. Compared with MADDPG, TD3, and DSAC, our algorithm achieves more robust performance across key metrics.
Keywords: UAV-MEC networks; multi-agent deep reinforcement learning; MATD3; task offloading
4. Flyrock distance prediction using a hybrid LightGBM ensemble learning and two nature-based metaheuristic algorithms
Authors: Qiang Wang, Jianwei Xiang, Pengfei Yue, Shihua Zhang, Yijun Lu, Runhua Zhang, Jiandong Huang. Journal of Rock Mechanics and Geotechnical Engineering, 2026, No. 1, pp. 129-150.
Traditional mining in open pit mines often uses explosives, leading to environmental hazards, with flyrock being a critical issue. In detail, excess flying rock beyond the designated explosion area was identified as the primary cause of fatal and non-fatal blasting hazards in open pit mining. Therefore, accurate and reliable prediction of flyrock becomes crucial for effectively managing and mitigating the associated problems. This study used the Light Gradient Boosting Machine (LightGBM) model to predict flyrock in a lead-zinc mine, with promising results. To improve its accuracy, the multi-verse optimizer (MVO) and ant lion optimizer (ALO) metaheuristic algorithms were introduced. Results showed that MVO-LightGBM outperformed conventional LightGBM. Additionally, decision tree (DT), support vector machine (SVM), and classification and regression tree (CART) models were trained and compared with MVO-LightGBM. The MVO-LightGBM model excelled over DT, SVM, and CART. This study highlights MVO-LightGBM's effectiveness and potential for broader applications. Furthermore, a multiple parametric sensitivity analysis (MPSA) algorithm was employed to quantify parameter sensitivity. MPSA results indicated that the highest and lowest sensitivities correspond to blasted rock per hole and spacing, with γ = 1752.12 and γ = 49.52, respectively.
Keywords: Flyrock distance; blasting; ensemble learning; light gradient boosting machine (LightGBM); ant lion optimizer; multi-verse optimizer
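The metaheuristic-plus-LightGBM pattern in the study above boils down to: define an objective that scores a hyperparameter vector (the cross-validated error of the model), then let a search algorithm minimize it. The sketch below substitutes a toy quadratic objective for the LightGBM cross-validation and plain random search for MVO/ALO, so the parameter names, bounds, and optimum location are assumptions for illustration only:

```python
import random

def cv_error(params):
    """Stand-in objective. In the paper's setting this would be the
    cross-validated error of a LightGBM flyrock model; here it is a toy
    function whose minimum sits at learning_rate=0.1, num_leaves=31."""
    return (params["learning_rate"] - 0.1) ** 2 \
        + ((params["num_leaves"] - 31) / 100.0) ** 2

BOUNDS = {"learning_rate": (0.01, 0.5), "num_leaves": (8, 256)}

def random_search(n_iter=200, seed=0):
    """Sample candidate hyperparameters uniformly and keep the best."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(n_iter):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
        err = cv_error(cand)
        if err < best_err:
            best_params, best_err = cand, err
    return best_params, best_err

best_params, best_err = random_search()
```

Swapping `random_search` for a population-based optimizer such as MVO or ALO changes only how candidates are proposed; the objective interface stays the same.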
5. An Intelligent System for Pavement Health Monitoring Using Perception Sensors Aided Deep Learning Algorithms
Authors: Wael A. Altabey. Structural Durability & Health Monitoring, 2026, No. 2, pp. 78-96.
The study of long-term pavement performance is a fundamental topic in highway engineering. Through comprehensive and in-depth research on the pavement system, previously scattered, one-sided, superficial, and perceptual knowledge and experience are consolidated into a systematic and complete engineering theory, thereby providing powerful guidance for the practice of pavement design, construction, maintenance, operation, and management. In this research, monitoring system deployment technology for the automatic observation of long-term pavement performance is developed. By burying a variety of sensors in different parts of the road surface, base, roadbed, slope, and other locations, a sensor monitoring network based on Internet of Things technology is formed to achieve accurate, reliable, and continuous observation of environmental meteorology, physical state, mechanical response, structural deformation, and other indicators. The perception data collected from these sensors, including temperature, humidity, pressure, asphalt strain, and displacement, are used to train a deep learning model based on a Convolutional Neural Network (CNN) algorithm that copes with the large data volume and strict real-time requirements. This model predicts multi-point pavement displacement to detect damage such as asphalt cracks and potholes. The proposed CNN achieved a high accuracy rate, recall rate, and F-score of 87.24%, 84.12%, and 85.96%, respectively. This work highlights the potential of combining diverse sensors with deep learning algorithms for monitoring long-term pavement performance.
Keywords: structural health monitoring (SHM); long-term pavement performance; pavement sensors; Internet of Things (IoT); deep learning; convolutional neural network
6. Empirical tropospheric zenith wet delay models with strong generalization capability based on a robust machine learning fusion algorithm
Authors: Jiahao Zhang, Qin Liang, Yunqing Huang. Geodesy and Geodynamics, 2026, No. 2, pp. 211-224.
Tropospheric zenith wet delay (ZWD) plays a vital role in the analysis of space geodetic observations. In recent years, machine learning methods have been increasingly applied to improve the accuracy of ZWD calculations. However, a single machine learning model has limited generalization capability. To address this limitation, this study introduces a novel machine learning fusion (MLF) algorithm with stronger generalization capability to enhance ZWD modeling and prediction accuracy. The MLF algorithm uses a two-layer structure integrating extra trees (ET), a backpropagation neural network (BPNN), and linear regression models. By comparing the root mean square error (RMSE) of these models, we found that both the ET-based and MLF-based models outperform the random forest (RF)-based and BPNN-based models in terms of internal and external accuracy, across both the surface meteorological data-based and blind models. The improvement in external accuracy is particularly significant in the blind models. Our results show that the MLF (with an RMSE of 3.93 cm) and ET (3.99 cm) models outperform the traditional GPT3 model (4.07 cm), while the RF (4.21 cm) and BPNN (4.14 cm) models have worse external accuracy than the GPT3 model. It is worth noting that the BPNN suffered from overfitting during the external accuracy tests, which the MLF avoided. In summary, regardless of the availability of surface meteorological data, the MLF-based empirical models demonstrate superior internal and external accuracy compared with the other models tested in this study.
Keywords: tropospheric zenith wet delay; machine learning; extra trees; machine learning fusion algorithm; empirical models
7. Multi-Algorithm Machine Learning Framework for Predicting Crystal Structures of Lithium Manganese Silicate Cathodes Using DFT Data
Authors: Muhammad Ishtiaq, Yeon-Ju Lee, Annabathini Geetha Bhavani, Sung-Gyu Kang, Nagireddy Gari Subba Reddy. Computers, Materials & Continua, 2026, No. 4, pp. 612-627.
Lithium manganese silicate (Li-Mn-Si-O) cathodes are key components of lithium-ion batteries, and their physical and mechanical properties are strongly influenced by their underlying crystal structures. In this study, a range of machine learning (ML) algorithms were developed and compared to predict the crystal systems of Li-Mn-Si-O cathode materials using density functional theory (DFT) data obtained from the Materials Project database. The dataset comprised 211 compositions characterized by key descriptors, including formation energy, energy above the hull, bandgap, atomic site number, density, and unit cell volume. These features were used to classify the materials into monoclinic (0) and triclinic (1) crystal systems. A comprehensive comparison of classification algorithms, including Decision Tree, Random Forest, XGBoost, Support Vector Machine, k-Nearest Neighbor, Stochastic Gradient Descent, Gaussian Naive Bayes, Gaussian Process, and Artificial Neural Network (ANN), was conducted. Among these, the optimized ANN architecture (6-14-14-14-1) exhibited the highest predictive performance, achieving an accuracy of 95.3%, a Matthews correlation coefficient (MCC) of 0.894, and an F-score of 0.963, demonstrating excellent consistency with the DFT-predicted crystal structures. Meanwhile, the Random Forest and Gaussian Process models also exhibited reliable and consistent predictive capability, indicating their potential as complementary approaches, particularly when data are limited or computational efficiency is required. This comparative framework provides valuable insights into model selection for crystal system classification in complex cathode materials.
Keywords: machine learning; crystal structure classification; cathode materials; batteries
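As a rough illustration of the ANN classifier described above, the sketch below trains a small feedforward network with NumPy on synthetic six-feature data. The data, the 6-8-1 architecture, and the training settings are all hypothetical; the paper's tuned network is the deeper 6-14-14-14-1, trained on Materials Project descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the six DFT descriptors (formation energy, energy
# above hull, bandgap, site number, density, volume); labels follow a toy rule.
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy monoclinic(0)/triclinic(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 tanh units, sigmoid output for binary classification.
W1 = rng.normal(scale=0.5, size=(6, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2).ravel()

def bce(p, y, eps=1e-9):
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

loss_start = bce(forward(X)[1], y)
lr = 0.5
for _ in range(800):                          # full-batch gradient descent
    h, p = forward(X)
    g = (p - y)[:, None] / len(y)             # dLoss/d(output pre-activation)
    dW2, db2 = h.T @ g, g.sum(0)
    dh = (g @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

p_final = forward(X)[1]
loss_end = bce(p_final, y)
train_acc = float(np.mean((p_final > 0.5) == y))
```

The paper's comparison across nine algorithms would wrap exactly this kind of fit-and-score loop around each candidate model, with MCC and F-score computed alongside accuracy.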
8. A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles
Authors: Junjun Ren, Guoqiang Chen, Zheng-Yi Chai, Dong Yuan. Computers, Materials & Continua, 2026, No. 1, pp. 2111-2136.
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To this end, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives from distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives with Radial Basis Function Networks (RBFN), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared with existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Keywords: deep reinforcement learning; Internet of Vehicles; multi-objective optimization; cloud-edge computing; computation offloading; service caching
9. Bearing capacity prediction of open caissons in two-layered clays using five tree-based machine learning algorithms (Cited by 2)
Authors: Rungroad Suppakul, Kongtawan Sangjinda, Wittaya Jitchaijaroen, Natakorn Phuksuksakul, Suraparb Keawsawasvong, Peem Nuaklong. Intelligent Geoengineering, 2025, No. 2, pp. 55-65.
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability to diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five tree-based ensemble machine learning (ML) algorithms, namely random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, varying geometrical and soil parameters. The FELA was performed in OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can help reduce computational costs while improving reliability in the early stages of foundation design.
Keywords: two-layered clay; open caisson; tree-based algorithms; FELA; machine learning
10. Methodology for Detecting Non-Technical Energy Losses Using an Ensemble of Machine Learning Algorithms
Authors: Irbek Morgoev, Roman Klyuev, Angelika Morgoeva. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 1381-1399.
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies. Addressing them determines the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and the Smart Grid allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. This model is an ensemble meta-algorithm (stacking) that generalizes random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. The superior accuracy of the proposed meta-algorithm over the basic classifiers is experimentally confirmed on the test sample. Owing to its good accuracy (ROC-AUC = 0.88), such a model can serve as the methodological basis for a decision support system whose purpose is to form a sample of suspected NTL sources. The use of such a sample will allow the top management of electric distribution companies to increase the efficiency of field inspections, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
Keywords: non-technical losses; smart grid; machine learning; electricity theft; fraud; ensemble algorithm; hybrid method; forecasting; classification; supervised learning
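The stacking meta-algorithm described above can be sketched in a self-contained way: base classifiers are trained on out-of-fold data, their out-of-fold scores become meta-features, and a meta-learner generalizes them. The sketch below uses synthetic consumption-like features and substitutes a logistic-regression base learner and a nearest-centroid base learner for the paper's random forest, LightGBM, and ANN ensemble, so every concrete choice here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily-consumption features; label 1 marks a simulated NTL case.
X = rng.normal(size=(300, 5))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)

def fit_logreg(X, y, lr=0.3, epochs=300):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = (p - y) / len(y)
        w -= lr * (X.T @ g); b -= lr * g.sum()
    return w, b

def predict_logreg(model, X):
    w, b = model
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def fit_centroid(X, y):
    return X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)

def predict_centroid(model, X):
    c1, c0 = model
    d1 = np.linalg.norm(X - c1, axis=1)
    d0 = np.linalg.norm(X - c0, axis=1)
    return 1.0 / (1.0 + np.exp(d1 - d0))     # closer to c1 -> score near 1

base_learners = [(fit_logreg, predict_logreg), (fit_centroid, predict_centroid)]

# Two-fold out-of-fold predictions become the meta-features (stacking).
fold_a, fold_b = np.arange(150), np.arange(150, 300)
Z = np.zeros((300, len(base_learners)))
for j, (fit, predict) in enumerate(base_learners):
    for train_idx, test_idx in [(fold_a, fold_b), (fold_b, fold_a)]:
        Z[test_idx, j] = predict(fit(X[train_idx], y[train_idx]), X[test_idx])

meta = fit_logreg(Z, y)                      # meta-learner generalizes the bases
stacked_pred = (predict_logreg(meta, Z) > 0.5).astype(float)
accuracy = float(np.mean(stacked_pred == y))
```

Training the meta-learner on out-of-fold base scores, rather than in-sample ones, is what keeps the stacking layer from simply memorizing the base learners' training errors.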
11. Neuromorphic devices assisted by machine learning algorithms
Authors: Ziwei Huo, Qijun Sun, Jinran Yu, Yichen Wei, Yifei Wang, Jeong Ho Cho, Zhong Lin Wang. International Journal of Extreme Manufacturing, 2025, No. 4, pp. 178-215.
Neuromorphic computing extends beyond sequential processing modalities and outperforms traditional von Neumann architectures in implementing more complicated tasks, e.g., pattern processing, image recognition, and decision making. It features parallel interconnected neural networks, high fault tolerance, robustness, autonomous learning capability, and ultralow energy dissipation. Artificial neural network (ANN) algorithms have also been widely used because of their facile self-organization and self-learning capabilities, which mimic those of the human brain. To some extent, ANNs reflect several basic functions of the human brain and can be efficiently integrated into neuromorphic devices to perform neuromorphic computations. This review highlights recent advances in neuromorphic devices assisted by machine learning algorithms. First, the basic structure of simple neuron models inspired by biological neurons and the information processing in simple neural networks are discussed. Second, the fabrication and research progress of neuromorphic devices are presented with regard to materials and structures. Furthermore, the fabrication of neuromorphic devices, including stand-alone neuromorphic devices, neuromorphic device arrays, and integrated neuromorphic systems, is discussed and demonstrated with reference to representative studies. The applications of neuromorphic devices assisted by machine learning algorithms in different fields are categorized and investigated. Finally, perspectives, suggestions, and potential solutions to the current challenges of neuromorphic devices are provided.
Keywords: neuromorphic devices; machine learning algorithms; artificial synapses; memristors; field-effect transistors
12. A Comparison among Different Machine Learning Algorithms in Land Cover Classification Based on the Google Earth Engine Platform: The Case Study of Hung Yen Province, Vietnam
Authors: Le Thi Lan, Tran Quoc Vinh, Phạm Quy Giang. Journal of Environmental & Earth Sciences, 2025, No. 1, pp. 132-139.
Based on the Google Earth Engine cloud computing platform, this study employed three algorithms, Support Vector Machine (SVM), Random Forest, and Classification and Regression Tree, to classify land cover in Hung Yen province of Vietnam using Landsat 8 OLI satellite images, a free data source with reasonable spatial and temporal resolution. The results show that all three algorithms classified five basic types of land cover (Rice land, Water bodies, Perennial vegetation, Annual vegetation, and Built-up areas) well, with overall accuracy greater than 80% and Kappa coefficients greater than 0.8. Among the three algorithms, SVM achieved the highest accuracy, with an overall accuracy of 86% and a Kappa coefficient of 0.88. Land cover classification based on the SVM algorithm shows that Built-up areas cover the largest area with nearly 31,495 ha, accounting for more than 33.8% of the total natural area, followed by Rice land and Perennial vegetation, which cover over 30,767 ha (33%) and 15,637 ha (16.8%), respectively. Water bodies and Annual vegetation cover the smallest areas, with 8,820 ha (9.5%) and 6,302 ha (6.8%), respectively. The results of this study can be used for land use management and planning as well as other natural resource and environmental management purposes in the province.
Keywords: Google Earth Engine; land cover; Landsat; machine learning algorithm
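The two accuracy measures quoted above, overall accuracy and the Kappa coefficient, come straight from the confusion matrix of a classified map against reference data. A small sketch (the confusion matrix values are made up for illustration):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = classified classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                    # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2    # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Hypothetical two-class confusion matrix.
cm = [[50, 10],
      [5, 35]]
overall_accuracy, kappa = accuracy_and_kappa(cm)
```

Because kappa subtracts the agreement expected by chance, it is the stricter of the two figures and is commonly reported alongside overall accuracy in remote sensing work.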
13. Optimization Algorithms Based on Double-Integral Coevolutionary Neurodynamics in Deep Learning
Authors: Dan Su, Jie Han, Chunhua Yang, Weihua Gui. IEEE/CAA Journal of Automatica Sinica, 2025, No. 6, pp. 1236-1245.
Deep neural networks are increasingly exposed to attack threats, and at the same time the need for privacy protection is growing. As a result, the challenge of developing neural networks that are robust and generalize well while preserving privacy is pressing. Training neural networks under privacy constraints is one way to minimize privacy leakage, commonly by adding noise to the data or the model. However, noise may cause gradient directions to deviate from the optimal trajectory during training, leading to unstable parameter updates, slow convergence, and reduced generalization capability. To overcome these challenges, we propose an optimization algorithm based on double-integral coevolutionary neurodynamics (DICND), designed to accelerate convergence and improve generalization under noisy conditions. Theoretical analysis proves the global convergence of the DICND algorithm and demonstrates its ability to converge to near-global minima efficiently under noisy conditions. Numerical simulations and image classification experiments further confirm the DICND algorithm's significant advantages in enhancing generalization performance.
Keywords: coevolutionary neurodynamics (CND); deep learning; generalization; noise resistance; optimization algorithm
14. Reaction process optimization based on interpretable machine learning and metaheuristic optimization algorithms
Authors: Dian Zhang, Bo Ouyang, Zheng-Hong Luo. Chinese Journal of Chemical Engineering, 2025, No. 8, pp. 77-85.
The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, addressing the problems posed by multiple variables, nonlinearities, and uncertainties during optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize a reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive model based on artificial neural network (ANN) models. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The results show that the methanol:palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is the most suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol:PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications in various reaction optimization processes.
Keywords: reaction process optimization; interpretable machine learning; metaheuristic optimization algorithm; biodiesel
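The ANN-SA coupling reported above follows a generic pattern: a trained surrogate model scores a candidate set of process parameters, and simulated annealing searches the parameter space for the best score. The sketch below replaces the ANN surrogate with a made-up smooth function; its peak location, the bounds, and the cooling schedule are all assumptions chosen only to make the mechanics visible:

```python
import math
import random

def surrogate_yield(catalyst_pct, molar_ratio, time_min):
    """Toy stand-in for the trained ANN; the peak location is invented."""
    return -((catalyst_pct - 3.0) ** 2
             + 0.1 * (molar_ratio - 8.7) ** 2
             + 0.002 * (time_min - 30.0) ** 2)

BOUNDS = [(1.0, 5.0),    # catalyst concentration, % (mass)
          (4.0, 12.0),   # methanol:PFAD molar ratio
          (10.0, 60.0)]  # reaction time, min

def anneal(n_iter=2000, t0=1.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in BOUNDS]
    fx, temp = surrogate_yield(*x), t0
    best_x, best_f = list(x), fx
    for _ in range(n_iter):
        # Perturb one randomly chosen parameter, clamped to its bounds.
        i = rng.randrange(len(BOUNDS))
        lo, hi = BOUNDS[i]
        cand = list(x)
        cand[i] = min(hi, max(lo, cand[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
        fc = surrogate_yield(*cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if fc > fx or rng.random() < math.exp((fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx > best_f:
                best_x, best_f = list(x), fx
        temp *= cooling
    return best_x, best_f

best_x, best_f = anneal()
```

The accept-worse-moves rule is what lets SA escape local optima of the surrogate early on; as the temperature cools, the search settles into greedy hill climbing.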
15. A fully automated quantitative analysis method based on deep learning algorithms for immunohistochemical staining expression intensities
Authors: Yongjian Deng, Bojun Cai, Xiaomei Wang. Intelligent Oncology, 2025, No. 3, pp. 256-264.
This paper focuses on the application of deep learning techniques and image processing algorithms in immunohistochemistry analysis, specifically targeting automated quantitative methods for nuclear, membrane, and cytoplasmic expression of animal cells in whole-slide images. Cell nuclei, membranes, and cytoplasm were precisely identified and quantified by employing optical density separation to differentiate the hematoxylin and 3,3'-diaminobenzidine staining components, in combination with the CellViT nuclear segmentation algorithm and the region growing algorithm. Experimental validation demonstrates that the proposed algorithm performs excellently in terms of accuracy and recall. Compared with traditional manual interpretation, the algorithm achieves greater accuracy on specific quantitative metrics.
Keywords: deep learning; immunohistochemistry analysis; image processing algorithm; optical density separation; quantification of whole-slide images
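The optical density separation step mentioned above is classical color deconvolution: pixel intensities are converted to optical density, then unmixed against known stain vectors. The sketch below uses the widely cited Ruifrok-Johnston H-DAB vectors and a made-up two-pixel image; the paper's exact vectors and calibration may differ:

```python
import numpy as np

# Ruifrok & Johnston optical-density vectors for hematoxylin and DAB;
# the third channel is a residual built as their cross product.
H = np.array([0.650, 0.704, 0.286])
DAB = np.array([0.268, 0.570, 0.776])
residual = np.cross(H, DAB)
M = np.stack([H / np.linalg.norm(H),
              DAB / np.linalg.norm(DAB),
              residual / np.linalg.norm(residual)])

def separate_stains(rgb):
    """Unmix an RGB image (floats in (0, 1]) into H, DAB, residual amounts."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))   # Beer-Lambert optical density
    return od @ np.linalg.inv(M)              # per-stain concentrations

# Hypothetical 1x2 image: one pure-hematoxylin pixel and one pure-DAB pixel,
# each synthesized with a stain amount of 0.8.
pixel_h = 10.0 ** (-0.8 * M[0])
pixel_d = 10.0 ** (-0.8 * M[1])
image = np.stack([pixel_h, pixel_d])[None, :, :]
concentrations = separate_stains(image)
```

In a full pipeline, the hematoxylin channel would feed a nuclear segmentation step while the DAB channel drives intensity quantification; that wiring is not shown here.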
16. Comparative analysis of different machine learning algorithms for urban footprint extraction in diverse urban contexts using high-resolution remote sensing imagery
Authors: GUI Baoling, Anshuman BHARDWAJ, Lydia SAM. Journal of Geographical Sciences, 2025, No. 3, pp. 664-696.
While algorithms have been developed for land-use mapping in urban settings, there have been few investigations into the extraction of the urban footprint (UF). To address this research gap, this study employs several widely used image classification methods, grouped into three categories, and evaluates their segmentation capabilities for extracting the UF across eight cities. The results indicate that pixel-based methods excel only in clear urban environments, and their overall accuracy is not consistently high. Random forest (RF) and support vector machine (SVM) perform well but lack stability in object-based UF extraction, being influenced by feature selection and classifier performance. Deep learning enhances feature extraction but requires powerful computing resources and faces challenges with complex urban layouts. The Segment Anything Model (SAM) excels in medium-sized urban areas but falters in intricate layouts. Integrating traditional and deep learning methods optimizes UF extraction, balancing accuracy and processing efficiency. Future research should focus on adapting algorithms to diverse urban landscapes to enhance the accuracy and applicability of UF extraction.
Keywords: urban footprint mapping; high-resolution remote sensing imagery; machine learning; deep learning; Segment Anything Model
Exploring the Effectiveness of Machine Learning and Deep Learning Algorithms for Sentiment Analysis:A Systematic Literature Review
17
Authors: Jungpil Shin, Wahidur Rahman, Tanvir Ahmed Bakhtiar Mazrur, Md. Mohsin Mia, Romana Idress Ekfa, Md. Sajib Rana, Pankoo Kim. Computers, Materials & Continua, 2025, Issue 9, pp. 4105-4153 (49 pages)
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
Keywords: Natural Language Processing (NLP); Machine learning (ML); sentiment analysis; deep learning; textual data
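Of the three method families the review covers, the lexicon-based one is simple enough to sketch in a few lines. A minimal scorer with polarity flipping after negators, assuming a tiny illustrative lexicon (real systems use resources like VADER or SentiWordNet):

```python
# Minimal lexicon-based sentiment scorer: sum word polarities, flipping
# the sign of a word that immediately follows a negator. The lexicon and
# weights below are illustrative only.
LEXICON = {"good": 1.0, "great": 2.0, "excellent": 2.0,
           "bad": -1.0, "terrible": -2.0, "poor": -1.0}
NEGATORS = {"not", "never", "no"}

def sentiment(text):
    """Return a signed sentiment score; >0 positive, <0 negative."""
    score, flip = 0.0, False
    for token in text.lower().split():
        token = token.strip(".,!?")
        if token in NEGATORS:
            flip = True
            continue
        if token in LEXICON:
            score += -LEXICON[token] if flip else LEXICON[token]
        flip = False  # negation scope: one following token
    return score

mixed = sentiment("The service was not good, the food was excellent")
```

The one-token negation scope is exactly the kind of brittleness that pushed the field toward the ML and LSTM-based models the review surveys.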
Screening of Key Genes in Pre-eclampsia and Construction of a Risk-Assessment Model Based on Machine-Learning Algorithms
18
Authors: FAN Li-ping, XIE Xiao-hong, RAO Cai-li. Chinese Journal of Biomedical Engineering (English Edition), 2025, Issue 3, pp. 109-117 (9 pages)
Objective: To identify potential key genes associated with pre-eclampsia through bioinformatics analysis, construct predictive models using machine-learning algorithms, and evaluate the models' performance in predicting pre-eclampsia. Methods: Gene-expression microarray datasets GSE10588, GSE66273, and GSE30186 related to pre-eclampsia were downloaded from the Gene Expression Omnibus (GEO). Data were normalized using R, and differentially expressed genes (DEGs) were identified. LASSO regression was applied to further filter DEGs. Based on the selected DEGs, six machine-learning models, logistic regression (LR), random forest (RF), support vector machine (SVM), K-nearest neighbors (KNN), neural network (NN), and eXtreme gradient boosting (XGBoost), were built in R, and their performance was validated. Results: From the three datasets, a total of 1,363 genes were extracted. LASSO regression narrowed these to 265 candidate key genes. Multivariate analysis ultimately identified four genes closely associated with pre-eclampsia: EVI5, GCLM, LEP, and SYNPO2L. Using these four key genes, six machine-learning models were constructed. Receiver operating characteristic (ROC) analysis showed that all models achieved AUC > 0.9: LR (AUC = 0.983, 95% CI = 0.942-0.998), RF (AUC = 0.961, 95% CI = 0.912-0.987), SVM (AUC = 0.936, 95% CI = 0.879-0.972), KNN (AUC = 0.970, 95% CI = 0.924-0.992), NN (AUC = 0.916, 95% CI = 0.854-0.958), and XGBoost (AUC = 0.952, 95% CI = 0.900-0.982). There was no statistically significant difference among the AUCs of the models (P > 0.05). Conclusion: This study identified four key genes linked to pre-eclampsia through integrated bioinformatics analysis. Predictive models built on these genes can accurately forecast the occurrence of pre-eclampsia, suggesting that the four genes may serve as potential biomarkers for early diagnosis and therapeutic targeting of pre-eclampsia.
Keywords: pre-eclampsia; gene screening; bioinformatics; machine-learning algorithms
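The AUC values reported above have a simple probabilistic reading: the chance that a randomly chosen positive case is scored above a randomly chosen negative one. A stdlib-only sketch via the Mann-Whitney statistic, with made-up labels and scores (not the study's data):

```python
# ROC AUC as the Mann-Whitney statistic: the fraction of
# (positive, negative) pairs where the positive is ranked higher,
# counting ties as half a win.
def roc_auc(labels, scores):
    """labels: 0/1 per sample; scores: model outputs, higher = more positive."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative toy data: one positive is mis-ranked below one negative.
auc = roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
```

An AUC above 0.9, as all six models achieve in the study, means fewer than one in ten such pairs is mis-ranked.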
Homomorphic Encryption for Machine Learning Applications with CKKS Algorithms:A Survey of Developments and Applications
19
Authors: Lingling Wu, Xu An Wang, Jiasen Liu, Yunxuan Su, Zheng Tu, Wenhao Liu, Haibo Lei, Dianhua Tang, Yunfei Cao, Jianping Zhang. Computers, Materials & Continua, 2025, Issue 10, pp. 89-119 (31 pages)
Due to the rapid advancement of information technology, data has emerged as the core resource driving decision-making and innovation across all industries. As the foundation of artificial intelligence, machine learning (ML) has expanded its applications into intelligent recommendation systems, autonomous driving, medical diagnosis, and financial risk assessment. However, it relies on massive datasets, which contain sensitive personal information. Consequently, Privacy-Preserving Machine Learning (PPML) has become a critical research direction. To address the challenges of efficiency and accuracy in encrypted data computation within PPML, Homomorphic Encryption (HE) is a crucial solution, owing to its capability to facilitate computations on encrypted data. However, the integration of machine learning and homomorphic encryption technologies faces multiple challenges. Against this backdrop, this paper reviews homomorphic encryption technologies, with a focus on the advantages of the Cheon-Kim-Kim-Song (CKKS) algorithm in supporting approximate floating-point computations. It reviews the development of three machine learning techniques, K-nearest neighbors (KNN), K-means clustering, and face recognition, in integration with homomorphic encryption; proposes feasible schemes for typical scenarios; and summarizes limitations and future optimization directions. Additionally, it presents a systematic exploration of the integration of homomorphic encryption and machine learning, spanning the essence of the technology, application implementation, performance trade-offs, technological convergence, and future pathways for advancing the field.
Keywords: Homomorphic encryption; machine learning; CKKS; PPML
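The property the survey builds on, computing on ciphertexts so the result decrypts to the computation on plaintexts, is easiest to see in a scheme far simpler than CKKS. Below is a toy Paillier cryptosystem (additively homomorphic over integers, unlike CKKS's approximate arithmetic over encoded real vectors) with deliberately tiny, insecure parameters, purely to demonstrate the homomorphism:

```python
import math
import random

# Toy Paillier: Dec(Enc(m1) * Enc(m2) mod n^2) == m1 + m2.
# Primes are demo-sized and utterly insecure; real deployments use
# thousands of bits, and CKKS replaces this with lattice-based encryption.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
# mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = encrypt(20), encrypt(22)
total = decrypt((a * b) % n2)  # multiplying ciphertexts adds plaintexts
```

CKKS extends this idea to both addition and multiplication on packed vectors of approximate numbers, which is what makes the KNN, K-means, and face-recognition schemes in the survey feasible.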
Machine Learning-Based High Entropy Alloys-Algorithms and Workflow:A Review
20
Authors: Hao Cheng, Cheng-Lei Wang, Xiao-Du Li, Li Pan, Chao-Jie Liang, Wei-Jie Liu. Acta Metallurgica Sinica (English Letters), 2025, Issue 9, pp. 1453-1480 (28 pages)
High-entropy alloys (HEAs) have attracted considerable attention because of their excellent properties and broad compositional design space. However, traditional trial-and-error methods for screening HEAs are costly and inefficient, thereby limiting the development of new materials. Although density functional theory (DFT), molecular dynamics (MD), and thermodynamic modeling have improved design efficiency, their indirect connection to properties has limited calculation and prediction. With the awarding of the Nobel Prizes in Physics and Chemistry to researchers in artificial intelligence (AI), there has been renewed enthusiasm for applying machine learning (ML) to alloy materials. In this study, common and advanced ML models and strategies in HEA design are introduced, and the mechanisms by which ML contributes to composition optimization and performance prediction are examined through case studies. The general workflow of applying ML to material design is also introduced from the programmer's point of view, including data preprocessing, feature engineering, model training, evaluation, optimization, and interpretability. Furthermore, data scarcity, multi-model coupling, and other challenges and opportunities at the current stage are analyzed, and an outlook on future research directions is provided.
Keywords: Machine learning; High-entropy alloys; Artificial intelligence; Alloy design
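In the feature-engineering step of the workflow described above, one descriptor that recurs in HEA work is the ideal configurational mixing entropy, the quantity that gives high-entropy alloys their name. A small sketch (a standard descriptor, not necessarily among the features this particular review's case studies use):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(composition):
    """Ideal configurational entropy dS_mix = -R * sum(c_i * ln c_i),
    with c_i the mole fractions; a common hand-crafted ML feature
    for HEA composition screening."""
    total = sum(composition.values())
    fracs = [v / total for v in composition.values()]
    return -R * sum(c * math.log(c) for c in fracs if c > 0)

# Equiatomic five-component alloy (e.g. the Cantor alloy CoCrFeMnNi):
# dS_mix = R * ln(5), above the 1.5R threshold often used to call an
# alloy "high-entropy".
s = mixing_entropy({"Co": 1, "Cr": 1, "Fe": 1, "Mn": 1, "Ni": 1})
```

Descriptors like this, alongside atomic-size mismatch and electronegativity differences, are what the data preprocessing and feature-engineering stages of the workflow feed into the models.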