Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies have mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood predictions have rarely been investigated, even though they can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies. This study proposed an approach for predicting flood regime metrics and event classes by coupling machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates of up to 100% against the four classes clustered from nine regime metrics. The nonlinear algorithms (Multiple Linear Regression, Random Forest, and least squares-Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%-85.4% and 47.2%-76.0% in the calibration and validation periods, respectively, particularly for the slow and late flood events. The predictive capability of the proposed approach for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
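As an illustration of the class-membership idea described above, the sketch below (not the paper's actual implementation) derives soft membership degrees from a KMeans clustering of synthetic regime metrics, trains one random forest regressor per class to predict those degrees from hypothetical predictors, and assigns each event to the class with the highest predicted membership; all data, feature counts, and models are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                          # hypothetical predictors (e.g., rainfall, antecedent wetness)
metrics = X @ rng.normal(size=(5, 9)) + 0.3 * rng.normal(size=(200, 9))   # hypothetical flood regime metrics

# Cluster events into 4 classes from the regime metrics, then turn distances into membership degrees.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(metrics)
dist = km.transform(metrics)                           # distance of each event to each cluster centre
membership = 1.0 / (dist + 1e-9)
membership /= membership.sum(axis=1, keepdims=True)    # soft membership degrees summing to 1

# Train one regressor per class to predict its membership degree from the predictors.
Xtr, Xte, mtr, mte = train_test_split(X, membership, test_size=0.3, random_state=0)
models = [RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, mtr[:, k]) for k in range(4)]

pred_membership = np.column_stack([m.predict(Xte) for m in models])
pred_class = pred_membership.argmax(axis=1)            # event class = highest predicted membership
true_class = mte.argmax(axis=1)
print("class hit rate:", (pred_class == true_class).mean())
```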
With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include Alternating Layer Normalization (ALN) in the actor to suppress gradient variance, Residual Orthogonalization (RO) in the critic to reduce the correlation between the twin Q-value estimates, and a dynamic-temperature reward to enable adaptive trade-offs during training. On a multi-user, dual-link simulation platform, we conduct ablation and baseline comparisons. The results show that the proposed method has better convergence and stability. Compared with MADDPG, TD3, and DSAC, our algorithm achieves more robust performance across key metrics.
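The paper's Alternating Layer Normalization and Residual Orthogonalization are not specified here; as a rough illustration of normalization inside a TD3-style actor over a continuous action space, the following PyTorch sketch places standard layer normalization after each hidden layer. The observation and action dimensions are invented placeholders.

```python
import torch
import torch.nn as nn

class NormalizedActor(nn.Module):
    """TD3-style deterministic actor with layer normalization after each hidden layer.

    Generic sketch only: the paper's Alternating Layer Normalization is not reproduced,
    just the standard placement of LayerNorm inside an actor network.
    """
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),   # continuous action in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Example: map a 20-dimensional observation (channel gains, queue lengths, energy states)
# to a 6-dimensional action (link selection logits, bandwidth/power fractions, offloading ratio).
actor = NormalizedActor(obs_dim=20, act_dim=6)
action = actor(torch.randn(1, 20))
print(action.shape)
```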
Traditional mining in open pit mines often uses explosives, leading to environmental hazards, with flyrock being a critical issue. Specifically, excess flyrock beyond the designated explosion area has been identified as the primary cause of fatal and non-fatal blasting hazards in open pit mining. Therefore, accurate and reliable prediction of flyrock becomes crucial for effectively managing and mitigating the associated problems. This study used the Light Gradient Boosting Machine (LightGBM) model to predict flyrock in a lead-zinc mine, with promising results. To improve its accuracy, the multi-verse optimizer (MVO) and ant lion optimizer (ALO) metaheuristic algorithms were introduced. Results showed that MVO-LightGBM outperformed conventional LightGBM. Additionally, decision tree (DT), support vector machine (SVM), and classification and regression tree (CART) models were trained and compared with MVO-LightGBM. The MVO-LightGBM model excelled over DT, SVM, and CART. This study highlights MVO-LightGBM's effectiveness and potential for broader applications. Furthermore, a multiple parametric sensitivity analysis (MPSA) algorithm was employed to quantify the sensitivity of the parameters. MPSA results indicated that the highest and lowest sensitivities correspond to blasted rock per hole and spacing, with γ = 1752.12 and γ = 49.52, respectively.
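A minimal sketch of the base model before metaheuristic tuning: a LightGBM regressor trained on synthetic stand-ins for blasting features, evaluated with R² and RMSE. The feature list and hyperparameters are assumptions; MVO or ALO would search over such hyperparameters rather than the fixed values shown.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
# Hypothetical blasting features: burden, spacing, stemming, hole depth, powder factor, charge per hole.
X = rng.uniform(size=(300, 6))
y = 100 + 80 * X[:, 4] + 50 * X[:, 5] + rng.normal(scale=5, size=300)   # synthetic flyrock distance (m)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=1)

# These hyperparameters are what a metaheuristic such as MVO or ALO would tune.
model = LGBMRegressor(n_estimators=400, learning_rate=0.05, num_leaves=31, random_state=1)
model.fit(Xtr, ytr)

pred = model.predict(Xte)
print("R2:", r2_score(yte, pred), "RMSE:", mean_squared_error(yte, pred) ** 0.5)
```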
Lithium manganese silicate (Li-Mn-Si-O) cathodes are key components of lithium-ion batteries, and their physical and mechanical properties are strongly influenced by their underlying crystal structures. In this study, a range of machine learning (ML) algorithms were developed and compared to predict the crystal systems of Li-Mn-Si-O cathode materials using density functional theory (DFT) data obtained from the Materials Project database. The dataset comprised 211 compositions characterized by key descriptors, including formation energy, energy above the hull, bandgap, atomic site number, density, and unit cell volume. These features were utilized to classify the materials into monoclinic (0) and triclinic (1) crystal systems. A comprehensive comparison of various classification algorithms, including Decision Tree, Random Forest, XGBoost, Support Vector Machine, k-Nearest Neighbor, Stochastic Gradient Descent, Gaussian Naive Bayes, Gaussian Process, and Artificial Neural Network (ANN), was conducted. Among these, the optimized ANN architecture (6-14-14-14-1) exhibited the highest predictive performance, achieving an accuracy of 95.3%, a Matthews correlation coefficient (MCC) of 0.894, and an F-score of 0.963, demonstrating excellent consistency with DFT-predicted crystal structures. Meanwhile, Random Forest and Gaussian Process models also exhibited reliable and consistent predictive capability, indicating their potential as complementary approaches, particularly when data are limited or computational efficiency is required. This comparative framework provides valuable insights into model selection for crystal system classification in complex cathode materials.
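To mirror the reported 6-14-14-14-1 topology, a scikit-learn MLP with three hidden layers of 14 neurons can be sketched as below; the synthetic descriptors and labels are placeholders, not the Materials Project data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef, f1_score

rng = np.random.default_rng(2)
# Six descriptors: formation energy, energy above hull, bandgap, site number, density, cell volume.
X = rng.normal(size=(211, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=211) > 0).astype(int)  # 0 = monoclinic, 1 = triclinic (synthetic labels)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=2)

# Three hidden layers of 14 neurons mirror the 6-14-14-14-1 topology reported in the abstract.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(14, 14, 14), max_iter=2000, random_state=2))
clf.fit(Xtr, ytr)

pred = clf.predict(Xte)
print("accuracy:", accuracy_score(yte, pred))
print("MCC:", matthews_corrcoef(yte, pred))
print("F1:", f1_score(yte, pred))
```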
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
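The multi-objective aspect rests on combining per-objective rewards into a scalar signal for each agent. The sketch below shows only that scalarization step with fixed placeholder weights; the paper's RBFN-driven dynamic weight update is not reproduced.

```python
def scalarize_rewards(objectives: dict, weights: dict) -> float:
    """Combine per-objective rewards (delay, energy, load balance, privacy entropy)
    into a single scalar signal for a DDQN agent.

    The weights here are fixed placeholders; in the paper they are updated dynamically
    from RBFN-learned value changes between objectives.
    """
    return sum(weights[k] * objectives[k] for k in objectives)

# Delay and energy enter as negative rewards (to be minimized); load balance and
# privacy entropy as positive rewards (to be maximized).
step_objectives = {"delay": -0.42, "energy": -0.18, "load_balance": 0.7, "privacy_entropy": 0.55}
step_weights = {"delay": 0.35, "energy": 0.25, "load_balance": 0.2, "privacy_entropy": 0.2}

print("scalar reward:", scalarize_rewards(step_objectives, step_weights))
```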
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms, namely random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, with varying geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can help reduce computational costs while improving reliability in the early stages of foundation design.
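A condensed sketch of the train/test protocol described above, using three of the listed ensembles on synthetic stand-ins for the FELA-derived dataset (XGBoost and CatBoost would require their own packages); the input features, the synthetic Nc surface, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(3)
# Hypothetical inputs: embedment ratio, strength ratio of the two clay layers, layer thickness ratio, adhesion factor.
X = rng.uniform(size=(1188, 4))
y = 6 + 3 * X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=1188)   # synthetic Nc values

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=3)   # 70/30 split as in the abstract

models = {
    "RF": RandomForestRegressor(n_estimators=300, random_state=3),
    "GBM": GradientBoostingRegressor(random_state=3),
    "AdaBoost": AdaBoostRegressor(n_estimators=300, random_state=3),
}
for name, model in models.items():
    model.fit(Xtr, ytr)
    pred = model.predict(Xte)
    rmse = mean_squared_error(yte, pred) ** 0.5
    print(f"{name}: R2={r2_score(yte, pred):.4f}  MAE={mean_absolute_error(yte, pred):.4f}  RMSE={rmse:.4f}")
```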
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies. How they are addressed determines the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and the Smart Grid allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. This model is an ensemble meta-algorithm (stacking) that generalizes random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. The better accuracy of the proposed meta-algorithm compared with the basic classifiers is experimentally confirmed on the test sample. Owing to its good accuracy (ROC-AUC of 0.88), such a model can be used as the methodological basis for a decision support system whose purpose is to form a sample of suspected NTL sources. The use of such a sample will allow the top management of electric distribution companies to increase the efficiency of inspection raids, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
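A minimal sketch of the stacking idea with scikit-learn: random forest, LightGBM, and a single MLP (standing in for the homogeneous ANN ensemble) as base learners whose out-of-fold probabilities feed a meta-learner, scored by ROC-AUC on synthetic consumption data. The meta-learner choice and all data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from lightgbm import LGBMClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 30))                                       # e.g., 30 days of consumption readings per consumer
y = (X[:, :5].sum(axis=1) + rng.normal(size=2000) > 2).astype(int)    # synthetic NTL labels

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=4)

# Stacking: base learners' out-of-fold probabilities feed a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=4)),
        ("lgbm", LGBMClassifier(random_state=4)),
        ("ann", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=4)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
)
stack.fit(Xtr, ytr)
print("ROC-AUC:", roc_auc_score(yte, stack.predict_proba(Xte)[:, 1]))
```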
Neuromorphic computing extends beyond sequential processing modalities and outperforms traditional von Neumann architectures in implementing more complicated tasks, e.g., pattern processing, image recognition, and decision making. It features parallel interconnected neural networks, high fault tolerance, robustness, autonomous learning capability, and ultralow energy dissipation. Artificial neural network (ANN) algorithms have also been widely used because of their facile self-organization and self-learning capabilities, which mimic those of the human brain. To some extent, ANNs reflect several basic functions of the human brain and can be efficiently integrated into neuromorphic devices to perform neuromorphic computations. This review highlights recent advances in neuromorphic devices assisted by machine learning algorithms. First, the basic structure of simple neuron models inspired by biological neurons and the information processing in simple neural networks are discussed. Second, the fabrication and research progress of neuromorphic devices are presented with regard to materials and structures. Furthermore, the fabrication of neuromorphic devices, including stand-alone neuromorphic devices, neuromorphic device arrays, and integrated neuromorphic systems, is discussed and demonstrated with reference to representative studies. The applications of neuromorphic devices assisted by machine learning algorithms in different fields are categorized and investigated. Finally, perspectives, suggestions, and potential solutions to the current challenges of neuromorphic devices are provided.
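As one concrete example of the "simple neuron models inspired by biological neurons" mentioned above, here is a leaky integrate-and-fire neuron simulated in NumPy; all parameters are illustrative textbook values, not taken from the review.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-70e-3,
                 v_reset=-70e-3, v_thresh=-50e-3, r_m=1e7):
    """Leaky integrate-and-fire neuron: the membrane potential decays toward rest,
    integrates the input current, and emits a spike when it crosses threshold."""
    v = v_rest
    spikes, trace = [], []
    for i_t in input_current:
        dv = (-(v - v_rest) + r_m * i_t) * dt / tau     # leaky integration step
        v += dv
        if v >= v_thresh:                               # threshold crossing -> spike, then reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

current = np.full(500, 2.5e-9)                          # 500 ms of constant 2.5 nA input
spikes, _ = simulate_lif(current)
print("spike count:", spikes.sum())
```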
Based on the Google Earth Engine cloud computing platform, this study employed three algorithms, Support Vector Machine (SVM), Random Forest (RF), and Classification and Regression Tree (CART), to classify the current status of land cover in Hung Yen province of Vietnam using Landsat 8 OLI satellite images, a free data source with reasonable spatial and temporal resolution. The results show that all three algorithms classified five basic types of land cover (Rice land, Water bodies, Perennial vegetation, Annual vegetation, and Built-up areas) well, as their overall accuracy and Kappa coefficient were greater than 80% and 0.8, respectively. Among the three algorithms, SVM achieved the highest accuracy, with an overall accuracy of 86% and a Kappa coefficient of 0.88. Land cover classification based on the SVM algorithm shows that Built-up areas cover the largest area with nearly 31,495 ha, accounting for more than 33.8% of the total natural area, followed by Rice land and Perennial vegetation, which cover over 30,767 ha (33%) and 15,637 ha (16.8%), respectively. Water bodies and Annual vegetation cover the smallest areas, with 8,820 ha (9.5%) and 6,302 ha (6.8%), respectively. The results of this study can be used for land use management and planning as well as other natural resource and environmental management purposes in the province.
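The Earth Engine sampling and image handling are omitted here; the sketch below only reproduces the evaluation style of the abstract (overall accuracy, Kappa coefficient, confusion matrix) with an SVM on synthetic stand-ins for per-pixel spectral samples.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(5)
classes = ["Rice land", "Water bodies", "Perennial vegetation", "Annual vegetation", "Built-up areas"]

# Synthetic stand-in for Landsat 8 OLI band reflectances sampled at labelled points.
X = rng.normal(size=(1500, 7))                       # 7 spectral bands
y = rng.integers(0, len(classes), size=1500)
X[np.arange(len(y)), y] += 2.0                       # shift one band per class so classes are separable

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=5)

svm = SVC(kernel="rbf", C=10, gamma="scale").fit(Xtr, ytr)
pred = svm.predict(Xte)

print("overall accuracy:", accuracy_score(yte, pred))
print("kappa:", cohen_kappa_score(yte, pred))
print(confusion_matrix(yte, pred))
```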
Deep neural networks are increasingly exposed to attack threats, and at the same time, the need for privacy protection is growing. As a result, the challenge of developing neural networks that are both robust and capable of strong generalization while maintaining privacy becomes pressing. Training neural networks under privacy constraints is one way to minimize privacy leakage, for example by adding noise to the data or the model. However, noise may cause gradient directions to deviate from the optimal trajectory during training, leading to unstable parameter updates, slow convergence, and reduced model generalization capability. To overcome these challenges, we propose an optimization algorithm based on double-integral coevolutionary neurodynamics (DICND), designed to accelerate convergence and improve generalization in noisy conditions. Theoretical analysis proves the global convergence of the DICND algorithm and demonstrates its ability to converge to near-global minima efficiently under noisy conditions. Numerical simulations and image classification experiments further confirm the DICND algorithm's significant advantages in enhancing generalization performance.
The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, how to address the problems posed by multiple variables, nonlinearities, and uncertainties during optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize the reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive model based on artificial neural network (ANN) models. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The results show that the methanol:palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is the most suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol:PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications in various reaction optimization processes.
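A hedged sketch of the ANN-SA coupling: an MLP surrogate is fitted to a synthetic conversion surface over the three variables named above, and SciPy's dual_annealing (a simulated-annealing variant) then searches the surrogate within assumed bounds. The bounds, data, and network size are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from scipy.optimize import dual_annealing

rng = np.random.default_rng(6)
# Columns: catalyst concentration (% mass), methanol:PFAD molar ratio, reaction time (min).
X = np.column_stack([rng.uniform(1, 5, 200), rng.uniform(3, 12, 200), rng.uniform(10, 90, 200)])
yield_pct = (95 - (X[:, 0] - 3) ** 2 - 0.3 * (X[:, 1] - 9) ** 2
             - 0.002 * (X[:, 2] - 35) ** 2 + rng.normal(scale=0.5, size=200))   # synthetic conversion surface

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=6))
surrogate.fit(X, yield_pct)

# Minimize the negative predicted yield with simulated annealing over assumed variable bounds.
bounds = [(1.0, 5.0), (3.0, 12.0), (10.0, 90.0)]
result = dual_annealing(lambda x: -surrogate.predict(x.reshape(1, -1))[0], bounds)
print("optimal [catalyst %, molar ratio, time min]:", result.x, "predicted yield:", -result.fun)
```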
While algorithms have been created for land-use classification in urban settings, there have been few investigations into the extraction of urban footprint (UF). To address this research gap, this study employs several widely used image classification methods, grouped into three categories, to evaluate their segmentation capabilities for extracting UF across eight cities. The results indicate that pixel-based methods excel only in clear urban environments, and their overall accuracy is not consistently high. RF and SVM perform well but lack stability in object-based UF extraction, as they are influenced by feature selection and classifier performance. Deep learning enhances feature extraction but requires substantial computing power and faces challenges with complex urban layouts. SAM excels in medium-sized urban areas but falters in intricate layouts. Integrating traditional and deep learning methods optimizes UF extraction, balancing accuracy and processing efficiency. Future research should focus on adapting algorithms for diverse urban landscapes to enhance UF extraction accuracy and applicability.
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
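Since LSTM networks are identified as the most frequently adopted architecture, a minimal Keras LSTM sentiment classifier is sketched below; the vocabulary size, sequence length, and randomly generated data are placeholders.

```python
import numpy as np
from tensorflow.keras import layers, models

vocab_size, max_len = 10_000, 100

# Placeholder data: integer-encoded token sequences with binary sentiment labels.
x_train = np.random.randint(1, vocab_size, size=(1000, max_len))
y_train = np.random.randint(0, 2, size=(1000,))

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(input_dim=vocab_size, output_dim=64),   # learn token embeddings
    layers.LSTM(64),                                          # sequence encoder
    layers.Dense(1, activation="sigmoid"),                    # positive/negative probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32, validation_split=0.1)
```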
Objective: To identify potential key genes associated with pre-eclampsia through bioinformatics analysis, construct predictive models using machine-learning algorithms, and evaluate the models' performance in predicting pre-eclampsia. Methods: Gene-expression microarray datasets GSE10588, GSE66273, and GSE30186 related to pre-eclampsia were downloaded from the Gene Expression Omnibus (GEO). Data were normalized in R, and differentially expressed genes (DEGs) were identified. LASSO regression was applied to further filter the DEGs. Based on the selected DEGs, six machine-learning models were built in R and their performance was validated: logistic regression (LR), random forest (RF), support vector machine (SVM), K-nearest neighbors (KNN), neural network (NN), and eXtreme gradient boosting (XGBoost). Results: From the three datasets, a total of 1,363 genes were extracted. LASSO regression narrowed these to 265 candidate key genes. Multivariate analysis ultimately identified four genes closely associated with pre-eclampsia: EVI5, GCLM, LEP, and SYNPO2L. Using these four key genes, six machine-learning models were constructed. Receiver operating characteristic (ROC) analysis showed that all models achieved an AUC > 0.9: LR (AUC = 0.983, 95% CI = 0.942-0.998), RF (AUC = 0.961, 95% CI = 0.912-0.987), SVM (AUC = 0.936, 95% CI = 0.879-0.972), KNN (AUC = 0.970, 95% CI = 0.924-0.992), NN (AUC = 0.916, 95% CI = 0.854-0.958), and XGBoost (AUC = 0.952, 95% CI = 0.900-0.982). There was no statistically significant difference among the AUCs of the models (P > 0.05). Conclusion: This study identified four key genes linked to pre-eclampsia through integrated bioinformatics analysis. Predictive models built on these genes can accurately forecast the occurrence of pre-eclampsia, suggesting that the four genes may serve as potential biomarkers for early diagnosis and therapeutic targeting of pre-eclampsia.
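A compact sketch of the LASSO-then-classify pipeline on synthetic expression data: LassoCV filters the genes, and a logistic-regression model is scored by AUC on the retained ones. The dimensions mirror the abstract's 1,363 genes, but the data and the number of retained genes are synthetic; the study's R implementation and full model set are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n_samples, n_genes = 150, 1363                     # mirrors the abstract's 1,363 extracted genes
X = rng.normal(size=(n_samples, n_genes))          # synthetic normalized expression values
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n_samples) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=7)

# LASSO shrinks most gene coefficients to zero; the survivors are the candidate key genes.
lasso = LassoCV(cv=5, random_state=7).fit(Xtr, ytr)
selected = np.flatnonzero(lasso.coef_)
print("genes retained by LASSO:", selected.size)

clf = LogisticRegression(max_iter=1000).fit(Xtr[:, selected], ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte[:, selected])[:, 1])
print("logistic regression AUC:", auc)
```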
Due to the rapid advancement of information technology, data has emerged as the core resource driving decision-making and innovation across all industries. As the foundation of artificial intelligence, machine learning (ML) has expanded its applications into intelligent recommendation systems, autonomous driving, medical diagnosis, and financial risk assessment. However, it relies on massive datasets, which contain sensitive personal information. Consequently, Privacy-Preserving Machine Learning (PPML) has become a critical research direction. To address the challenges of efficiency and accuracy in encrypted data computation within PPML, Homomorphic Encryption (HE) is a crucial solution, owing to its capability to facilitate computations on encrypted data. However, the integration of machine learning and homomorphic encryption technologies faces multiple challenges. Against this backdrop, this paper reviews homomorphic encryption technologies, with a focus on the advantages of the Cheon-Kim-Kim-Song (CKKS) algorithm in supporting approximate floating-point computations, and surveys the development of three machine learning techniques, K-nearest neighbors (KNN), K-means clustering, and face recognition, in integration with homomorphic encryption. It proposes feasible schemes for typical scenarios and summarizes limitations and future optimization directions. Additionally, it presents a systematic exploration of the integration of homomorphic encryption and machine learning, covering the essence of the technology, application implementation, performance trade-offs, technological convergence, and future pathways, to advance technological development.
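A minimal CKKS example using the TenSEAL library (an assumption; the paper's implementation is not given here): encrypt a feature vector and compute an encrypted dot product, the kind of primitive an encrypted KNN distance or similarity computation builds on. The encryption parameters are typical tutorial defaults.

```python
import tenseal as ts

# CKKS context with typical parameters; the global scale controls floating-point precision.
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()          # needed for the rotations used by vector dot products

query = [0.5, 1.2, -0.3, 2.0]           # client-side feature vector (e.g., a KNN query point)
reference = [0.4, 1.0, 0.1, 1.8]        # one stored reference sample

enc_query = ts.ckks_vector(context, query)     # encrypted under the client's key
enc_dot = enc_query.dot(reference)             # server computes on ciphertext against a plain vector

print("encrypted dot product decrypts to:", enc_dot.decrypt()[0])
print("plaintext check:", sum(q * r for q, r in zip(query, reference)))
```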
High-entropy alloys (HEAs) have attracted considerable attention because of their excellent properties and broad compositional design space. However, traditional trial-and-error methods for screening HEAs are costly and inefficient, thereby limiting the development of new materials. Although density functional theory (DFT), molecular dynamics (MD), and thermodynamic modeling have improved design efficiency, their indirect connection to properties has led to limitations in calculation and prediction. With the awarding of the Nobel Prizes in Physics and Chemistry to researchers working on artificial intelligence (AI), there has been renewed enthusiasm for the application of machine learning (ML) in the field of alloy materials. In this study, common and advanced ML models and strategies in HEA design were introduced, and the mechanism by which ML can play a role in composition optimization and performance prediction was investigated through case studies. The general workflow of applying ML to material design was also introduced from the programmer's point of view, including data preprocessing, feature engineering, model training, evaluation, optimization, and interpretability. Furthermore, data scarcity, multi-model coupling, and other challenges and opportunities at the current stage were analyzed, and an outlook on future research directions was provided.
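The described workflow (preprocessing, feature engineering, training, evaluation, interpretability) can be compressed into a small scikit-learn pipeline; the descriptors, target property, and model below are placeholders rather than any specific HEA dataset.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
# Placeholder descriptors per alloy: mixing entropy, atomic size difference, electronegativity
# difference, valence electron concentration, mixing enthalpy.
X = rng.normal(size=(150, 5))
y = 1000 + 200 * X[:, 0] - 150 * X[:, 1] + rng.normal(scale=20, size=150)   # e.g., hardness or strength

pipeline = Pipeline([
    ("scale", StandardScaler()),                            # data preprocessing
    ("model", GradientBoostingRegressor(random_state=8)),   # property-prediction model
])

# Cross-validated R2 as the evaluation step of the workflow.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="r2")
print("CV R2: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# Fit on all data and inspect feature importances as a simple interpretability check.
pipeline.fit(X, y)
print("feature importances:", pipeline.named_steps["model"].feature_importances_)
```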
The nonlinearity of hedonic datasets demands flexible automated valuation models to appraise housing prices accurately, and artificial intelligence models have been employed in mass appraisal to this end. However, they have been referred to as "black-box" models owing to difficulties associated with interpretation. In this study, we compared the results of traditional hedonic pricing models with those of machine learning algorithms, e.g., random forest and deep neural network models. Commonly implemented measures, e.g., Gini importance and permutation importance, provide only the magnitude of each explanatory variable's importance, which results in ambiguous interpretability. To address this issue, we employed the SHapley Additive exPlanation (SHAP) method and explored its effectiveness through comparisons with traditionally explainable measures in hedonic pricing models. The results demonstrated that (1) the random forest model with the SHAP method could be a reliable instrument for appraising housing prices with high accuracy and sufficient interpretability, (2) the interpretable results retrieved from the SHAP method can be consolidated by the support of statistical evidence, and (3) housing characteristics and local amenities are primary contributors in property valuation, which is consistent with the findings of previous studies. Thus, our novel methodological framework and robust findings provide informative insights into the use of machine learning methods in property valuation based on the comparative analysis.
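A short sketch of the SHAP usage discussed above: a random forest fitted to synthetic hedonic data and explained with TreeExplainer, whose per-feature attributions carry both sign and magnitude, unlike Gini or permutation importance. Feature names and data are invented placeholders.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
features = ["floor_area", "age", "dist_station", "dist_park", "rooms"]   # placeholder hedonic variables
X = rng.normal(size=(500, len(features)))
price = 300 + 80 * X[:, 0] - 20 * X[:, 1] - 30 * X[:, 2] + rng.normal(scale=10, size=500)

rf = RandomForestRegressor(n_estimators=300, random_state=9).fit(X, price)

# SHAP attributes each prediction to the features with sign and magnitude,
# unlike Gini/permutation importance, which report magnitude only.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X)                 # shape: (n_samples, n_features)

mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(features, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {value:.2f}")
```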
This study aims to eliminate the subjectivity and inconsistency inherent in the traditional International Association of Drilling Contractors (IADC) bit wear rating process, which heavily depends on the experience of drilling engineers and often leads to unreliable results. Leveraging advancements in computer vision and deep learning algorithms, this research proposes an automated detection and classification method for polycrystalline diamond compact (PDC) bit damage. YOLOv10 was employed to locate the PDC bit cutters, followed by two SqueezeNet models to perform wear rating and wear type classifications. A comprehensive dataset was created based on the IADC dull bit evaluation standards. Additionally, this study discusses the necessity of data augmentation and finds that certain methods, such as cropping, splicing, and mixing, may reduce the accuracy of cutter detection. The experimental results demonstrate that the proposed method significantly enhances the accuracy of bit damage detection and classification while also providing substantial improvements in processing speed and computational efficiency, offering a valuable tool for optimizing drilling operations and reducing costs.
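The trained models from the study are not available; the sketch below only illustrates the shape of the two-stage pipeline using the ultralytics YOLO interface and torchvision's SqueezeNet. The weight files, image path, and nine-grade class count are hypothetical placeholders.

```python
import torch
from torchvision import transforms
from torchvision.models import squeezenet1_1
from ultralytics import YOLO
from PIL import Image

# Stage 1: detect cutters on a dull-bit photo (weights path and image are placeholders).
detector = YOLO("pdc_cutter_yolov10.pt")          # hypothetical fine-tuned YOLOv10 weights
results = detector("dull_bit.jpg")
boxes = results[0].boxes.xyxy.cpu().numpy()       # (n, 4) cutter bounding boxes

# Stage 2: classify each cropped cutter, e.g., into IADC wear grades 0-8.
classifier = squeezenet1_1(num_classes=9)
classifier.load_state_dict(torch.load("wear_grade_squeezenet.pt", map_location="cpu"))  # hypothetical weights
classifier.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
image = Image.open("dull_bit.jpg").convert("RGB")

for x1, y1, x2, y2 in boxes:
    crop = image.crop((int(x1), int(y1), int(x2), int(y2)))
    with torch.no_grad():
        logits = classifier(preprocess(crop).unsqueeze(0))
    print("predicted wear grade:", int(logits.argmax(dim=1)))
```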
This paper explores the possibility of using machine learning algorithms to predict type 2 diabetes. We selected three commonly used classification models, random forest, support vector machine, and logistic regression, modeled patients' clinical and lifestyle data, and compared their prediction performance. We found that the random forest model achieved the highest accuracy, demonstrated excellent classification results on the test set, and better distinguished between diabetic and non-diabetic patients, as shown by the confusion matrix and other evaluation metrics. The support vector machine and logistic regression performed slightly less well but still achieved a high level of accuracy. The experimental results validate the effectiveness of the three machine learning algorithms, especially random forest, in the diabetes prediction task and provide useful practical experience for the intelligent prevention and control of chronic diseases. This study promotes innovation in diabetes prediction and management models, which is expected to alleviate the pressure on medical resources, reduce the burden of social health care, and improve the prognosis and quality of life of patients. In the future, we can consider expanding the data scale, exploring other machine learning algorithms, and integrating multimodal data to further realize the potential of artificial intelligence (AI) in the field of diabetes.
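A minimal scikit-learn comparison in the spirit of the study, on synthetic clinical/lifestyle features: random forest versus logistic regression, reported with accuracy and a confusion matrix; the features and labels are placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(10)
# Placeholder features: glucose, BMI, age, blood pressure, physical activity, family history.
X = rng.normal(size=(800, 6))
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.7, size=800) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=10)

for name, model in {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=10),
    "logistic regression": LogisticRegression(max_iter=1000),
}.items():
    model.fit(Xtr, ytr)
    pred = model.predict(Xte)
    print(name, "accuracy:", round(accuracy_score(yte, pred), 3))
    print(confusion_matrix(yte, pred))
```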
Deep Learning (DL) offers promising solutions for analyzing wearable signals and gaining valuable insights into cognitive disorders. While previous review studies have explored various aspects of DL in cognitive healthcare, there remains a lack of comprehensive analysis that integrates wearable signals, data processing techniques, and the broader applications, benefits, and challenges of DL methods. Addressing this limitation, our study provides an extensive review of DL's role in cognitive healthcare, with a particular emphasis on wearables, data processing, and the inherent challenges in this field. This review also highlights the considerable promise of DL approaches in addressing a broad spectrum of cognitive issues. By enhancing the understanding and analysis of wearable signal modalities, DL models can achieve remarkable accuracy in cognitive healthcare. Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) networks have demonstrated improved performance and effectiveness in the early diagnosis and progression monitoring of neurological disorders. Beyond cognitive impairment detection, DL has been applied to emotion recognition, sleep analysis, stress monitoring, and neurofeedback. These applications lead to advanced diagnosis, personalized treatment, early intervention, assistive technologies, remote monitoring, and reduced healthcare costs. Nevertheless, the integration of DL and wearable technologies presents several challenges, such as data quality, privacy, interpretability, model generalizability, ethical concerns, and clinical adoption. These challenges emphasize the importance of conducting future research in areas such as multimodal signal analysis and explainable AI. The findings of this review aim to benefit clinicians, healthcare professionals, and society by facilitating better patient outcomes in cognitive healthcare.
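As a small example of the signal-level models mentioned above, here is a Keras 1D CNN classifying fixed-length wearable-signal windows; the channel count, window length, class labels, and data are placeholders rather than any dataset from the review.

```python
import numpy as np
from tensorflow.keras import layers, models

n_windows, window_len, n_channels, n_classes = 512, 250, 3, 2   # placeholder shapes (e.g., 3-axis accelerometer)
x = np.random.randn(n_windows, window_len, n_channels).astype("float32")
y = np.random.randint(0, n_classes, size=(n_windows,))

model = models.Sequential([
    layers.Input(shape=(window_len, n_channels)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),   # local temporal feature extraction
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(n_classes, activation="softmax"),          # e.g., impaired vs. healthy window
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, validation_split=0.2)
```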
基金National Key Research and Development Program of China,No.2023YFC3006704National Natural Science Foundation of China,No.42171047CAS-CSIRO Partnership Joint Project of 2024,No.177GJHZ2023097MI。
文摘Accurate prediction of flood events is important for flood control and risk management.Machine learning techniques contributed greatly to advances in flood predictions,and existing studies mainly focused on predicting flood resource variables using single or hybrid machine learning techniques.However,class-based flood predictions have rarely been investigated,which can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies.This study proposed a prediction approach of flood regime metrics and event classes coupling machine learning algorithms with clustering-deduced membership degrees.Five algorithms were adopted for this exploration.Results showed that the class membership degrees accurately determined event classes with class hit rates up to 100%,compared with the four classes clustered from nine regime metrics.The nonlinear algorithms(Multiple Linear Regression,Random Forest,and least squares-Support Vector Machine)outperformed the linear techniques(Multiple Linear Regression and Stepwise Regression)in predicting flood regime metrics.The proposed approach well predicted flood event classes with average class hit rates of 66.0%-85.4%and 47.2%-76.0%in calibration and validation periods,respectively,particularly for the slow and late flood events.The predictive capability of the proposed prediction approach for flood regime metrics and classes was considerably stronger than that of hydrological modeling approach.
文摘With the advent of sixth-generation mobile communications(6G),space-air-ground integrated networks have become mainstream.This paper focuses on collaborative scheduling for mobile edge computing(MEC)under a three-tier heterogeneous architecture composed of mobile devices,unmanned aerial vehicles(UAVs),and macro base stations(BSs).This scenario typically faces fast channel fading,dynamic computational loads,and energy constraints,whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings.To address this issue,we formulate a multi-agent Markov decision process(MDP)for an air-ground-fused MEC system,unify link selection,bandwidth/power allocation,and task offloading into a continuous action space and propose a joint scheduling strategy that is based on an improved MATD3 algorithm.The improvements include Alternating Layer Normalization(ALN)in the actor to suppress gradient variance,Residual Orthogonalization(RO)in the critic to reduce the correlation between the twin Q-value estimates,and a dynamic-temperature reward to enable adaptive trade-offs during training.On a multi-user,dual-link simulation platform,we conduct ablation and baseline comparisons.The results reveal that the proposed method has better convergence and stability.Compared with MADDPG,TD3,and DSAC,our algorithm achieves more robust performance across key metrics.
基金funded by the Key Laboratory of Geological Safety of Coastal Urban Underground Space,Ministry of Natural Resources of China(Grant No.BHKF2022Y02)Natural Science Foundation of Guangdong Province,China(Grant No.2024A1515011162)Natural Science Foundation of Shandong Province,China(Grant No.ZR2024QE021).
文摘Traditional mining in open pit mines often uses explosives,leading to environmental hazards,with flyrock being a critical issue.In detail,excess flying rock beyond the designated explosion area was identified as the primary cause of fatal and non-fatal blasting hazards in open pit mining.Therefore,the accurate and reliable prediction of flyrock becomes crucial for effectively managing and mitigating associated problems.This study used the Light Gradient Boosting Machine(LightGBM)model to predict flyrock in a lead-zinc mine,with promising results.To improve its accuracy,multi-verse optimizer(MVO)and ant lion optimizer(ALO)metaheuristic algorithms were introduced.Results showed MVO-LightGBM outperformed conventional LightGBM.Additionally,decision tree(DT),support vector machine(SVM),and classification and regression tree(CART)models were trained and compared with MVO-LightGBM.The MVO-LightGBM model excelled over DT,SVM,and CART.This study highlights MVO-LightGBM's effectiveness and potential for broader applications.Furthermore,a multiple parametric sensitivity analysis(MPSA)algorithm was employed to specify the sensitivity of parameters.MPSA results indicated that the highest and lowest sensitivities are relevant to blasted rock per hole and spacing with theγ=1752.12 andγ=49.52,respectively.
基金supported by the Learning&Academic Research Institution for Master’s,PhD students,and Postdocs LAMP Program of the National Research Foundation of Korea(NRF)grant funded by the Ministry of Education(No.RS-2023-00301974)This work was also supported by the Glocal University 30 Project fund of Gyeongsang National University in 2025.
文摘Lithium manganese silicate(Li-Mn-Si-O)cathodes are key components of lithium-ion batteries,and their physical and mechanical properties are strongly influenced by their underlying crystal structures.In this study,a range of machine learning(ML)algorithms were developed and compared to predict the crystal systems of Li-Mn-Si-O cathode materials using density functional theory(DFT)data obtained from the Materials Project database.The dataset comprised 211 compositions characterized by key descriptors,including formation energy,energy above the hull,bandgap,atomic site number,density,and unit cell volume.These features were utilized to classify the materials into monoclinic(0)and triclinic(1)crystal systems.A comprehensive comparison of various classification algorithms including Decision Tree,Random Forest,XGBoost,Support VectorMachine,k-Nearest Neighbor,Stochastic Gradient Descent,Gaussian Naive Bayes,Gaussian Process,and Artificial Neural Network(ANN)was conducted.Among these,the optimized ANN architecture(6–14-14-14-1)exhibited the highest predictive performance,achieving an accuracy of 95.3%,aMatthews correlation coefficient(MCC)of 0.894,and an F-score of 0.963,demonstrating excellent consistency with DFT-predicted crystal structures.Meanwhile,RandomForest and Gaussian Processmodels also exhibited reliable and consistent predictive capability,indicating their potential as complementary approaches,particularly when data are limited or computational efficiency is required.This comparative framework provides valuable insights into model selection for crystal system classification in complex cathode materials.
基金supported by Key Science and Technology Program of Henan Province,China(Grant Nos.242102210147,242102210027)Fujian Province Young and Middle aged Teacher Education Research Project(Science and Technology Category)(No.JZ240101)(Corresponding author:Dong Yuan).
文摘Vehicle Edge Computing(VEC)and Cloud Computing(CC)significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Unit(RSU),thereby achieving lower delay and energy consumption.However,due to the limited storage capacity and energy budget of RSUs,it is challenging to meet the demands of the highly dynamic Internet of Vehicles(IoV)environment.Therefore,determining reasonable service caching and computation offloading strategies is crucial.To address this,this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading.By modeling the dynamic optimization problem using Markov Decision Processes(MDP),the scheme jointly optimizes task delay,energy consumption,load balancing,and privacy entropy to achieve better quality of service.Additionally,a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed.Each Double Deep Q-Network(DDQN)agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks(RBFN),thereby efficiently approximating the Pareto-optimal decisions for multiple objectives.Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud,edge,and vehicles.Compared to existing algorithms,the proposed method reduces task delay and energy consumption by 10.64%and 5.1%,respectively.
文摘Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions.However,accurately predicting their undrained bearing capacity in layered soils remains a complex challenge.This study presents a novel application of five ensemble machine(ML)algorithms-random forest(RF),gradient boosting machine(GBM),extreme gradient boosting(XGBoost),adaptive boosting(AdaBoost),and categorical boosting(CatBoost)-to predict the undrained bearing capacity factor(Nc)of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis(FELA).The input dataset consists of 1188 numerical simulations using the Tresca failure criterion,varying in geometrical and soil parameters.The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies.The ML models were trained on 70% of the dataset and tested on the remaining 30%.Their performance was evaluated using six statistical metrics:coefficient of determination(R²),mean absolute error(MAE),root mean squared error(RMSE),index of scatter(IOS),RMSE-to-standard deviation ratio(RSR),and variance explained factor(VAF).The results indicate that all the models achieved high accuracy,with R²values exceeding 97.6%and RMSE values below 0.02.Among them,AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets,demonstrating superior generalizability and robustness.The proposed ML framework offers an efficient,accurate,and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils.This approach can aid in reducing computational costs while improving reliability in the early stages of foundation design.
文摘Non-technical losses(NTL)of electric power are a serious problem for electric distribution companies.The solution determines the cost,stability,reliability,and quality of the supplied electricity.The widespread use of advanced metering infrastructure(AMI)and Smart Grid allows all participants in the distribution grid to store and track electricity consumption.During the research,a machine learning model is developed that allows analyzing and predicting the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings.This model is an ensemble meta-algorithm(stacking)that generalizes the algorithms of random forest,LightGBM,and a homogeneous ensemble of artificial neural networks.The best accuracy of the proposed meta-algorithm in comparison to basic classifiers is experimentally confirmed on the test sample.Such a model,due to good accuracy indicators(ROC-AUC-0.88),can be used as a methodological basis for a decision support system,the purpose of which is to form a sample of suspected NTL sources.The use of such a sample will allow the top management of electric distribution companies to increase the efficiency of raids by performers,making them targeted and accurate,which should contribute to the fight against NTL and the sustainable development of the electric power industry.
基金financially supported by the National Natural Science Foundation of China(No.52073031)the National Key Research and Development Program of China(Nos.2023YFB3208102,2021YFB3200304)+4 种基金the China National Postdoctoral Program for Innovative Talents(No.BX2021302)the Beijing Nova Program(Nos.Z191100001119047,Z211100002121148)the Fundamental Research Funds for the Central Universities(No.E0EG6801X2)the‘Hundred Talents Program’of the Chinese Academy of Sciencesthe BrainLink program funded by the MSIT through the NRF of Korea(No.RS-2023-00237308).
文摘Neuromorphic computing extends beyond sequential processing modalities and outperforms traditional von Neumann architectures in implementing more complicated tasks,e.g.,pattern processing,image recognition,and decision making.It features parallel interconnected neural networks,high fault tolerance,robustness,autonomous learning capability,and ultralow energy dissipation.The algorithms of artificial neural network(ANN)have also been widely used because of their facile self-organization and self-learning capabilities,which mimic those of the human brain.To some extent,ANN reflects several basic functions of the human brain and can be efficiently integrated into neuromorphic devices to perform neuromorphic computations.This review highlights recent advances in neuromorphic devices assisted by machine learning algorithms.First,the basic structure of simple neuron models inspired by biological neurons and the information processing in simple neural networks are particularly discussed.Second,the fabrication and research progress of neuromorphic devices are presented regarding to materials and structures.Furthermore,the fabrication of neuromorphic devices,including stand-alone neuromorphic devices,neuromorphic device arrays,and integrated neuromorphic systems,is discussed and demonstrated with reference to some respective studies.The applications of neuromorphic devices assisted by machine learning algorithms in different fields are categorized and investigated.Finally,perspectives,suggestions,and potential solutions to the current challenges of neuromorphic devices are provided.
文摘Based on the Google Earth Engine cloud computing data platform,this study employed three algorithms including Support Vector Machine,Random Forest,and Classification and Regression Tree to classify the current status of land covers in Hung Yen province of Vietnam using Landsat 8 OLI satellite images,a free data source with reasonable spatial and temporal resolution.The results of the study show that all three algorithms presented good classification for five basic types of land cover including Rice land,Water bodies,Perennial vegetation,Annual vegetation,Built-up areas as their overall accuracy and Kappa coefficient were greater than 80%and 0.8,respectively.Among the three algorithms,SVM achieved the highest accuracy as its overall accuracy was 86%and the Kappa coefficient was 0.88.Land cover classification based on the SVM algorithm shows that Built-up areas cover the largest area with nearly 31,495 ha,accounting for more than 33.8%of the total natural area,followed by Rice land and Perennial vegetation which cover an area of over 30,767 ha(33%)and 15,637 ha(16.8%),respectively.Water bodies and Annual vegetation cover the smallest areas with 8,820(9.5%)ha and 6,302 ha(6.8%),respectively.The results of this study can be used for land use management and planning as well as other natural resource and environmental management purposes in the province.
基金supported by the National Natural Science Foundation of China(62394340,62394345,62473383).This work was carried out in part using computing resources at the High Performance Computing Center of Central South University。
文摘Deep neural networks are increasingly exposed to attack threats,and at the same time,the need for privacy protection is growing.As a result,the challenge of developing neural networks that are both robust and capable of strong generalization while maintaining privacy becomes pressing.Training neural networks under privacy constraints is one way to minimize privacy leakage,and one way to do this is to add noise to the data or model.However,noise may cause gradient directions to deviate from the optimal trajectory during training,leading to unstable parameter updates,slow convergence,and reduced model generalization capability.To overcome these challenges,we propose an optimization algorithm based on double-integral coevolutionary neurodynamics(DICND),designed to accelerate convergence and improve generalization in noisy conditions.Theoretical analysis proves the global convergence of the DICND algorithm and demonstrates its ability to converge to near-global minima efficiently under noisy conditions.Numerical simulations and image classification experiments further confirm the DICND algorithm's significant advantages in enhancing generalization performance.
基金supported by the National Natural Science Foundation of China(22408227,22238005)the Postdoctoral Research Foundation of China(GZC20231576).
文摘The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, how to address the problems posed by multiple variables, nonlinearities, and uncertainties during optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize the reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive model based on artificial neural network (ANN) models. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The research results show that the methanol: palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is more suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol: PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications in various reaction optimization processes.
文摘While algorithms have been created for land usage in urban settings,there have been few investigations into the extraction of urban footprint(UF).To address this research gap,the study employs several widely used image classification method classified into three categories to evaluate their segmentation capabilities for extracting UF across eight cities.The results indicate that pixel-based methods only excel in clear urban environments,and their overall accuracy is not consistently high.RF and SVM perform well but lack stability in object-based UF extraction,influenced by feature selection and classifier performance.Deep learning enhances feature extraction but requires powerful computing and faces challenges with complex urban layouts.SAM excels in medium-sized urban areas but falters in intricate layouts.Integrating traditional and deep learning methods optimizes UF extraction,balancing accuracy and processing efficiency.Future research should focus on adapting algorithms for diverse urban landscapes to enhance UF extraction accuracy and applicability.
基金supported by the“Technology Commercialization Collaboration Platform Construction”project of the Innopolis Foundation(Project Number:2710033536)the Competitive Research Fund of The University of Aizu,Japan.
文摘Sentiment Analysis,a significant domain within Natural Language Processing(NLP),focuses on extracting and interpreting subjective information-such as emotions,opinions,and attitudes-from textual data.With the increasing volume of user-generated content on social media and digital platforms,sentiment analysis has become essential for deriving actionable insights across various sectors.This study presents a systematic literature review of sentiment analysis methodologies,encompassing traditional machine learning algorithms,lexicon-based approaches,and recent advancements in deep learning techniques.The review follows a structured protocol comprising three phases:planning,execution,and analysis/reporting.During the execution phase,67 peer-reviewed articles were initially retrieved,with 25 meeting predefined inclusion and exclusion criteria.The analysis phase involved a detailed examination of each study’s methodology,experimental setup,and key contributions.Among the deep learning models evaluated,Long Short-Term Memory(LSTM)networks were identified as the most frequently adopted architecture for sentiment classification tasks.This review highlights current trends,technical challenges,and emerging opportunities in the field,providing valuable guidance for future research and development in applications such as market analysis,public health monitoring,financial forecasting,and crisis management.
文摘Objective:To identify potential key genes associated with pre-eclampsia through bioinformatics analysis,construct predictive models using machine-learning algorithms,and evaluate the models'performance in predicting pre-eclampsia.Methods:Gene-expression microarray datasets GSE10588,GSE66273,and GSE30186 related to pre-eclampsia were downloaded from the gene expression omnibus(GEO).Data were normalized using R,and differentially expressed genes(DEGs)were identified.LASSO regression was applied to further filter DEGs.Based on the selected DEGs,six machine-learning models-logistic regression(LR),random forest(RF),support vector machine(SVM),K-nearest neighbors(KNN),neural network(NN),and eXtreme gradient boosting(XGBoost)were built in R,and their performance was validated.Results:From the three datasets,a total of 1,363 genes were extracted.LASSO regression narrowed these to 265 candidate key genes.Multivariate analysis ultimately identified four genes closely associated with pre-eclampsia:EVI5,GCLM,LEP,and SYNPO2L.Using these four key genes,six machine-learning models were constructed.Receiver operating characteristic(ROC)analysis showed that all models achieved AUC>0.9:LR(AUC=0.983,95%CI=0.942-0.998),RF(AUC=0.961,95%CI=0.912-0.987),SVM(AUC=0.936,95%CI=0.879-0.972),KNN(AUC=0.970,95%CI=0.924-0.992),NN(AUC=0.916,95%CI=0.854-0.958),and XGBoost(AUC=0.952,95%CI=0.900-0.982).There was no statistically significant difference among the AUCs of the models(P>0.05).Conclusion:This study identified four key genes linked to preeclampsia through integrated bioinformatics analysis.Predictive models built on these genes can accurately forecast the occurrence of pre-eclampsia,suggesting that the four genes may serve as potential biomarkers for early diagnosis and therapeutic targeting of pre-eclampsia.
基金supported by the fllowing projects:Natural Science Foundation of China under Grant 62172436Self-Initiated Scientific Research Project of the Chinese People's Armed Police Force under Grant ZZKY20243129Basic Frontier Innovation Project of the Engineering University of the Chinese People's Armed Police Force under Grant WJY202421.
Abstract: Due to the rapid advancement of information technology, data has emerged as the core resource driving decision-making and innovation across all industries. As the foundation of artificial intelligence, machine learning (ML) has expanded its applications into intelligent recommendation systems, autonomous driving, medical diagnosis, and financial risk assessment. However, it relies on massive datasets that contain sensitive personal information. Consequently, Privacy-Preserving Machine Learning (PPML) has become a critical research direction. To address the challenges of efficiency and accuracy in encrypted data computation within PPML, Homomorphic Encryption (HE) is a crucial solution, owing to its capability to perform computations directly on encrypted data. However, the integration of machine learning and homomorphic encryption faces multiple challenges. Against this backdrop, this paper reviews homomorphic encryption technologies, with a focus on the advantages of the Cheon-Kim-Kim-Song (CKKS) algorithm in supporting approximate floating-point computations. It then reviews the development of three machine learning techniques (K-nearest neighbors (KNN), K-means clustering, and face recognition) in integration with homomorphic encryption, proposes feasible schemes for typical scenarios, and summarizes limitations and future optimization directions. Additionally, it presents a systematic exploration of the integration of homomorphic encryption and machine learning, spanning the essence of the technology, application implementation, performance trade-offs, technological convergence, and future pathways to advance technological development.
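To make the CKKS idea concrete, the sketch below encrypts a feature vector and evaluates a dot product against plaintext weights without decrypting the inputs. It assumes the open-source TenSEAL library, which the abstract does not name, and the encryption parameters are illustrative; a real PPML deployment would choose the polynomial degree and scale for its own security and precision targets.

```python
import tenseal as ts

# Illustrative CKKS parameters (not from the paper); larger degrees trade speed for depth/precision.
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()        # rotations are needed for the summation inside dot()

# Encrypt a hypothetical feature vector and score it against plaintext model weights.
enc_x = ts.ckks_vector(context, [0.5, 1.2, -0.3])
weights = [0.1, 0.4, 0.7]
enc_score = enc_x.dot(weights)        # computation happens entirely on ciphertext
print(enc_score.decrypt())            # approximate result; CKKS arithmetic is lossy by design
```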
Funding: The National Natural Science Foundation of China (52161011); the Central Guiding Local Science and Technology Development Fund Project (Guike ZY23055005, Guike ZY24212036, and Guike AB25069457); the Guangxi Science and Technology Project (2023GXNSFDA026046 and Guike AB24010247); the Scientific Research and Technology Development Program of Guilin (20220110-3 and 20230110-3); the Scientific Research and Technology Development Program of Nanning Jiangnan District (20230715-02); the Guangxi Key Laboratory of Superhard Material (2022-K-001); the Guangxi Key Laboratory of Information Materials (231003-Z, 231033-K, and 231013-Z); and the Innovation Project of GUET Graduate Education (2025YCXS177) for the financial support given to this work.
Abstract: High-entropy alloys (HEAs) have attracted considerable attention because of their excellent properties and broad compositional design space. However, traditional trial-and-error methods for screening HEAs are costly and inefficient, thereby limiting the development of new materials. Although density functional theory (DFT), molecular dynamics (MD), and thermodynamic modeling have improved design efficiency, their indirect connection to properties has led to limitations in calculation and prediction. With the awarding of the Nobel Prizes in Physics and Chemistry to researchers working on artificial intelligence (AI), there has been renewed enthusiasm for the application of machine learning (ML) in the field of alloy materials. In this study, common and advanced ML models and strategies in HEA design were introduced, and the mechanisms by which ML can contribute to composition optimization and performance prediction were investigated through case studies. The general workflow of applying ML to materials design was also introduced from the programmer's point of view, including data preprocessing, feature engineering, model training, evaluation, optimization, and interpretability. Furthermore, data scarcity, multi-model coupling, and other challenges and opportunities at the current stage were analyzed, and an outlook on future research directions was provided.
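A minimal sketch of the preprocessing-training-evaluation workflow described above is given here with scikit-learn; the feature matrix, target property, and model choice are placeholders and do not reproduce any specific HEA dataset or model from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in: engineered composition/processing descriptors -> a target property (e.g., hardness).
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 10))                         # hypothetical feature table
y = X @ rng.uniform(size=10) + rng.normal(scale=0.1, size=200)

# Preprocessing and model wrapped in one pipeline, scored by cross-validation.
pipe = make_pipeline(StandardScaler(),
                     RandomForestRegressor(n_estimators=300, random_state=0))
scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
print("mean R^2:", scores.mean())
```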
Funding: Supported by the National Research Foundation of Korea grant funded by the Korea government (MSIT) (RS-2025-16067531: Kwangwon Ahn) and the Hankuk University of Foreign Studies Research Fund (of 2025: Sihyun An).
Abstract: The nonlinearity of hedonic datasets demands flexible automated valuation models to appraise housing prices accurately, and artificial intelligence models have been employed in mass appraisal to this end. However, they have been referred to as "black-box" models owing to difficulties associated with interpretation. In this study, we compared the results of traditional hedonic pricing models with those of machine learning algorithms, e.g., random forest and deep neural network models. Commonly implemented measures, e.g., Gini importance and permutation importance, provide only the magnitude of each explanatory variable's importance, which results in ambiguous interpretability. To address this issue, we employed the SHapley Additive exPlanations (SHAP) method and explored its effectiveness through comparisons with traditionally explainable measures in hedonic pricing models. The results demonstrated that (1) the random forest model with the SHAP method can be a reliable instrument for appraising housing prices with high accuracy and sufficient interpretability, (2) the interpretable results retrieved from the SHAP method can be consolidated by the support of statistical evidence, and (3) housing characteristics and local amenities are primary contributors to property valuation, which is consistent with the findings of previous studies. Thus, our novel methodological framework and robust findings provide informative insights into the use of machine learning methods in property valuation based on the comparative analysis.
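The sketch below shows the general random-forest-plus-SHAP pattern on synthetic hedonic-style data; the feature names, coefficients, and sample sizes are invented for illustration and are not the study's dataset or results.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Illustrative hedonic-style data: rows are dwellings, columns are housing/amenity features.
rng = np.random.default_rng(2)
feature_names = ["floor_area", "age", "rooms", "dist_subway", "park_nearby"]
X = rng.uniform(size=(300, len(feature_names)))
y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=300)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer gives per-observation, per-feature contributions to the predicted price;
# averaging absolute values yields a global importance ranking with a sign-aware local view available.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0))))
```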
Funding: Support of the CNPC International Collaborative Research Project (No. 2022DQ0410).
Abstract: This study aims to eliminate the subjectivity and inconsistency inherent in the traditional International Association of Drilling Contractors (IADC) bit wear rating process, which depends heavily on the experience of drilling engineers and often leads to unreliable results. Leveraging advances in computer vision and deep learning, this research proposes an automated detection and classification method for polycrystalline diamond compact (PDC) bit damage. YOLOv10 was employed to locate the PDC bit cutters, followed by two SqueezeNet models that perform wear rating and wear type classification. A comprehensive dataset was created based on the IADC dull bit evaluation standards. Additionally, this study discusses the necessity of data augmentation and finds that certain methods, such as cropping, splicing, and mixing, may reduce the accuracy of cutter detection. The experimental results demonstrate that the proposed method significantly enhances the accuracy of bit damage detection and classification while also providing substantial improvements in processing speed and computational efficiency, offering a valuable tool for optimizing drilling operations and reducing costs.
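A rough sketch of this detect-then-classify pattern is given below, assuming the ultralytics package for YOLOv10 inference and a torchvision SqueezeNet as the crop classifier; the weight file, image name, and nine-grade output are assumptions for illustration, not the authors' trained models.

```python
import torch
from torchvision import models, transforms
from ultralytics import YOLO  # assumed package; the YOLOv10 weight name below is illustrative

# Stage 1: locate individual PDC cutters on a dull-bit photograph (hypothetical image file).
detector = YOLO("yolov10n.pt")            # in practice, fine-tuned on labeled cutter images
results = detector("dull_bit.jpg")

# Stage 2: classify each cropped cutter, e.g. into IADC wear grades 0-8 (trained separately).
classifier = models.squeezenet1_1(weights=None, num_classes=9)
preprocess = transforms.Compose([transforms.ToTensor(), transforms.Resize((224, 224))])

with torch.no_grad():
    for box in results[0].boxes.xyxy:     # one bounding box per detected cutter
        x1, y1, x2, y2 = map(int, box.tolist())
        crop = results[0].orig_img[y1:y2, x1:x2]
        logits = classifier(preprocess(crop).unsqueeze(0))
        print("predicted wear grade:", int(logits.argmax(dim=1)))
```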
Abstract: This paper explores the possibility of using machine learning algorithms to predict type 2 diabetes. We selected three commonly used classification models (random forest, support vector machine, and logistic regression), modeled patients' clinical and lifestyle data, and compared their prediction performance. We found that the random forest model achieved the highest accuracy, demonstrated excellent classification results on the test set, and better distinguished between diabetic and non-diabetic patients according to the confusion matrix and other evaluation metrics. The support vector machine and logistic regression performed slightly less well but still achieved high accuracy. The experimental results validate the effectiveness of the three machine learning algorithms, especially random forest, in the diabetes prediction task and provide useful practical experience for the intelligent prevention and control of chronic diseases. This study promotes innovation in diabetes prediction and management, which is expected to alleviate pressure on medical resources, reduce the burden of social health care, and improve the prognosis and quality of life of patients. In the future, we can consider expanding the data scale, exploring other machine learning algorithms, and integrating multimodal data to further realize the potential of artificial intelligence (AI) in the field of diabetes.
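A minimal sketch of this kind of three-model comparison with scikit-learn follows; the synthetic features stand in for the clinical and lifestyle variables, so the numbers it prints are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for clinical/lifestyle features (glucose, BMI, age, activity, ...).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("support vector machine", SVC()),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))   # rows: true class, columns: predicted class
```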
Funding: The Asian Institute of Technology, Khlong Nueng, Thailand, for their support in carrying out this study.
Abstract: Deep Learning (DL) offers promising solutions for analyzing wearable signals and gaining valuable insights into cognitive disorders. While previous review studies have explored various aspects of DL in cognitive healthcare, there remains a lack of comprehensive analysis that integrates wearable signals, data processing techniques, and the broader applications, benefits, and challenges of DL methods. Addressing this limitation, our study provides an extensive review of DL's role in cognitive healthcare, with a particular emphasis on wearables, data processing, and the inherent challenges in this field. This review also highlights the considerable promise of DL approaches in addressing a broad spectrum of cognitive issues. By enhancing the understanding and analysis of wearable signal modalities, DL models can achieve remarkable accuracy in cognitive healthcare. Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) networks have demonstrated improved performance and effectiveness in the early diagnosis and progression monitoring of neurological disorders. Beyond cognitive impairment detection, DL has been applied to emotion recognition, sleep analysis, stress monitoring, and neurofeedback. These applications enable advanced diagnosis, personalized treatment, early intervention, assistive technologies, remote monitoring, and reduced healthcare costs. Nevertheless, the integration of DL and wearable technologies presents several challenges, such as data quality, privacy, interpretability, model generalizability, ethical concerns, and clinical adoption. These challenges emphasize the importance of future research in areas such as multimodal signal analysis and explainable AI. The findings of this review aim to benefit clinicians, healthcare professionals, and society by facilitating better patient outcomes in cognitive healthcare.
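For a concrete sense of the CNN/LSTM models the review discusses, the Keras sketch below combines a 1D convolution with an LSTM to classify fixed-length windows of a multichannel wearable signal; the window length, channel count, and class set are illustrative assumptions, not taken from any reviewed study.

```python
from tensorflow.keras import layers, models

# Hypothetical shapes: windows of 250 samples x 3 wearable channels (e.g., accelerometer axes),
# classified into a small set of cognitive/affective states.
N_TIMESTEPS, N_CHANNELS, N_CLASSES = 250, 3, 4

model = models.Sequential([
    layers.Input(shape=(N_TIMESTEPS, N_CHANNELS)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),  # local waveform features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                       # longer-range temporal context
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```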