Journal Articles
1,452 articles found
1. Beyond the Cloud: Federated Learning and Edge AI for the Next Decade (Cited by: 1)
Authors: Sooraj George Thomas, Praveen Kumar Myakala. Journal of Computer and Communications, 2025, No. 2, pp. 37-50 (14 pages)
As AI systems scale, the limitations of cloud-based architectures, including latency, bandwidth, and privacy concerns, demand decentralized alternatives. Federated learning (FL) and Edge AI provide a paradigm shift by combining privacy-preserving training with efficient, on-device computation. This paper introduces a cutting-edge FL-edge integration framework, achieving a 10% to 15% increase in model accuracy and reducing communication costs by 25% in heterogeneous environments. Blockchain-based secure aggregation ensures robust and tamper-proof model updates, while exploratory quantum AI techniques enhance computational efficiency. By addressing key challenges such as device variability and non-IID data, this work sets the stage for the next generation of adaptive, privacy-first AI systems, with applications in IoT, healthcare, and autonomous systems.
Keywords: Federated learning; Edge AI; Decentralized computing; Privacy-preserving AI; Blockchain; Quantum AI
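The FL-edge framework described in this abstract is not public, but the server-side aggregation step at the heart of most federated learning systems can be sketched with the standard FedAvg rule. This is a minimal illustration, not the paper's method; the function name and the client sizes below are made up for the example.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: size-weighted mean of client model parameters.

    client_weights: list of 1-D parameter vectors, one per client.
    client_sizes: number of local training samples per client.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)   # shape (n_clients, n_params)
    coeffs = sizes / sizes.sum()         # weight clients by data volume
    return coeffs @ stacked              # aggregated global parameters

# Two clients: one trained on 100 local samples, one on 300.
w_global = fedavg([np.array([1.0, 0.0]), np.array([3.0, 4.0])], [100, 300])
print(w_global)  # -> [2.5 3. ]
```

The weighting by local dataset size is what lets heterogeneous clients contribute proportionally; privacy comes from exchanging only parameters, never raw data.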
2. AI-Powered Threat Detection in Online Communities: A Multi-Modal Deep Learning Approach
Authors: Ravi Teja Potla. Journal of Computer and Communications, 2025, No. 2, pp. 155-171 (17 pages)
The rapid growth of online communities has brought an increase in cyber threats including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To address these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was rigorously evaluated using standard machine learning metrics including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
Keywords: Multi-modal AI; Deep learning; Natural Language Processing (NLP); Explainable AI (XAI); Federated learning; Cyber threat detection; LSTM; CNNs
3. Membrane Fouling Prediction and Control Using AI and Machine Learning: A Comprehensive Review
Authors: Doaa Salim, Musallam Samhan Al-Kathiri, Gaddala Babu Rao, Noor Mohammed Said Qahoor, Saikat Banerjee, Naladi Ram Babu, Gadidamalla Kavitha, Nageswara Rao Lakkimsetty, Rakesh Namdeti. Journal of Environmental & Earth Sciences, 2025, No. 6, pp. 315-350 (36 pages)
Membrane fouling is a persistent challenge in membrane-based technologies, significantly impacting efficiency, operational costs, and system lifespan in applications like water treatment, desalination, and industrial processing. Fouling, caused by the accumulation of particulates, organic compounds, and microorganisms, leads to reduced permeability, increased energy demands, and frequent maintenance. Traditional fouling control approaches, relying on empirical models and reactive strategies, often fail to address these issues efficiently. In this context, artificial intelligence (AI) and machine learning (ML) have emerged as innovative tools offering predictive and proactive solutions for fouling management. By utilizing historical and real-time data, AI/ML techniques such as artificial neural networks, support vector machines, and ensemble models enable accurate prediction of fouling onset, identification of fouling mechanisms, and optimization of control measures. This review provides a detailed examination of the integration of AI/ML in membrane fouling prediction and mitigation, discussing advanced algorithms, the role of sensor-based monitoring, and the importance of robust datasets in enhancing predictive accuracy. Case studies highlighting successful AI/ML applications across various membrane processes are presented, demonstrating their transformative potential in improving system performance. Emerging trends, such as hybrid modeling and IoT-enabled smart systems, are explored, alongside a critical analysis of research gaps and opportunities. This review emphasizes AI/ML as a cornerstone for sustainable, cost-effective membrane operations.
Keywords: Membrane fouling; Artificial intelligence (AI); Machine learning (ML); Fouling prediction; Smart membrane systems
4. Innovation in the "Basic-Clinical" Connection Teaching Model of Biochemistry Course Empowered by AI Case-Guided Learning System
Authors: Yungang Shi, Meixia Jia, Changfeng Wang. Journal of Clinical and Nursing Research, 2025, No. 9, pp. 75-80 (6 pages)
Against the background of continuous reform in medical education, biochemistry, as a fundamental medical course, maintains a close connection with clinical practice. However, under the traditional teaching model, the effectiveness of the "basic-clinical" connection is relatively poor, which hinders the improvement of educational outcomes. In the practical teaching of higher vocational medical education, the integration of the AI Case-Guided Learning System can enhance students' enthusiasm for knowledge exploration and effectively improve teaching quality. Starting from the perspective of "basic-clinical" connection teaching in the biochemistry course, this paper analyzes the application value of the AI Case-Guided Learning System and proposes specific application strategies, aiming to accumulate experience for the innovation of biochemistry teaching.
Keywords: AI Case-Guided Learning System; Biochemistry; Basic-clinical
5. Exploration of a New Educational Model Based on Generative AI-Empowered Interdisciplinary Project-Based Learning
Authors: Qijun Xu, Fengtao Hao. Journal of Educational Theory and Management, 2025, No. 1, pp. 15-18 (4 pages)
This study explores a novel educational model of generative AI-empowered interdisciplinary project-based learning (PBL). By analyzing the current applications of generative AI technology in information technology curricula, it elucidates its advantages and operational mechanisms in interdisciplinary PBL. Combining case studies and empirical research, the investigation proposes implementation pathways and strategies for the generative AI-enhanced interdisciplinary PBL model, detailing specific applications across three phases: project preparation, implementation, and evaluation. The research demonstrates that generative AI-enabled interdisciplinary project-based learning can effectively enhance students' learning motivation, interdisciplinary thinking capabilities, and innovative competencies, providing new conceptual frameworks and practical approaches for educational model innovation.
Keywords: Generative AI; Project-based learning; Educational model
6. An explainable feature selection framework for web phishing detection with machine learning
Authors: Sakib Shahriar Shafin. Data Science and Management, 2025, No. 2, pp. 127-136 (10 pages)
In the evolving landscape of cyber threats, phishing attacks pose significant challenges, particularly through deceptive webpages designed to extract sensitive information under the guise of legitimacy. Conventional and machine learning (ML)-based detection systems struggle to detect phishing websites owing to their constantly changing tactics. Furthermore, newer phishing websites exhibit subtle and expertly concealed indicators that are not readily detectable. Hence, effective detection depends on identifying the most critical features. Traditional feature selection (FS) methods often struggle to enhance ML model performance and instead decrease it. To combat these issues, we propose an innovative method using explainable AI (XAI) to enhance FS in ML models and improve the identification of phishing websites. Specifically, we employ SHapley Additive exPlanations (SHAP) for a global perspective and aggregated local interpretable model-agnostic explanations (LIME) to determine specific localized patterns. The proposed SHAP and LIME-aggregated FS (SLA-FS) framework pinpoints the most informative features, enabling more precise, swift, and adaptable phishing detection. Applying this approach to an up-to-date web phishing dataset, we evaluate the performance of three ML models before and after FS to assess their effectiveness. Our findings reveal that random forest (RF), with an accuracy of 97.41%, and XGBoost (XGB), at 97.21%, significantly benefit from the SLA-FS framework, while k-nearest neighbors lags. Our framework increases the accuracy of RF and XGB by 0.65% and 0.41%, respectively, outperforming traditional filter or wrapper methods and any prior methods evaluated on this dataset, showcasing its potential.
Keywords: Webpage phishing; Explainable AI; Feature selection; Machine learning
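The SLA-FS framework itself relies on SHAP and LIME attributions; as a lightweight, dependency-free stand-in, the same select-then-refit loop can be sketched with random-forest impurity importances on a synthetic dataset. Everything below (the data, the top-8 cutoff, the variable names) is illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a phishing dataset: 20 features, 5 truly informative.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: fit a full model and rank features by attribution
# (impurity importance here; SLA-FS would use SHAP + aggregated LIME).
full = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(full.feature_importances_)[::-1]

# Step 2: keep only the top-k most informative features and refit.
top_k = ranking[:8]
selected = RandomForestClassifier(n_estimators=100, random_state=0)
selected.fit(X_tr[:, top_k], y_tr)

acc_full = accuracy_score(y_te, full.predict(X_te))
acc_sel = accuracy_score(y_te, selected.predict(X_te[:, top_k]))
print(f"all 20 features: {acc_full:.3f}  top-8 features: {acc_sel:.3f}")
```

The point the abstract makes is that the attribution source matters: naive filter/wrapper rankings can hurt accuracy, whereas explanation-derived rankings tend to keep the features the model actually uses.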
7. Explainable AI Based Multi-Task Learning Method for Stroke Prognosis
Authors: Nan Ding, Xingyu Zeng, Jianping Wu, Liutao Zhao. Computers, Materials & Continua, 2025, No. 9, pp. 5299-5315 (17 pages)
Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients' health status have made significant progress, limitations remain in terms of prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on Explainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that utilizes the task correlation of predicting various health status indicators in patients, enabling the parallel prediction of multiple health indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance the performance of multi-task AUC optimization. Additionally, the model's explainability is analyzed through the stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms in the key prognostic metrics of F1 score and efficiency.
Keywords: Explainable AI; Stroke prognosis; Multi-task learning; AUC optimization
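The paper's adaptive low-rank multi-task AUC optimizer is specialized, but the basic pattern it builds on, predicting several correlated health indicators in parallel and scoring each task with AUC, can be sketched with scikit-learn on synthetic data. This is a simplified stand-in (independent per-task classifiers, no shared low-rank structure), not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 10))
# Two correlated binary health indicators driven by overlapping features,
# mimicking the task correlation the multi-task framework exploits.
y = np.column_stack([
    (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1500) > 0).astype(int),
    (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=1500) > 0).astype(int),
])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Parallel prediction of both indicators, one estimator per task.
model = MultiOutputClassifier(LogisticRegression()).fit(X_tr, y_tr)
probas = [est.predict_proba(X_te)[:, 1] for est in model.estimators_]
aucs = [roc_auc_score(y_te[:, t], p) for t, p in enumerate(probas)]
print([round(a, 3) for a in aucs])
```

AUC is the natural target here because clinical label distributions are typically imbalanced; the paper goes further by optimizing AUC directly across tasks rather than evaluating it after the fact.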
8. Machine Learning and Explainable AI-Guided Design and Optimization of High-Entropy Alloys as Binder Phases for WC-Based Cemented Carbides
Authors: Jianping Li, Wan Xiong, Tenghang Zhang, Hao Cheng, Kun Shen, Miaojin He, Yu Zhang, Junxin Song, Ying Deng, Qiaowang Chen. Computers, Materials & Continua, 2025, No. 8, pp. 2189-2216 (28 pages)
Tungsten carbide-based (WC-based) cemented carbides are widely recognized as high-performance tool materials. Traditionally, single metals such as cobalt (Co) or nickel (Ni) serve as the binder phase, providing toughness and structural integrity. Replacing this phase with high-entropy alloys (HEAs) offers a promising approach to enhancing mechanical properties and addressing sustainability challenges. However, the complex multi-element composition of HEAs complicates conventional experimental design, making it difficult to explore the vast compositional space efficiently. Traditional trial-and-error methods are time-consuming, resource-intensive, and often ineffective in identifying optimal compositions. In contrast, artificial intelligence (AI)-driven approaches enable rapid screening and optimization of alloy compositions, significantly improving predictive accuracy and interpretability. Feature selection techniques were employed to identify key alloying elements influencing hardness, toughness, and wear resistance. To enhance model interpretability, explainable artificial intelligence (XAI) techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), were applied to quantify the contributions of individual elements and uncover complex elemental interactions. Furthermore, a high-throughput machine learning (ML)-driven screening approach was implemented to optimize the binder phase composition, facilitating the discovery of HEAs with superior mechanical properties. Experimental validation demonstrated strong agreement between model predictions and measured performance, confirming the reliability of the ML framework. This study underscores the potential of integrating ML and XAI for data-driven materials design, providing a novel strategy for optimizing high-entropy cemented carbides.
Keywords: Cemented carbide; High-entropy binder phase; Machine learning; Hardness; Interpretable AI; Composition-property modeling
9. Research on the Influencing Mechanism of College Students' Reliance on AI Tools and Weakened Learning Ability and Educational Coping Strategies
Authors: Xiang Yuan, Ling Peng. Journal of Contemporary Educational Research, 2025, No. 6, pp. 80-86 (7 pages)
With the rapid popularization of artificial intelligence technology in higher education, college students are increasingly dependent on AI tools such as ChatGPT, automatic writing assistants, and intelligent translators. Behind the convenience and efficiency, a declining trend in students' core learning abilities, such as autonomous learning, critical thinking, and knowledge construction, has gradually emerged. This study aims to explore the interactive logical mechanism between college students' reliance on AI tools and the weakening of their learning abilities, and on this basis, propose practical and feasible educational intervention strategies. Research has found that while AI tools lower the learning threshold, they also weaken students' cognitive investment and independent thinking abilities, further intensifying their reliance on technology. In this regard, this paper proposes a three-dimensional intervention path based on guided usage, ability compensation, and value reconstruction to achieve the collaborative improvement of students' technical usage ability and learning ability. This research has theoretical value and practical significance for addressing the structural predicament of higher education in the intelligent era.
Keywords: Reliance on AI tools; Learning ability; Coping strategy; Interactive logic
10. Exploring the Path of AIGC and AI Agents Empowering Front-End Teaching and Learning
Authors: Dongxing Wang, Wang Yu, Weixing Wang. Journal of Contemporary Educational Research, 2025, No. 11, pp. 278-283 (6 pages)
In response to the pain points of rapid iteration of front-end education technology, large differences in learner foundations, and a lack of practical scenarios, this paper combines generative artificial intelligence and AI agents to analyze the empowerment logic from three dimensions: knowledge ecology reconstruction, cognitive collaborative upgrading, and teaching methodology innovation. It explores application scenarios in teaching and learning, sorts out challenges such as technology adaptation and learning dependence, and proposes paths such as building an exclusive AI ecosystem and optimizing the guidance mechanism of intelligent agents, to provide support for the digital transformation of front-end education.
Keywords: AIGC; AI agent; Front-end education; Teaching and learning efficiency
11. A Lightweight Explainable Deep Learning for Blood Cell Classification
Authors: Ngoc-Hoang-Quyen Nguyen, Thanh-Tung Nguyen, Anh-Cang Phan. Computer Modeling in Engineering & Sciences, 2025, No. 11, pp. 2435-2456 (22 pages)
Blood cell disorders are among the leading causes of serious diseases such as leukemia, anemia, blood clotting disorders, and immune-related conditions. The global incidence of hematological diseases is increasing, affecting both children and adults. In clinical practice, blood smear analysis is still largely performed manually, relying heavily on the experience and expertise of laboratory technicians or hematologists. This manual process introduces risks of diagnostic errors, especially in cases with rare or morphologically ambiguous cells. The situation is more critical in developing countries, where there is a shortage of specialized medical personnel and limited access to modern diagnostic tools. High testing costs and delays in diagnosis hinder access to quality healthcare services. In this context, the integration of Artificial Intelligence (AI), particularly Explainable AI (XAI) based on deep learning, offers a promising solution for improving the accuracy, efficiency, and transparency of hematological diagnostics. In this study, we propose a Ghost Residual Network (GRsNet) integrated with XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM), Local Interpretable Model-Agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) for automatic blood cell classification. These techniques provide visual explanations by highlighting important regions in the input images, thereby supporting clinical decision-making. The proposed model is evaluated on two public datasets, Naturalize 2K-PBC and Microscopic Blood Cell, achieving a classification accuracy of up to 95%. The results demonstrate the model's strong potential for automated hematological diagnosis, particularly in resource-constrained settings. It not only enhances diagnostic reliability but also contributes to advancing digital transformation and equitable access to AI-driven healthcare in developing regions.
Keywords: Deep learning; Blood cells; Peripheral blood smear; Blood cell classification; Explainable AI
12. Enhancing User Experience in AI-Powered Human-Computer Communication with Vocal Emotions Identification Using a Novel Deep Learning Method
Authors: Ahmed Alhussen, Arshiya Sajid Ansari, Mohammad Sajid Mohammadi. Computers, Materials & Continua, 2025, No. 2, pp. 2909-2929 (21 pages)
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demand to be understood. Voice emotion recognition has become an essential component of modern HCC networks. Integrating findings from the various disciplines involved in identifying vocal emotions is also challenging. Many sound analysis techniques were developed in the past. With the development of artificial intelligence (AI), and especially Deep Learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The RAVDESS public dataset is used to train the suggested SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used, respectively, to remove noise and to extract features from the data. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. In the assessment phase, numerous metrics are used to evaluate the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The suggested approach is implemented and tested on a Python platform, and the SHO-LSTM's outcomes are contrasted with those of previously conducted research. Based on comparative assessments, our suggested approach outperforms current approaches in vocal emotion recognition.
Keywords: Human-computer communication (HCC); Vocal emotions; Live vocal; Artificial intelligence (AI); Deep learning (DL); Selfish herd optimization-tuned long short-term memory (SHO-LSTM)
13. Research on Precision Teaching Strategies for Higher Vocational Computer Science Courses Based on Generative AI
Authors: 司元雷, 梁赛平, 张勇昌. 北京工业职业技术学院学报 (Journal of Beijing Polytechnic College), 2026, No. 1, pp. 80-84 (5 pages)
Against the backdrop of the digital transformation of vocational education and the intelligent upgrading of industry, higher vocational computer science programs face the challenge of insufficient teaching precision. To address problems such as the misalignment between teaching objectives and job requirements, subjective and vague diagnosis of students' learning conditions, and inefficient matching of teaching resources, this paper constructs precision teaching strategies with the help of generative AI, achieving precision and personalization across the whole teaching process through the reconstruction of dynamic job-competency objectives, multimodal diagnosis of learning conditions, and intelligent resource matching. Teaching practice data show that implementing these precision teaching strategies significantly improved students' higher-order thinking, project practice, and professional literacy, and substantially increased teaching satisfaction and effectiveness.
Keywords: Generative AI; Higher vocational computer science programs; Precision teaching; Multimodal learning diagnosis; Intelligent resource matching
14. A Deep Learning-Based Computational Algorithm for Identifying Damage Load Condition: An Artificial Intelligence Inverse Problem Solution for Failure Analysis (Cited by: 8)
Authors: Shaofei Ren, Guorong Chen, Tiange Li, Qijun Chen, Shaofan Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2018, No. 12, pp. 287-307 (21 pages)
In this work, we have developed a novel machine (deep) learning computational framework to determine and identify damage loading parameters (conditions) for structures and materials based on the permanent or residual plastic deformation distribution or damage state of the structure. We have shown that the developed machine learning algorithm can accurately and (practically) uniquely identify both prior static as well as impact loading conditions in an inverse manner, based on the residual plastic strain and plastic deformation as forensic signatures. The paper presents the detailed machine learning algorithm, data acquisition and learning processes, and validation/verification examples. This development may have significant impacts on forensic material analysis and structure failure analysis, and it provides a powerful tool for material and structure forensic diagnosis, determination, and identification of damage loading conditions in accidental failure events, such as car crashes and infrastructure or building structure collapses.
Keywords: Artificial intelligence (AI); Deep learning; Forensic materials engineering; Plastic deformation; Structural failure analysis
15. A station-data-based model residual machine learning method for fine-grained meteorological grid prediction (Cited by: 2)
Authors: Chuansai ZHOU, Haochen LI, Chen YU, Jiangjiang XIA, Pingwen ZHANG. Applied Mathematics and Mechanics (English Edition) (SCIE, EI, CSCD), 2022, No. 2, pp. 155-166 (12 pages)
Fine-grained weather forecasting data, i.e., grid data with high resolution, have attracted increasing attention in recent years, especially for specific applications such as the Winter Olympic Games. Although the European Centre for Medium-Range Weather Forecasts (ECMWF) provides grid predictions up to 240 hours ahead, the coarse data are unable to meet the high requirements of these major events. In this paper, we propose a method, called model residual machine learning (MRML), to generate high-resolution grid predictions based on high-precision station forecasting. MRML applies model output machine learning (MOML) for station forecasting. Subsequently, MRML utilizes these forecasts to improve the quality of the grid data by fitting a machine learning (ML) model to the residuals. We demonstrate that MRML achieves high capability on diverse meteorological elements, specifically temperature, relative humidity, and wind speed. In addition, MRML can be easily extended to other post-processing methods by invoking different techniques. In our experiments, MRML outperforms traditional downscaling methods such as piecewise linear interpolation (PLI) on the testing data.
Keywords: Machine learning (ML); Post-processing; Fine-grained weather forecasting; Model residual machine learning (MRML)
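MRML's core idea, fitting an ML model to the residuals of an existing forecast rather than to the target directly, can be sketched on toy data. The "coarse forecast" below is a made-up stand-in for a numerical weather model output, and the regressor choice is illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(2000, 1))
truth = np.sin(x[:, 0]) + 0.1 * x[:, 0]   # "true" meteorological field
coarse = 0.1 * x[:, 0]                    # biased coarse model output

# Residual learning: train on (truth - coarse) instead of truth,
# so the ML model only has to learn the coarse model's error.
residual_model = GradientBoostingRegressor(random_state=0)
residual_model.fit(x, truth - coarse)

x_test = np.linspace(0, 10, 200).reshape(-1, 1)
truth_test = np.sin(x_test[:, 0]) + 0.1 * x_test[:, 0]
corrected = 0.1 * x_test[:, 0] + residual_model.predict(x_test)

rmse_coarse = np.sqrt(np.mean((0.1 * x_test[:, 0] - truth_test) ** 2))
rmse_corr = np.sqrt(np.mean((corrected - truth_test) ** 2))
print(f"coarse RMSE {rmse_coarse:.3f} -> corrected RMSE {rmse_corr:.3f}")
```

Keeping the physical model in the loop and learning only its error is what lets residual methods beat direct interpolation-based downscaling such as PLI.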
16. Comprehensive analysis of multiple machine learning techniques for rock slope failure prediction (Cited by: 2)
Authors: Arsalan Mahmoodzadeh, Abed Alanazi, Adil Hussein Mohammed, Hawkar Hashim Ibrahim, Abdullah Alqahtani, Shtwai Alsubai, Ahmed Babeker Elhag. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, No. 11, pp. 4386-4398 (13 pages)
In this study, twelve machine learning (ML) techniques are used to accurately estimate the safety factor of rock slopes (SFRS). The dataset used for developing these models consists of 344 rock slopes from various open-pit mines around Iran, distributed between the training (80%) and testing (20%) datasets. The models are evaluated for accuracy against Janbu's limit equilibrium method (LEM) and the commercial tool GeoStudio. Statistical assessment metrics show that the random forest model is the most accurate in estimating the SFRS (MSE = 0.0182, R2 = 0.8319) and shows high agreement with the results from the LEM method. The results from the long short-term memory (LSTM) model are the least accurate (MSE = 0.037, R2 = 0.6618) of all the models tested. However, only the null space support vector regression (NuSVR) model performs accurately compared to the practice mode when the value of one parameter is altered while the other parameters are held constant, suggesting that this model would be the best one to use to calculate the SFRS. A graphical user interface for the proposed models is developed to further assist in the calculation of the SFRS for engineering problems. In this study, we attempt to bridge the gap between modern slope stability evaluation techniques and more conventional analysis methods.
Keywords: Rock slope stability; Open-pit mines; Machine learning (ML); Limit equilibrium method (LEM)
17. Multi-Agent Deep Reinforcement Learning-Based Resource Allocation in HPC/AI Converged Cluster (Cited by: 1)
Authors: Jargalsaikhan Narantuya, Jun-Sik Shin, Sun Park, JongWon Kim. Computers, Materials & Continua (SCIE, EI), 2022, No. 9, pp. 4375-4395 (21 pages)
As the complexity of deep learning (DL) networks and training data grows enormously, methods that scale with computation are becoming the future of artificial intelligence (AI) development. In this regard, the interplay between machine learning (ML) and high-performance computing (HPC) is an innovative paradigm to speed up the efficiency of AI research and development. However, building and operating an HPC/AI converged system requires broad knowledge to leverage the latest computing, networking, and storage technologies. Moreover, an HPC-based AI computing environment needs an appropriate resource allocation and monitoring strategy to efficiently utilize the system resources. In this regard, we introduce a technique for building and operating a high-performance AI-computing environment with the latest technologies. Specifically, an HPC/AI converged system, called the GIST AI-X computing cluster, is configured inside Gwangju Institute of Science and Technology (GIST), built by leveraging the latest Nvidia DGX servers, high-performance storage and networking devices, and various open source tools. Therefore, it can be a good reference for building a small or middle-sized HPC/AI converged system for research and educational institutes. In addition, we propose a resource allocation method for DL jobs to efficiently utilize the computing resources with multi-agent deep reinforcement learning (mDRL). Through extensive simulations and experiments, we validate that the proposed mDRL algorithm can help the HPC/AI converged cluster achieve both system utilization and power consumption improvements. By deploying the proposed resource allocation method to the system, total job completion time is reduced by around 20% and inefficient power consumption is reduced by around 40%.
Keywords: Deep learning; HPC/AI converged cluster; Reinforcement learning
Surrogate role of machine learning in motor-drive optimization for more-electric aircraft applications 被引量:2
18
作者 Yuan GAO Benjamin CHEONG +3 位作者 Serhiy BOZHKO Pat WHEELER Chris GERADA Tao YANG 《Chinese Journal of Aeronautics》 SCIE EI CAS CSCD 2023年第2期213-228,共16页
Motor drives form an essential part of the electric compressors, pumps, braking, and actuation systems in the More-Electric Aircraft (MEA). In this paper, the application of Machine Learning (ML) in the motor-drive design and optimization process is investigated. The general idea of using ML is to train surrogate models for the optimization. This training process is based on sample data collected from detailed simulation or experiments on motor drives. However, the Surrogate Role (SR) of ML may vary for different applications. This paper first introduces the principles of ML and then proposes two SRs (a direct mapping approach and a correction approach) of ML in a motor-drive optimization process. Two different cases are given for method comparison and validation of the ML SRs. The first case uses sample data from experiments to train the ML surrogate models. For the second case, joint-simulation data is utilized for a multi-objective motor-drive optimization problem. It is found that both surrogate roles of ML can provide a good mapping model for the cases, and in the second case, three feasible design schemes of ML are proposed and validated for the two SRs. Regarding the time consumption in optimization, the proposed ML models can give one motor-drive design point in up to 0.044 s, while it takes more than 1.5 minutes for the simulation-based models used.
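The "direct mapping" surrogate role described above learns a map from design parameters to performance metrics from sampled data. The paper trains ANN surrogates for this; as a minimal stand-in, a nearest-neighbour lookup over the sample set shows the same idea, since both answer "what performance do we predict for this candidate design?" without running the full simulation. The sample designs and performance values below are invented for illustration.

```python
def nearest_surrogate(samples, query):
    """1-NN surrogate model: predict the performance of a candidate
    design by returning the recorded performance of the closest
    sampled design (squared Euclidean distance in parameter space).

    samples: list of (design_vector, performance) pairs
    query:   design vector to evaluate
    """
    design, perf = min(
        samples,
        key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)),
    )
    return perf
```

An ANN surrogate generalizes smoothly between samples where this lookup is piecewise constant, which is why trained models are preferred once enough simulation or experimental data has been collected.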
Keywords: Artificial Neural Network (ANN); design and optimization; Machine Learning (ML); More-Electric Aircraft (MEA); motor drive; Permanent Magnet Synchronous Motor (PMSM); search algorithm; surrogate algorithm
Exploring the Effectiveness of Machine Learning and Deep Learning Algorithms for Sentiment Analysis: A Systematic Literature Review
19
Authors: Jungpil Shin, Wahidur Rahman, Tanvir Ahmed Bakhtiar Mazrur, Md. Mohsin Mia, Romana Idress Ekfa, Md. Sajib Rana, Pankoo Kim. Computers, Materials & Continua, 2025, No. 9, pp. 4105-4153 (49 pages)
Sentiment analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
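Of the three method families the review covers, the lexicon-based approach is simple enough to sketch in a few lines: score a text by counting hits against hand-curated positive and negative word lists. The tiny lexicons below are illustrative placeholders, not a real resource such as VADER or SentiWordNet.

```python
# Toy lexicons; real lexicon-based systems use curated resources
# with thousands of scored entries.
POSITIVE = {"good", "great", "excellent", "love", "amazing"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "awful"}

def lexicon_sentiment(text):
    """Classify a text as positive/negative/neutral by the difference
    between positive and negative lexicon hits."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

The LSTM classifiers the review found most common replace these fixed word lists with learned token embeddings and sequence context, which is what lets them handle negation and sarcasm that a bag-of-words lexicon count misses.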
Keywords: Natural Language Processing (NLP); Machine Learning (ML); sentiment analysis; deep learning; textual data
Federated Learning for Vision-Based Applications in 6G Networks: A Simulation-Based Performance Study
20
Authors: Manuel J. C. S. Reis, Nishu Gupta. Computer Modeling in Engineering & Sciences, 2025, No. 12, pp. 4225-4243 (19 pages)
The forthcoming sixth generation (6G) of mobile communication networks is envisioned to be AI-native, supporting intelligent services and pervasive computing at unprecedented scale. Among the key paradigms enabling this vision, Federated Learning (FL) has gained prominence as a distributed machine learning framework that allows multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized storage. This capability is particularly attractive for vision-based applications, where image and video data are both sensitive and bandwidth-intensive. However, the integration of FL with 6G networks presents unique challenges, including communication bottlenecks, device heterogeneity, and trade-offs between model accuracy, latency, and energy consumption. In this paper, we developed a simulation-based framework to investigate the performance of FL in representative vision tasks under 6G-like environments. We formalize the system model, incorporating both the federated averaging (FedAvg) training process and a simplified communication cost model that captures bandwidth constraints, packet loss, and variable latency across edge devices. Using standard image datasets (e.g., MNIST, CIFAR-10) as benchmarks, we analyze how factors such as the number of participating clients, degree of data heterogeneity, and communication frequency influence convergence speed and model accuracy. Additionally, we evaluate the effectiveness of lightweight communication-efficient strategies, including local update tuning and gradient compression, in mitigating network overhead. The experimental results reveal several key insights: (i) communication limitations can significantly degrade FL convergence in vision tasks if not properly addressed; (ii) judicious tuning of local training epochs and client participation levels enables notable improvements in both efficiency and accuracy; and (iii) communication-efficient FL strategies provide a promising pathway to balance performance with the stringent latency and reliability requirements expected in 6G. These findings highlight the synergistic role of AI and next-generation networks in enabling privacy-preserving, real-time vision applications, and they provide concrete design guidelines for researchers and practitioners working at the intersection of FL and 6G.
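The FedAvg aggregation step at the heart of the system model above is compact enough to show directly: the server averages client parameter vectors weighted by each client's local dataset size. This sketch uses plain Python lists in place of real model tensors; the client weights and dataset sizes are illustrative.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg server aggregation: weighted average of client parameter
    vectors, where each client's weight is its local dataset size.

    client_weights: list of parameter vectors (one per client)
    client_sizes:   list of local dataset sizes (same order)
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

In a full round, each client first runs several local epochs on its own data before uploading its vector; the communication-efficient strategies the paper evaluates (local update tuning, gradient compression) reduce how often and how much of this vector crosses the network.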
Keywords: federated learning; 6G networks; edge intelligence; vision-based applications; communication-efficient learning; privacy-preserving AI