As AI systems scale, the limitations of cloud-based architectures, including latency, bandwidth, and privacy concerns, demand decentralized alternatives. Federated learning (FL) and Edge AI provide a paradigm shift by combining privacy-preserving training with efficient, on-device computation. This paper introduces a cutting-edge FL-edge integration framework, achieving a 10% to 15% increase in model accuracy and reducing communication costs by 25% in heterogeneous environments. Blockchain-based secure aggregation ensures robust and tamper-proof model updates, while exploratory quantum AI techniques enhance computational efficiency. By addressing key challenges such as device variability and non-IID data, this work sets the stage for the next generation of adaptive, privacy-first AI systems, with applications in IoT, healthcare, and autonomous systems.
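The secure-aggregation pipeline described above ultimately reduces to a federated averaging step on the server. A minimal sketch of FedAvg, with illustrative client parameters and dataset sizes (none taken from the paper):

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    aggregated = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += w * size / total
    return aggregated

# Two clients; the client with more local data dominates the average.
global_model = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[30, 10])
# → [1.5, 2.5]
```

In the paper's setting the averaging is wrapped in blockchain-backed secure aggregation, but the weighting rule itself is unchanged.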
The rapid growth of online communities has brought a corresponding rise in cyber threats, including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To address these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was rigorously evaluated using standard machine learning metrics, including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
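One minimal way to combine the BERT, ResNet50, and LSTM-3D CNN branches described above is weighted late fusion of per-class probabilities. The branch outputs and weights below are illustrative stand-ins, not values from the paper:

```python
def late_fusion(branch_probs, branch_weights):
    """Weighted average of per-class probability vectors, one per modality."""
    total = sum(branch_weights)
    fused = [0.0] * len(branch_probs[0])
    for probs, w in zip(branch_probs, branch_weights):
        for i, p in enumerate(probs):
            fused[i] += p * w / total
    return fused

# Class 0 = benign, class 1 = harmful (illustrative labels).
text_p, image_p, video_p = [0.1, 0.9], [0.5, 0.5], [0.3, 0.7]
fused = late_fusion([text_p, image_p, video_p], [0.5, 0.25, 0.25])
label = max(range(len(fused)), key=fused.__getitem__)  # → 1 (harmful)
```

A trained fusion head would learn these weights; fixed weights are enough to show how a confident text branch can outvote an uncertain image branch.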
Membrane fouling is a persistent challenge in membrane-based technologies, significantly impacting efficiency, operational costs, and system lifespan in applications like water treatment, desalination, and industrial processing. Fouling, caused by the accumulation of particulates, organic compounds, and microorganisms, leads to reduced permeability, increased energy demands, and frequent maintenance. Traditional fouling control approaches, relying on empirical models and reactive strategies, often fail to address these issues efficiently. In this context, artificial intelligence (AI) and machine learning (ML) have emerged as innovative tools offering predictive and proactive solutions for fouling management. By utilizing historical and real-time data, AI/ML techniques such as artificial neural networks, support vector machines, and ensemble models enable accurate prediction of fouling onset, identification of fouling mechanisms, and optimization of control measures. This review provides a detailed examination of the integration of AI/ML in membrane fouling prediction and mitigation, discussing advanced algorithms, the role of sensor-based monitoring, and the importance of robust datasets in enhancing predictive accuracy. Case studies highlighting successful AI/ML applications across various membrane processes are presented, demonstrating their transformative potential in improving system performance. Emerging trends, such as hybrid modeling and IoT-enabled smart systems, are explored, alongside a critical analysis of research gaps and opportunities. This review emphasizes AI/ML as a cornerstone for sustainable, cost-effective membrane operations.
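The fouling-onset prediction the review surveys can be illustrated with a deliberately simple rule-based detector on transmembrane pressure (TMP) readings. The threshold, baseline window, and readings are hypothetical; the AI/ML models discussed in the review replace this fixed rule with learned predictors on the same kind of sensor stream:

```python
def fouling_onset(tmp_readings, baseline_window=3, rise_fraction=0.2):
    """Index of the first TMP reading exceeding baseline*(1+rise_fraction), or None."""
    baseline = sum(tmp_readings[:baseline_window]) / baseline_window
    limit = baseline * (1.0 + rise_fraction)
    for i, tmp in enumerate(tmp_readings[baseline_window:], start=baseline_window):
        if tmp > limit:
            return i
    return None

tmp = [1.00, 1.02, 0.98, 1.05, 1.10, 1.25, 1.40]  # bar, illustrative
onset = fouling_onset(tmp)  # → 5 (first reading above the 1.2 bar limit)
```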
Against the background of continuous reform in medical education, biochemistry, as a fundamental medical course, maintains a close connection with clinical practice. However, under the traditional teaching model, the effectiveness of the “basic-clinical” connection is relatively poor, which hinders the improvement of educational outcomes. In the practical teaching of higher vocational medical education, the integration of the AI Case-Guided Learning System can enhance students’ enthusiasm for knowledge exploration and effectively improve teaching quality. Starting from the perspective of “basic-clinical” connection teaching in the biochemistry course, this paper analyzes the application value of the AI Case-Guided Learning System and proposes specific application strategies, aiming to accumulate experience for the innovation of biochemistry teaching.
This study explores a novel educational model of generative AI-empowered interdisciplinary project-based learning (PBL). By analyzing the current applications of generative AI technology in information technology curricula, it elucidates the technology's advantages and operational mechanisms in interdisciplinary PBL. Combining case studies and empirical research, the investigation proposes implementation pathways and strategies for the generative AI-enhanced interdisciplinary PBL model, detailing specific applications across three phases: project preparation, implementation, and evaluation. The research demonstrates that generative AI-enabled interdisciplinary project-based learning can effectively enhance students’ learning motivation, interdisciplinary thinking capabilities, and innovative competencies, providing new conceptual frameworks and practical approaches for educational model innovation.
In the evolving landscape of cyber threats, phishing attacks pose significant challenges, particularly through deceptive webpages designed to extract sensitive information under the guise of legitimacy. Conventional and machine learning (ML)-based detection systems struggle to detect phishing websites owing to their constantly changing tactics. Furthermore, newer phishing websites exhibit subtle and expertly concealed indicators that are not readily detectable. Hence, effective detection depends on identifying the most critical features. Traditional feature selection (FS) methods often fail to enhance ML model performance and can instead decrease it. To combat these issues, we propose an innovative method using explainable AI (XAI) to enhance FS in ML models and improve the identification of phishing websites. Specifically, we employ SHapley Additive exPlanations (SHAP) for a global perspective and aggregated Local Interpretable Model-agnostic Explanations (LIME) to determine specific localized patterns. The proposed SHAP and LIME-aggregated FS (SLA-FS) framework pinpoints the most informative features, enabling more precise, swift, and adaptable phishing detection. Applying this approach to an up-to-date web phishing dataset, we evaluate the performance of three ML models before and after FS to assess their effectiveness. Our findings reveal that random forest (RF), with an accuracy of 97.41%, and XGBoost (XGB), at 97.21%, significantly benefit from the SLA-FS framework, while k-nearest neighbors lags. Our framework increases the accuracy of RF and XGB by 0.65% and 0.41%, respectively, outperforming traditional filter or wrapper methods and any prior methods evaluated on this dataset, showcasing its potential.
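The aggregation idea behind SLA-FS can be reduced to ranking features by mean absolute attribution and keeping the top k. The attribution matrix and feature names below are illustrative rather than real SHAP or LIME output:

```python
def top_k_features(attributions, feature_names, k):
    """Rank features by mean |attribution| across samples; keep the top k."""
    n = len(feature_names)
    importance = [
        sum(abs(row[j]) for row in attributions) / len(attributions)
        for j in range(n)
    ]
    ranked = sorted(range(n), key=lambda j: -importance[j])
    return [feature_names[j] for j in ranked[:k]]

# Rows = samples, columns = features (hypothetical phishing indicators).
attr = [[0.8, -0.1, 0.3],
        [-0.9, 0.0, 0.2]]
names = ["url_length", "has_https", "num_subdomains"]
selected = top_k_features(attr, names, 2)  # → ['url_length', 'num_subdomains']
```

The paper's framework additionally reconciles the global (SHAP) and local (LIME) views before ranking; the reduction step shown here is the common core.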
Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients’ health status have made significant progress, limitations remain in terms of prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on Explainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that utilizes the correlation among the tasks of predicting various health status indicators in patients, enabling the parallel prediction of multiple health indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance the performance of multi-task AUC optimization. Additionally, the model’s explainability is analyzed through the stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms in the key prognostic metrics of F1 score and efficiency.
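The AUC the optimization algorithm targets has a direct pairwise definition: the probability that a randomly chosen positive sample is scored above a randomly chosen negative one, with ties counting half. A reference implementation on illustrative scores:

```python
def auc(labels, scores):
    """Pairwise AUC: fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# One positive is mis-ranked below one negative: 3 of 4 pairs correct.
score = auc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1])  # → 0.75
```

The paper optimizes a surrogate of this quantity jointly across tasks; the quadratic pairwise form above is what that surrogate approximates.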
Tungsten carbide-based (WC-based) cemented carbides are widely recognized as high-performance tool materials. Traditionally, single metals such as cobalt (Co) or nickel (Ni) serve as the binder phase, providing toughness and structural integrity. Replacing this phase with high-entropy alloys (HEAs) offers a promising approach to enhancing mechanical properties and addressing sustainability challenges. However, the complex multi-element composition of HEAs complicates conventional experimental design, making it difficult to explore the vast compositional space efficiently. Traditional trial-and-error methods are time-consuming, resource-intensive, and often ineffective in identifying optimal compositions. In contrast, artificial intelligence (AI)-driven approaches enable rapid screening and optimization of alloy compositions, significantly improving predictive accuracy and interpretability. Feature selection techniques were employed to identify key alloying elements influencing hardness, toughness, and wear resistance. To enhance model interpretability, explainable artificial intelligence (XAI) techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), were applied to quantify the contributions of individual elements and uncover complex elemental interactions. Furthermore, a high-throughput machine learning (ML)-driven screening approach was implemented to optimize the binder phase composition, facilitating the discovery of HEAs with superior mechanical properties. Experimental validation demonstrated strong agreement between model predictions and measured performance, confirming the reliability of the ML framework. This study underscores the potential of integrating ML and XAI for data-driven materials design, providing a novel strategy for optimizing high-entropy cemented carbides.
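The high-throughput screening loop reduces to enumerating candidate binder compositions and scoring each with a trained surrogate. The scoring function, element set, and weights below are hypothetical stand-ins for the paper's ML model:

```python
import itertools

def toy_surrogate(composition):
    # Hypothetical hardness score: weighted sum of element fractions.
    weights = {"Co": 1.0, "Ni": 0.8, "Cr": 1.2}
    return sum(weights[el] * frac for el, frac in composition.items())

def screen(elements, step=0.25):
    """Grid-enumerate compositions summing to 1; return the best-scoring one."""
    best_comp, best_score = None, float("-inf")
    fractions = [i * step for i in range(int(1 / step) + 1)]
    for combo in itertools.product(fractions, repeat=len(elements)):
        if abs(sum(combo) - 1.0) > 1e-9:   # fractions must sum to 1
            continue
        comp = dict(zip(elements, combo))
        score = toy_surrogate(comp)
        if score > best_score:
            best_comp, best_score = comp, score
    return best_comp, best_score

best_comp, best_hardness = screen(["Co", "Ni", "Cr"])
# With these made-up weights, the top candidate is pure Cr (score 1.2).
```

A real campaign replaces the grid with finer sampling and the toy score with the trained ML predictor, but the enumerate-score-select structure is the same.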
With the rapid popularization of artificial intelligence technology in higher education, college students are increasingly dependent on AI tools such as ChatGPT, automatic writing assistants, and intelligent translators. Behind this convenience and efficiency, a declining trend in students’ core learning abilities, such as autonomous learning, critical thinking, and knowledge construction, has gradually emerged. This study aims to explore the interactive logical mechanism between college students’ reliance on AI tools and the weakening of their learning abilities, and on this basis, to propose practical and feasible educational intervention strategies. Research has found that while AI tools lower the learning threshold, they also weaken students’ cognitive investment and independent thinking, further intensifying their reliance on technology. In this regard, this paper proposes a three-dimensional intervention path based on guided usage, ability compensation, and value reconstruction to achieve the collaborative improvement of students’ technical usage ability and learning ability. This research has theoretical value and practical significance for resolving the structural predicament of higher education in the intelligent era.
In response to the pain points of rapid iteration of front-end education technology, large differences in learner foundations, and a lack of practical scenarios, this paper combines generative artificial intelligence and AI agents to analyze the empowerment logic from three dimensions: knowledge ecology reconstruction, cognitive collaborative upgrading, and teaching methodology innovation. It explores application scenarios in teaching and learning, sorts out challenges such as technology adaptation and learning dependence, and proposes paths such as building an exclusive AI ecosystem and optimizing the guidance mechanism of intelligent agents to provide support for the digital transformation of front-end education.
Blood cell disorders are among the leading causes of serious diseases such as leukemia, anemia, blood clotting disorders, and immune-related conditions. The global incidence of hematological diseases is increasing, affecting both children and adults. In clinical practice, blood smear analysis is still largely performed manually, relying heavily on the experience and expertise of laboratory technicians or hematologists. This manual process introduces risks of diagnostic errors, especially in cases with rare or morphologically ambiguous cells. The situation is more critical in developing countries, where there is a shortage of specialized medical personnel and limited access to modern diagnostic tools. High testing costs and delays in diagnosis hinder access to quality healthcare services. In this context, the integration of Artificial Intelligence (AI), particularly Explainable AI (XAI) based on deep learning, offers a promising solution for improving the accuracy, efficiency, and transparency of hematological diagnostics. In this study, we propose a Ghost Residual Network (GRsNet) integrated with XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM), Local Interpretable Model-Agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) for automatic blood cell classification. These techniques provide visual explanations by highlighting important regions in the input images, thereby supporting clinical decision-making. The proposed model is evaluated on two public datasets, Naturalize 2K-PBC and Microscopic Blood Cell, achieving a classification accuracy of up to 95%. The results demonstrate the model’s strong potential for automated hematological diagnosis, particularly in resource-constrained settings. It not only enhances diagnostic reliability but also contributes to advancing digital transformation and equitable access to AI-driven healthcare in developing regions.
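Grad-CAM, one of the XAI techniques used here, weights each convolutional feature map by the spatially averaged gradient of the class score on it, sums the weighted maps, and clamps negatives to zero. A numpy sketch with synthetic feature maps and gradients standing in for a real network's activations:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: arrays of shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    return np.maximum(cam, 0.0)                        # ReLU: keep positive evidence

fmaps = np.array([[[1.0, 0.0], [0.0, 1.0]],
                  [[0.0, 2.0], [0.0, 0.0]]])
grads = np.array([[[0.5, 0.5], [0.5, 0.5]],       # channel 0 → weight 0.5
                  [[-1.0, -1.0], [-1.0, -1.0]]])  # channel 1 → weight -1.0
cam = grad_cam(fmaps, grads)
# cam = relu(0.5*ch0 - 1.0*ch1) = [[0.5, 0.0], [0.0, 0.5]]
```

In practice the resulting map is upsampled to the input-image size and overlaid as the heatmap clinicians inspect.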
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker’s goals and desires as well as their internal condition. Certain vocal characteristics reveal the speaker’s mood, intention, and motivation, while lexical analysis serves the speaker’s need to be understood. Voice emotion recognition has therefore become an essential component of modern HCC networks. Integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques have been developed in the past, and with the development of artificial intelligence (AI), especially Deep Learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the suggested SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used, respectively, to remove noise and extract features from the data. LSTM and SHO are then applied to the extracted data, with SHO optimizing the LSTM network’s parameters for effective emotion recognition. The proposed framework was implemented and tested in Python, and numerous metrics are used in the assessment phase to evaluate the model’s detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The SHO-LSTM’s outcomes are also contrasted with those of previously conducted research; based on these comparative assessments, our suggested approach outperforms the current approaches in vocal emotion recognition.
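The Wiener-filter denoising step can be sketched in the frequency domain: estimate per-bin clean-signal power, form the gain S/(S+N), and attenuate the spectrum. The noise-power estimate is assumed known here; in practice it would come from silent frames:

```python
import numpy as np

def wiener_denoise(frame, noise_power):
    """Attenuate each frequency bin by the Wiener gain S/(S+N)."""
    spec = np.fft.rfft(frame)
    power = np.abs(spec) ** 2
    clean = np.maximum(power - noise_power, 0.0)   # estimated clean power S
    gain = clean / (clean + noise_power + 1e-12)   # Wiener gain S/(S+N)
    return np.fft.irfft(spec * gain, n=len(frame))

t = np.arange(64) / 64.0
tone = np.sin(2 * np.pi * 4 * t)
# With noise_power = 0 the gain is ~1 on the tone's bin, so it passes through.
out = wiener_denoise(tone, noise_power=0.0)
```

MFCC extraction then operates on the denoised frames; it is not reproduced here since it depends on the filter-bank configuration.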
In this work, we have developed a novel machine (deep) learning computational framework to determine and identify damage loading parameters (conditions) for structures and materials based on the permanent or residual plastic deformation distribution or damage state of the structure. We have shown that the developed machine learning algorithm can accurately and (practically) uniquely identify both prior static and impact loading conditions in an inverse manner, based on the residual plastic strain and plastic deformation as forensic signatures. The paper presents the detailed machine learning algorithm, data acquisition and learning processes, and validation/verification examples. This development may have significant impacts on forensic material analysis and structural failure analysis, and it provides a powerful tool for material and structure forensic diagnosis, determination, and identification of damage loading conditions in accidental failure events, such as car crashes and infrastructure or building collapses.
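Stripped to its simplest form, the inverse idea treats the residual plastic-strain field as a fingerprint to be matched against a library of simulated loading-to-strain pairs. The library, labels, and nearest-neighbor matcher below are illustrative stand-ins for the paper's learned deep network:

```python
def identify_loading(measured, library):
    """Return the loading label whose strain field is closest to the measurement.

    library: list of (loading_label, strain_field) pairs from prior simulations.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda entry: dist(measured, entry[1]))[0]

library = [
    ("static_10kN", [0.001, 0.004, 0.002]),   # hypothetical simulated signatures
    ("impact_5ms",  [0.010, 0.002, 0.008]),
]
guess = identify_loading([0.009, 0.003, 0.007], library)  # → 'impact_5ms'
```

A learned model generalizes between library entries instead of snapping to the nearest one, but the forensic-signature matching principle is the same.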
Fine-grained weather forecasting data, i.e., grid data with high resolution, have attracted increasing attention in recent years, especially for specific applications such as the Winter Olympic Games. Although the European Centre for Medium-Range Weather Forecasts (ECMWF) provides grid predictions up to 240 hours ahead, the coarse data cannot meet the high requirements of these major events. In this paper, we propose a method, called model residual machine learning (MRML), to generate high-resolution grid predictions based on high-precision station forecasting. MRML applies model output machine learning (MOML) for station forecasting. Subsequently, MRML utilizes these forecasts to improve the quality of the grid data by fitting a machine learning (ML) model to the residuals. We demonstrate that MRML achieves high capability on diverse meteorological elements, specifically temperature, relative humidity, and wind speed. In addition, MRML could be easily extended to other post-processing methods by invoking different techniques. In our experiments, MRML outperforms traditional downscaling methods such as piecewise linear interpolation (PLI) on the testing data.
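MRML's core move is residual learning: keep the coarse numerical forecast and model only its error at stations. A minimal version using a constant mean-bias residual model (the paper fits a full ML model to the residuals instead), with illustrative temperatures:

```python
def fit_residual(coarse, observed):
    """Fit the simplest residual model: the mean bias of the coarse forecast."""
    residuals = [o - c for c, o in zip(coarse, observed)]
    return sum(residuals) / len(residuals)

def correct(coarse_value, bias):
    """Corrected forecast = coarse forecast + learned residual."""
    return coarse_value + bias

coarse_temps  = [20.0, 22.0, 19.0]   # grid forecast at stations, illustrative
station_temps = [21.0, 23.5, 19.5]   # station observations, illustrative
bias = fit_residual(coarse_temps, station_temps)  # → 1.0
corrected = correct(21.0, bias)                   # → 22.0
```

Replacing the constant bias with a regressor on meteorological features gives the MRML structure: the coarse field supplies the large-scale signal, the ML model the fine-scale correction.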
In this study, twelve machine learning (ML) techniques are used to accurately estimate the safety factor of rock slopes (SFRS). The dataset used for developing these models consists of 344 rock slopes from various open-pit mines around Iran, distributed between the training (80%) and testing (20%) datasets. The models are evaluated for accuracy against Janbu's limit equilibrium method (LEM) and the commercial tool GeoStudio. Statistical assessment metrics show that the random forest model is the most accurate in estimating the SFRS (MSE = 0.0182, R2 = 0.8319) and shows high agreement with the results from the LEM method. The results from the long short-term memory (LSTM) model are the least accurate (MSE = 0.037, R2 = 0.6618) of all the models tested. However, only the nu-support vector regression (NuSVR) model performs consistently with engineering practice when the value of one parameter is altered while the other parameters are held constant, and it is therefore suggested as the best model for calculating the SFRS. A graphical user interface for the proposed models is developed to further assist in the calculation of the SFRS in engineering problems. In this study, we attempt to bridge the gap between modern slope stability evaluation techniques and more conventional analysis methods.
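The reported MSE and R2 values can be reproduced from predictions directly; a quick reference implementation of both metrics on illustrative safety-factor values (not data from the study):

```python
def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [1.2, 1.5, 0.9, 1.1]   # "true" safety factors, illustrative
y_pred = [1.1, 1.6, 0.9, 1.2]   # model predictions, illustrative
err = mse(y_true, y_pred)   # → 0.0075
fit = r2(y_true, y_pred)    # → 0.84
```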
As the complexity of deep learning (DL) networks and training data grows enormously, methods that scale with computation are becoming the future of artificial intelligence (AI) development. In this regard, the interplay between machine learning (ML) and high-performance computing (HPC) is an innovative paradigm to speed up the efficiency of AI research and development. However, building and operating an HPC/AI converged system requires broad knowledge to leverage the latest computing, networking, and storage technologies. Moreover, an HPC-based AI computing environment needs an appropriate resource allocation and monitoring strategy to efficiently utilize the system resources. To this end, we introduce a technique for building and operating a high-performance AI-computing environment with the latest technologies. Specifically, an HPC/AI converged system is configured inside Gwangju Institute of Science and Technology (GIST), called the GIST AI-X computing cluster, which is built by leveraging the latest Nvidia DGX servers, high-performance storage and networking devices, and various open source tools. Therefore, it can be a good reference for building a small or middle-sized HPC/AI converged system for research and educational institutes. In addition, we propose a resource allocation method for DL jobs to efficiently utilize the computing resources with multi-agent deep reinforcement learning (mDRL). Through extensive simulations and experiments, we validate that the proposed mDRL algorithm can help the HPC/AI converged cluster achieve improvements in both system utilization and power consumption. By deploying the proposed resource allocation method to the system, total job completion time is reduced by around 20% and inefficient power consumption is reduced by around 40%.
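A learned policy such as the proposed mDRL allocator is typically compared against simple baselines. One such baseline, greedy placement of each DL job on the node with the most free GPUs, is sketched below; node names, GPU counts, and job sizes are hypothetical, and this is not the paper's algorithm:

```python
def greedy_place(jobs, free_gpus):
    """jobs: list of (job_name, gpus_needed); free_gpus: dict node -> free count.

    Each job goes to the node with the most free GPUs; if even that node
    lacks capacity, the job is marked as waiting (None).
    """
    placement = {}
    for name, need in jobs:
        node = max(free_gpus, key=free_gpus.get)
        if free_gpus[node] < need:
            placement[name] = None        # job must wait for resources
            continue
        free_gpus[node] -= need
        placement[name] = node
    return placement

nodes = {"dgx-1": 8, "dgx-2": 6}
placement = greedy_place([("train-a", 4), ("train-b", 4), ("train-c", 6)], nodes)
# → {'train-a': 'dgx-1', 'train-b': 'dgx-2', 'train-c': None}
```

The mDRL agents learn to beat exactly this kind of myopic rule by anticipating job durations and power costs.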
Motor drives form an essential part of the electric compressors, pumps, braking and actuation systems in the More-Electric Aircraft (MEA). In this paper, the application of Machine Learning (ML) in the motor-drive design and optimization process is investigated. The general idea of using ML is to train surrogate models for the optimization. This training process is based on sample data collected from detailed simulations or experiments on motor drives. However, the Surrogate Role (SR) of ML may vary for different applications. This paper first introduces the principles of ML and then proposes two SRs (a direct mapping approach and a correction approach) of ML in a motor-drive optimization process. Two different cases are given for method comparison and validation of the ML SRs. The first case uses sample data from experiments to train the ML surrogate models. For the second case, joint-simulation data are utilized for a multi-objective motor-drive optimization problem. It is found that both surrogate roles of ML can provide a good mapping model for the cases, and in the second case, three feasible design schemes are proposed and validated for the two SRs. Regarding the time consumption in optimization, the proposed ML models can give one motor-drive design point in up to 0.044 s, while it takes more than 1.5 min for the simulation-based models.
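The direct-mapping surrogate role can be illustrated by fitting a polynomial surrogate to sample points from a detailed simulation and then querying it cheaply inside an optimization loop. The "simulation" below is a made-up quadratic stand-in, not a motor-drive model:

```python
import numpy as np

def detailed_simulation(x):
    # Stand-in for an expensive simulation: loss vs. a single design variable.
    return 2.0 * x ** 2 - 3.0 * x + 1.0

# Collect a small sample set, as one would from simulation or experiment.
samples_x = np.linspace(0.0, 2.0, 9)
samples_y = detailed_simulation(samples_x)

# Fit a quadratic surrogate by least squares and query it on a dense grid.
coeffs = np.polyfit(samples_x, samples_y, deg=2)
surrogate = np.poly1d(coeffs)
x_grid = np.linspace(0.0, 2.0, 401)
x_best = x_grid[np.argmin(surrogate(x_grid))]   # ≈ 0.75, the true optimum
```

This is the speed trade the paper quantifies: once fitted, each surrogate query is microseconds, versus minutes per simulation run.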
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study’s methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
The forthcoming sixth generation (6G) of mobile communication networks is envisioned to be AI-native, supporting intelligent services and pervasive computing at unprecedented scale. Among the key paradigms enabling this vision, Federated Learning (FL) has gained prominence as a distributed machine learning framework that allows multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized storage. This capability is particularly attractive for vision-based applications, where image and video data are both sensitive and bandwidth-intensive. However, the integration of FL with 6G networks presents unique challenges, including communication bottlenecks, device heterogeneity, and trade-offs between model accuracy, latency, and energy consumption. In this paper, we developed a simulation-based framework to investigate the performance of FL in representative vision tasks under 6G-like environments. We formalize the system model, incorporating both the federated averaging (FedAvg) training process and a simplified communication cost model that captures bandwidth constraints, packet loss, and variable latency across edge devices. Using standard image datasets (e.g., MNIST, CIFAR-10) as benchmarks, we analyze how factors such as the number of participating clients, degree of data heterogeneity, and communication frequency influence convergence speed and model accuracy. Additionally, we evaluate the effectiveness of lightweight communication-efficient strategies, including local update tuning and gradient compression, in mitigating network overhead. The experimental results reveal several key insights: (i) communication limitations can significantly degrade FL convergence in vision tasks if not properly addressed; (ii) judicious tuning of local training epochs and client participation levels enables notable improvements in both efficiency and accuracy; and (iii) communication-efficient FL strategies provide a promising pathway to balance performance with the stringent latency and reliability requirements expected in 6G. These findings highlight the synergistic role of AI and next-generation networks in enabling privacy-preserving, real-time vision applications, and they provide concrete design guidelines for researchers and practitioners working at the intersection of FL and 6G.
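Top-k gradient sparsification, one of the communication-efficient strategies evaluated, in its simplest form sends only the k largest-magnitude gradient entries together with their indices:

```python
def top_k_sparsify(grad, k):
    """Return {index: value} for the k largest-|value| gradient entries."""
    idx = sorted(range(len(grad)), key=lambda i: -abs(grad[i]))[:k]
    return {i: grad[i] for i in idx}

def densify(sparse, length):
    """Reconstruct a dense gradient on the server, zero-filling missing entries."""
    out = [0.0] * length
    for i, v in sparse.items():
        out[i] = v
    return out

g = [0.01, -0.80, 0.05, 0.60, -0.02]
sparse = top_k_sparsify(g, k=2)   # → {1: -0.8, 3: 0.6}
dense = densify(sparse, len(g))   # → [0.0, -0.8, 0.0, 0.6, 0.0]
```

Production systems additionally accumulate the dropped entries locally as error feedback so that small gradients are not lost forever; that refinement is omitted here.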
文摘As AI systems scale, the limitations of cloud-based architectures, including latency, bandwidth, and privacy concerns, demand decentralized alternatives. Federated learning (FL) and Edge AI provide a paradigm shift by combining privacy preserving training with efficient, on device computation. This paper introduces a cutting-edge FL-edge integration framework, achieving a 10% to 15% increase in model accuracy and reducing communication costs by 25% in heterogeneous environments. Blockchain based secure aggregation ensures robust and tamper-proof model updates, while exploratory quantum AI techniques enhance computational efficiency. By addressing key challenges such as device variability and non-IID data, this work sets the stage for the next generation of adaptive, privacy-first AI systems, with applications in IoT, healthcare, and autonomous systems.
Abstract: The rapid growth of online communities has brought a corresponding rise in cyber threats, including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To address these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was rigorously evaluated using standard machine learning metrics, including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
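The abstract does not specify how the BERT, ResNet50, and LSTM-3D CNN branches are combined; one common choice is weighted late fusion of per-modality class probabilities, sketched below with hypothetical scores and weights:

```python
def late_fusion(modality_probs, weights):
    """Combine per-modality class probabilities with a normalized weighted
    average. The fusion weights are illustrative assumptions; the paper
    does not state its fusion rule."""
    classes = len(next(iter(modality_probs.values())))
    total_w = sum(weights[m] for m in modality_probs)
    fused = [0.0] * classes
    for m, probs in modality_probs.items():
        for c, p in enumerate(probs):
            fused[c] += weights[m] * p / total_w
    return fused

# Hypothetical [benign, harmful] scores from the three branches
probs = {
    "text (BERT)": [0.20, 0.80],
    "image (ResNet50)": [0.60, 0.40],
    "video (LSTM-3D CNN)": [0.30, 0.70],
}
weights = {"text (BERT)": 0.5, "image (ResNet50)": 0.25, "video (LSTM-3D CNN)": 0.25}
fused = late_fusion(probs, weights)
print(max(range(2), key=lambda c: fused[c]))  # -> 1 (harmful wins overall)
```

Here the text branch is weighted most heavily, so its "harmful" vote dominates even though the image branch disagrees; in practice the weights would be tuned on a validation set.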
Abstract: Membrane fouling is a persistent challenge in membrane-based technologies, significantly impacting efficiency, operational costs, and system lifespan in applications like water treatment, desalination, and industrial processing. Fouling, caused by the accumulation of particulates, organic compounds, and microorganisms, leads to reduced permeability, increased energy demands, and frequent maintenance. Traditional fouling control approaches, relying on empirical models and reactive strategies, often fail to address these issues efficiently. In this context, artificial intelligence (AI) and machine learning (ML) have emerged as innovative tools offering predictive and proactive solutions for fouling management. By utilizing historical and real-time data, AI/ML techniques such as artificial neural networks, support vector machines, and ensemble models enable accurate prediction of fouling onset, identification of fouling mechanisms, and optimization of control measures. This review provides a detailed examination of the integration of AI/ML in membrane fouling prediction and mitigation, discussing advanced algorithms, the role of sensor-based monitoring, and the importance of robust datasets in enhancing predictive accuracy. Case studies highlighting successful AI/ML applications across various membrane processes are presented, demonstrating their transformative potential in improving system performance. Emerging trends, such as hybrid modeling and IoT-enabled smart systems, are explored, alongside a critical analysis of research gaps and opportunities. This review emphasizes AI/ML as a cornerstone for sustainable, cost-effective membrane operations.
Abstract: Against the background of continuous reform in medical education, biochemistry, as a fundamental medical course, maintains a close connection with clinical practice. However, under the traditional teaching model, the effectiveness of the “basic-clinical” connection is relatively poor, which hinders the improvement of educational outcomes. In the practical teaching of higher vocational medical education, the integration of the AI Case-Guided Learning System can enhance students’ enthusiasm for knowledge exploration and effectively improve teaching quality. Starting from the perspective of “basic-clinical” connection teaching in the biochemistry course, this paper analyzes the application value of the AI Case-Guided Learning System and proposes specific application strategies, aiming to accumulate experience for the innovation of biochemistry teaching.
Abstract: This study explores a novel educational model of generative AI-empowered interdisciplinary project-based learning (PBL). By analyzing the current applications of generative AI technology in information technology curricula, it elucidates the technology's advantages and operational mechanisms in interdisciplinary PBL. Combining case studies and empirical research, the investigation proposes implementation pathways and strategies for the generative AI-enhanced interdisciplinary PBL model, detailing specific applications across three phases: project preparation, implementation, and evaluation. The research demonstrates that generative AI-enabled interdisciplinary project-based learning can effectively enhance students’ learning motivation, interdisciplinary thinking capabilities, and innovative competencies, providing new conceptual frameworks and practical approaches for educational model innovation.
Abstract: In the evolving landscape of cyber threats, phishing attacks pose significant challenges, particularly through deceptive webpages designed to extract sensitive information under the guise of legitimacy. Conventional and machine learning (ML)-based detection systems struggle to detect phishing websites owing to their constantly changing tactics. Furthermore, newer phishing websites exhibit subtle and expertly concealed indicators that are not readily detectable. Hence, effective detection depends on identifying the most critical features. Traditional feature selection (FS) methods often fail to enhance ML model performance and instead decrease it. To combat these issues, we propose an innovative method using explainable AI (XAI) to enhance FS in ML models and improve the identification of phishing websites. Specifically, we employ SHapley Additive exPlanations (SHAP) for a global perspective and aggregated local interpretable model-agnostic explanations (LIME) to determine specific localized patterns. The proposed SHAP- and LIME-aggregated FS (SLA-FS) framework pinpoints the most informative features, enabling more precise, swift, and adaptable phishing detection. Applying this approach to an up-to-date web phishing dataset, we evaluate the performance of three ML models before and after FS to assess their effectiveness. Our findings reveal that random forest (RF), with an accuracy of 97.41%, and XGBoost (XGB), at 97.21%, significantly benefit from the SLA-FS framework, while k-nearest neighbors lags. Our framework increases the accuracy of RF and XGB by 0.65% and 0.41%, respectively, outperforming traditional filter or wrapper methods and any prior methods evaluated on this dataset, showcasing its potential.
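The aggregation step at the core of an SLA-FS-style selection, combining a global importance vector with averaged local attributions and keeping the top-k features, can be sketched as follows; the attribution values are stand-ins, not real SHAP or LIME output:

```python
def sla_fs(global_scores, local_scores, k):
    """Rank features by a combined score: |global importance| (SHAP-style)
    plus the mean |local attribution| across explained instances
    (LIME-style), then keep the top k feature indices.
    All attribution values here are illustrative stand-ins."""
    n = len(global_scores)
    avg_local = [sum(abs(s[i]) for s in local_scores) / len(local_scores)
                 for i in range(n)]
    combined = [abs(g) + l for g, l in zip(global_scores, avg_local)]
    ranked = sorted(range(n), key=lambda i: combined[i], reverse=True)
    return ranked[:k]

# Four hypothetical features, e.g. URL length, domain age, HTTPS flag, dot count
global_imp = [0.40, 0.05, 0.30, 0.10]
local_attr = [[0.5, 0.0, 0.2, 0.1],   # per-instance attributions
              [0.3, 0.1, 0.4, 0.0]]
print(sla_fs(global_imp, local_attr, k=2))  # -> [0, 2]
```

The selected indices would then be used to retrain the downstream classifier (RF, XGB, etc.) on the reduced feature set.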
Funding: Funded by the Excellent Talent Training Funding Project in Dongcheng District, Beijing, with project number 2024-dchrcpyzz-9.
Abstract: Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients’ health status have made significant progress, limitations remain in terms of prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on Explainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that exploits the correlation among the tasks of predicting various health status indicators, enabling the parallel prediction of multiple health indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance the performance of multi-task AUC optimization. Additionally, the model’s explainability is analyzed through the stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms on the key prognostic metrics F1 score and efficiency.
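The AUC targeted by such a multi-task optimizer is, for each task, the probability that a randomly chosen positive case is scored above a randomly chosen negative one (with ties counted half). A minimal empirical computation on toy risk scores:

```python
def auc(scores, labels):
    """Empirical AUC via the rank statistic: fraction of (positive, negative)
    pairs in which the positive case receives the higher score."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores for five patients (label 1 = adverse outcome)
print(round(auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0]), 4))  # -> 0.8333
```

A multi-task framework would compute one such AUC per health indicator and optimize them jointly; the scores and labels above are purely illustrative.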
Abstract: Tungsten carbide-based (WC-based) cemented carbides are widely recognized as high-performance tool materials. Traditionally, single metals such as cobalt (Co) or nickel (Ni) serve as the binder phase, providing toughness and structural integrity. Replacing this phase with high-entropy alloys (HEAs) offers a promising approach to enhancing mechanical properties and addressing sustainability challenges. However, the complex multi-element composition of HEAs complicates conventional experimental design, making it difficult to explore the vast compositional space efficiently. Traditional trial-and-error methods are time-consuming, resource-intensive, and often ineffective in identifying optimal compositions. In contrast, artificial intelligence (AI)-driven approaches enable rapid screening and optimization of alloy compositions, significantly improving predictive accuracy and interpretability. Feature selection techniques were employed to identify key alloying elements influencing hardness, toughness, and wear resistance. To enhance model interpretability, the explainable artificial intelligence (XAI) techniques SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) were applied to quantify the contributions of individual elements and uncover complex elemental interactions. Furthermore, a high-throughput machine learning (ML)-driven screening approach was implemented to optimize the binder phase composition, facilitating the discovery of HEAs with superior mechanical properties. Experimental validation demonstrated strong agreement between model predictions and measured performance, confirming the reliability of the ML framework. This study underscores the potential of integrating ML and XAI for data-driven materials design, providing a novel strategy for optimizing high-entropy cemented carbides.
Funding: The 2024 Higher Education Teaching Reform Project of Guangdong University of Science and Technology, “Teaching Practice of Human Resource Management Course Based on SPOC+FC Hybrid Teaching Mode” (GKZLGC2024024).
Abstract: With the rapid popularization of artificial intelligence technology in higher education, college students are increasingly dependent on AI tools such as ChatGPT, automatic writing assistants, and intelligent translators. Behind the convenience and efficiency, a declining trend in students’ core learning abilities, such as autonomous learning, critical thinking, and knowledge construction, has gradually emerged. This study aims to explore the interactive logical mechanism between college students’ reliance on AI tools and the weakening of their learning abilities, and on this basis, to propose practical and feasible educational intervention strategies. Research has found that while AI tools lower the learning threshold, they also weaken students’ cognitive investment and independent thinking abilities, further intensifying their reliance on technology. In this regard, this paper proposes a three-dimensional intervention path based on guided usage, ability compensation, and value reconstruction to achieve the collaborative improvement of students’ technical usage ability and learning ability. This research has theoretical value and practical significance for addressing the structural predicament of higher education in the intelligent era.
Funding: Funded by two 2024 Ministry of Education supply-demand docking employment and education projects (Grant Nos. 2024101679202 and 2024121116066), the 2024 “Innovation Strong Institute Project of Guangdong Polytechnic Institute” (Grant No. 2024CQ-29), and the 2022 Guangdong Province Undergraduate Online Open Course Guidance Committee Research Project (Grant No. 2022ZXKC612).
Abstract: In response to the pain points of rapid iteration in front-end education technology, large differences in learners' foundations, and a lack of practical scenarios, this paper combines generative artificial intelligence and AI agents to analyze the empowerment logic along three dimensions: knowledge ecology reconstruction, cognitive collaborative upgrading, and teaching methodology innovation. It explores their application scenarios in teaching and learning, sorts out challenges such as technology adaptation and learning dependence, and proposes paths such as building an exclusive AI ecosystem and optimizing the guidance mechanism of intelligent agents to support the digital transformation of front-end education.
Abstract: Blood cell disorders are among the leading causes of serious diseases such as leukemia, anemia, blood clotting disorders, and immune-related conditions. The global incidence of hematological diseases is increasing, affecting both children and adults. In clinical practice, blood smear analysis is still largely performed manually, relying heavily on the experience and expertise of laboratory technicians or hematologists. This manual process introduces risks of diagnostic errors, especially in cases with rare or morphologically ambiguous cells. The situation is more critical in developing countries, where there is a shortage of specialized medical personnel and limited access to modern diagnostic tools. High testing costs and delays in diagnosis hinder access to quality healthcare services. In this context, the integration of Artificial Intelligence (AI), particularly Explainable AI (XAI) based on deep learning, offers a promising solution for improving the accuracy, efficiency, and transparency of hematological diagnostics. In this study, we propose a Ghost Residual Network (GRsNet) integrated with XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM), Local Interpretable Model-Agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) for automatic blood cell classification. These techniques provide visual explanations by highlighting important regions in the input images, thereby supporting clinical decision-making. The proposed model is evaluated on two public datasets, Naturalize 2K-PBC and Microscopic Blood Cell, achieving a classification accuracy of up to 95%. The results demonstrate the model's strong potential for automated hematological diagnosis, particularly in resource-constrained settings. It not only enhances diagnostic reliability but also contributes to advancing digital transformation and equitable access to AI-driven healthcare in developing regions.
Funding: The author, Dr. Arshiya S. Ansari, extends appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through project number R-2025-1538.
Abstract: Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). Specifically, the voice carries a great deal of information, revealing details about the speaker's goals and desires, as well as their internal state. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demand to be understood. Voice emotion recognition has become an essential component of modern HCC networks. Integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques have been developed in the past. With the development of artificial intelligence (AI), and especially Deep Learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the suggested SHO-LSTM technique. Mel-frequency cepstral coefficient (MFCC) and Wiener filter (WF) techniques are used, respectively, to extract features from the data and to remove noise. LSTM and SHO are applied to the extracted data, with SHO optimizing the LSTM network's parameters for effective emotion recognition. Python software was used to implement our proposed framework. In the assessment phase, numerous metrics are used to evaluate the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The suggested approach is tested on a Python platform, and the SHO-LSTM's outcomes are contrasted with those of previously conducted research. Based on comparative assessments, our suggested approach outperforms the current approaches in vocal emotion recognition.
Abstract: In this work, we have developed a novel machine (deep) learning computational framework to determine and identify damage loading parameters (conditions) for structures and materials based on the permanent or residual plastic deformation distribution or damage state of the structure. We have shown that the developed machine learning algorithm can accurately and (practically) uniquely identify both prior static and impact loading conditions in an inverse manner, based on the residual plastic strain and plastic deformation as forensic signatures. The paper presents the detailed machine learning algorithm, the data acquisition and learning processes, and validation/verification examples. This development may have significant impacts on forensic material analysis and structural failure analysis, and it provides a powerful tool for material and structure forensic diagnosis, determination, and identification of damage loading conditions in accidental failure events, such as car crashes and infrastructure or building collapses.
Funding: Project supported by the National Natural Science Foundation of China (Nos. 12101072 and 11421101), the National Key Research and Development Program of China (No. 2018YFF0300104), the Beijing Municipal Science and Technology Project (No. Z201100005820002), and the Open Research Fund of Shenzhen Research Institute of Big Data (No. 2019ORF01001).
Abstract: Fine-grained weather forecasting data, i.e., grid data with high resolution, have attracted increasing attention in recent years, especially for specific applications such as the Winter Olympic Games. Although the European Centre for Medium-Range Weather Forecasts (ECMWF) provides grid predictions up to 240 hours ahead, the coarse data are unable to meet the high requirements of these major events. In this paper, we propose a method, called model residual machine learning (MRML), to generate high-resolution grid predictions based on high-precision station forecasting. MRML applies model output machine learning (MOML) for station forecasting. Subsequently, MRML utilizes these forecasts to improve the quality of the grid data by fitting a machine learning (ML) model to the residuals. We demonstrate that MRML achieves high capability across diverse meteorological elements, specifically temperature, relative humidity, and wind speed. In addition, MRML can be easily extended to other post-processing methods by invoking different techniques. In our experiments, MRML outperforms traditional downscaling methods such as piecewise linear interpolation (PLI) on the testing data.
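The residual-fitting idea behind MRML, refining a coarse base forecast with a second model trained on its residuals, can be sketched with a deliberately simple constant-bias corrector standing in for the paper's actual ML model; all numbers below are illustrative:

```python
def fit_residual_corrector(base_pred, observed):
    """Learn a corrector from the residuals (observed - base prediction).
    Here the 'model' is just the mean residual, i.e. a constant bias;
    MRML would fit a full ML model to these residuals instead."""
    residuals = [o - b for b, o in zip(base_pred, observed)]
    bias = sum(residuals) / len(residuals)
    return lambda b: b + bias

base = [10.0, 12.0, 11.0, 13.0]   # coarse-grid temperature forecast (made up)
obs = [11.5, 13.5, 12.5, 14.5]    # station observations, each +1.5 here
correct = fit_residual_corrector(base, obs)
print(correct(12.0))  # -> 13.5
```

The same two-stage structure generalizes directly: replace the mean-residual corrector with any regressor, and the base forecast with the MOML station forecasts.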
Funding: Supported via funding from Prince Satam bin Abdulaziz University, project number PSAU/2024/R/1445. The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Group Research Project (Grant No. RGP.2/357/44).
Abstract: In this study, twelve machine learning (ML) techniques are used to accurately estimate the safety factor of rock slopes (SFRS). The dataset used for developing these models consists of 344 rock slopes from various open-pit mines around Iran, distributed between training (80%) and testing (20%) sets. The models are evaluated for accuracy against Janbu's limit equilibrium method (LEM) and the commercial tool GeoStudio. Statistical assessment metrics show that the random forest model is the most accurate in estimating the SFRS (MSE = 0.0182, R2 = 0.8319) and shows high agreement with the results from the LEM method. The results from the long short-term memory (LSTM) model are the least accurate (MSE = 0.037, R2 = 0.6618) of all the models tested. However, only the Nu support vector regression (NuSVR) model performs accurately in practice when the value of one parameter is altered while the other parameters are held constant, suggesting that it would be the best model to use for calculating the SFRS. A graphical user interface for the proposed models is developed to further assist in calculating the SFRS for engineering problems. In this study, we attempt to bridge the gap between modern slope stability evaluation techniques and more conventional analysis methods.
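The two metrics reported for the SFRS models, MSE and R2, can be computed as follows; the safety-factor values below are toy data, not figures from the study:

```python
def mse_r2(y_true, y_pred):
    """Mean squared error and coefficient of determination (R^2),
    the two accuracy metrics quoted in the abstract."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - (mse * n) / ss_tot  # 1 - SS_res / SS_tot
    return mse, r2

# Toy true safety factors vs. model estimates
y = [1.2, 1.5, 1.1, 1.8]
p = [1.3, 1.4, 1.1, 1.7]
m, r = mse_r2(y, p)
print(round(m, 4), round(r, 4))  # -> 0.0075 0.9
```

A lower MSE and an R2 closer to 1 both indicate a better fit, which is why RF (MSE = 0.0182, R2 = 0.8319) dominates LSTM (MSE = 0.037, R2 = 0.6618) in the study's comparison.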
Abstract: As the complexity of deep learning (DL) networks and training data grows enormously, methods that scale with computation are becoming the future of artificial intelligence (AI) development. In this regard, the interplay between machine learning (ML) and high-performance computing (HPC) is an innovative paradigm to speed up the efficiency of AI research and development. However, building and operating an HPC/AI converged system requires broad knowledge to leverage the latest computing, networking, and storage technologies. Moreover, an HPC-based AI computing environment needs an appropriate resource allocation and monitoring strategy to efficiently utilize system resources. In this regard, we introduce a technique for building and operating a high-performance AI computing environment with the latest technologies. Specifically, an HPC/AI converged system is configured inside the Gwangju Institute of Science and Technology (GIST), called the GIST AI-X computing cluster, which is built by leveraging the latest Nvidia DGX servers, high-performance storage and networking devices, and various open-source tools. Therefore, it can serve as a good reference for building a small or middle-sized HPC/AI converged system for research and educational institutes. In addition, we propose a resource allocation method for DL jobs to efficiently utilize computing resources with multi-agent deep reinforcement learning (mDRL). Through extensive simulations and experiments, we validate that the proposed mDRL algorithm can help the HPC/AI converged cluster improve both system utilization and power consumption. By deploying the proposed resource allocation method to the system, total job completion time is reduced by around 20% and inefficient power consumption by around 40%.
Funding: Funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 Research and Innovation Programme, No. 807081.
Abstract: Motor drives form an essential part of the electric compressors, pumps, braking, and actuation systems in the More-Electric Aircraft (MEA). In this paper, the application of Machine Learning (ML) in the motor-drive design and optimization process is investigated. The general idea of using ML is to train surrogate models for the optimization. This training process is based on sample data collected from detailed simulations or experiments on motor drives. However, the Surrogate Role (SR) of ML may vary for different applications. This paper first introduces the principles of ML and then proposes two SRs (a direct mapping approach and a correction approach) of ML in a motor-drive optimization process. Two different cases are given for method comparison and validation of the ML SRs. The first case uses sample data from experiments to train the ML surrogate models. For the second case, joint-simulation data are utilized for a multi-objective motor-drive optimization problem. It is found that both surrogate roles of ML can provide a good mapping model for the cases, and in the second case, three feasible design schemes of ML are proposed and validated for the two SRs. Regarding the time consumption of optimization, the proposed ML models can produce one motor-drive design point in as little as 0.044 s, whereas the simulation-based models used take more than 1.5 min.
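The "direct mapping" surrogate role can be sketched as: fit a cheap model to a few expensive evaluations, then optimize the cheap model instead of the simulation. Below, expensive_eval is a made-up one-dimensional stand-in for the paper's detailed motor-drive simulation, and the quadratic fit stands in for its trained ML surrogate:

```python
def expensive_eval(x):
    """Stand-in for a costly detailed simulation (e.g., minutes per call)."""
    return (x - 3.0) ** 2 + 1.0

def fit_quadratic(xs, ys):
    """Quadratic surrogate via Lagrange interpolation through three samples."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def surrogate(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return surrogate

samples = [0.0, 2.0, 5.0]                       # three expensive design points
surrogate = fit_quadratic(samples, [expensive_eval(x) for x in samples])
# Optimization now queries only the cheap surrogate, not the simulation
best = min((i * 0.01 for i in range(601)), key=surrogate)
print(round(best, 2))  # -> 3.0
```

The "correction" SR differs in that the ML model learns the error of a cheap analytical model rather than the mapping itself; both roles trade a few expensive samples for many near-free surrogate evaluations during optimization.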
Funding: Supported by the “Technology Commercialization Collaboration Platform Construction” project of the Innopolis Foundation (Project Number: 2710033536) and the Competitive Research Fund of The University of Aizu, Japan.
Abstract: Sentiment analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting the predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study’s methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.