Self-Explaining Autonomous Systems (SEAS) have emerged as a strategic frontier within Artificial Intelligence (AI), responding to growing demands for transparency and interpretability in autonomous decision-making. This study presents a comprehensive bibliometric analysis of SEAS research published between 2020 and February 2025, drawing upon 1380 documents indexed in Scopus. The analysis applies co-citation mapping, keyword co-occurrence, and author collaboration networks using VOSviewer, MASHA, and Python to examine scientific production, intellectual structure, and global collaboration patterns. The results indicate a sustained annual growth rate of 41.38%, with an h-index of 57 and an average of 21.97 citations per document. A normalized citation rate was computed to address temporal bias, enabling balanced evaluation across publication cohorts. Thematic analysis reveals four consolidated research fronts: interpretability in machine learning, explainability in deep neural networks, transparency in generative models, and optimization strategies in autonomous control. Author co-citation analysis identifies four distinct research communities, and keyword evolution shows growing interdisciplinary links with medicine, cybersecurity, and industrial automation. At the geographical level, the United States leads in scientific output and citation impact, while countries such as India and China show high productivity with varied influence. However, international collaboration remains limited at 7.39%, reflecting a fragmented research landscape. As discussed in this study, SEAS research is expanding rapidly yet remains epistemologically dispersed, with uneven integration of ethical and human-centered perspectives. This work offers a structured and data-driven perspective on SEAS development, highlights key contributors and thematic trends, and outlines critical directions for advancing responsible and transparent autonomous systems.
Machine learning models are increasingly used to correct the vertical biases (mainly due to vegetation and buildings) in global Digital Elevation Models (DEMs), for downstream applications which need "bare earth" elevations. The predictive accuracy of these models has improved significantly as more flexible model architectures are developed and new explanatory datasets produced, leading to the recent release of three model-corrected DEMs (FABDEM, DiluviumDEM and FathomDEM). However, there has been relatively little focus so far on explaining or interrogating these models, which is especially important in this context given their downstream impact on many other applications (including natural hazard simulations). In this study we train five separate models (by land cover environment) to correct vertical biases in the Copernicus DEM and then explain them using SHapley Additive exPlanation (SHAP) values. Comparing the models, we find significant variation in terms of the specific input variables selected and their relative importance, suggesting that an ensemble of models (specialising by land cover) is likely preferable to a general model applied everywhere. Visualising the patterns learned by the models (using SHAP dependence plots) provides further insights, building confidence in some cases (where patterns are consistent with domain knowledge and past studies) and highlighting potentially problematic variables in others (such as proxy relationships which may not apply in new application sites). Our results have implications for future DEM error prediction studies, particularly in evaluating a very wide range of potential input variables (160 candidates) drawn from topographic, multispectral, Synthetic Aperture Radar, vegetation, climate and urbanisation datasets.
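The SHAP attributions used above are Shapley values from cooperative game theory. As a minimal illustrative sketch (not the authors' pipeline, which applies the method to trained DEM-correction models), exact Shapley values for a hypothetical three-feature toy model can be computed by enumerating feature coalitions against a baseline input:

```python
from itertools import combinations
from math import factorial

def toy_model(x):
    # Hypothetical stand-in for a DEM-correction model: predicts vertical
    # bias from (tree_height, building_density, slope). Illustrative only.
    tree_height, building_density, slope = x
    return 0.6 * tree_height + 0.3 * building_density + 0.1 * slope

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    Features outside a coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in subset or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

x = [20.0, 0.5, 5.0]        # observed features at one DEM pixel (made up)
baseline = [0.0, 0.0, 0.0]  # reference "bare earth" point
phi = shapley_values(toy_model, x, baseline)
# Efficiency property: attributions sum to f(x) - f(baseline)
print(phi, sum(phi), toy_model(x) - toy_model(baseline))
```

For a linear model the attribution of each feature reduces to its weight times its deviation from the baseline, which makes the result easy to check by hand; real SHAP libraries use far more efficient estimators for tree and neural models.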
With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to efficiently utilize this data while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) We propose an explainability-driven power data privacy federated learning framework. (2) We detail a privacy budget allocation strategy: assigning budgets per training round by gradient effectiveness and at model granularity by layer importance. (3) We design a weighted aggregation strategy that considers the SHAP value and model accuracy for quality knowledge sharing. (4) Experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
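The paper's adaptive budget allocation is not reproduced here, but the DP building block it adapts can be sketched generically: a client clips its local gradient and adds Gaussian noise calibrated to an (ε, δ) budget before sharing it. The constants below are illustrative assumptions, not values from the paper:

```python
import math
import random

def gaussian_mechanism(gradient, clip_norm, epsilon, delta):
    """Clip a gradient to L2 norm `clip_norm`, then add Gaussian noise
    calibrated for (epsilon, delta)-differential privacy. Generic sketch;
    the paper additionally adapts the budget per round and per layer."""
    norm = math.sqrt(sum(g * g for g in gradient))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in gradient]
    # Standard Gaussian-mechanism calibration (L2 sensitivity = clip_norm)
    sigma = clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [g + random.gauss(0.0, sigma) for g in clipped]

random.seed(0)
noisy = gaussian_mechanism([3.0, 4.0], clip_norm=1.0, epsilon=1.0, delta=1e-5)
print(noisy)  # clipped to unit norm [0.6, 0.8], then perturbed
```

A smaller ε (stronger privacy) yields a larger σ and noisier shared gradients, which is exactly the privacy/performance tension the framework's adaptive allocation tries to manage.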
Fingerprint classification is a biometric method for crime prevention. For the successful completion of various tasks, such as official attendance, banking transactions, and membership requirements, fingerprint classification methods require improvement in terms of accuracy, speed, and the interpretability of non-linear demographic features. Researchers have introduced several CNN-based fingerprint classification models with improved accuracy, but these models often lack effective feature extraction mechanisms and rely on complex multi-neural architectures. In addition, the existing literature primarily focuses on gender classification rather than accurately, efficiently, and confidently classifying hands and fingers through the interpretability of prominent features. This research develops a compact, robust, explainable, non-linear feature-extraction-based CNN model for robust fingerprint pattern analysis and accurate yet efficient fingerprint classification. The proposed model (a) recognizes gender, hands, and fingers correctly through an advanced channel-wise attention-based feature extraction procedure, (b) accelerates the fingerprint identification process by applying an innovative fractional optimizer within a simple but effective classification architecture, and (c) interprets prominent features through an explainable artificial intelligence technique. The encapsulated dependencies among distinct complex features are captured through a non-linear activation operation within a customized CNN model. The proposed fractionally optimized convolutional neural network (FOCNN) model demonstrates improved performance compared to some existing models, achieving high accuracies of 97.85%, 99.10%, and 99.29% for finger, gender, and hand classification, respectively, utilizing the benchmark Sokoto Coventry Fingerprint Dataset.
Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges in FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, providing the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, Deep Learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
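As a concrete instance of the filter family discussed above, features can be ranked by the absolute Pearson correlation of each feature with the diagnostic label. The feature names and data below are hypothetical, purely to illustrate the mechanism:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def filter_rank(features, labels):
    """Filter-style FS: rank features by |correlation with the label|,
    independently of any downstream classifier."""
    scores = {name: abs(pearson(col, labels)) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy imaging-derived features (hypothetical names) for six patients
features = {
    "tumor_area":  [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "noise_level": [0.3, 0.1, 0.4, 0.2, 0.3, 0.1],
}
labels = [0, 0, 0, 1, 1, 1]
print(filter_rank(features, labels))  # tumor_area ranks first
```

This is the cheapest FS strategy in the taxonomy: it scores each feature independently, which is fast but, as the review notes, blind to feature interactions that wrapper and embedded methods can exploit.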
Although digital changes in power systems have added more ways to monitor and control them, these changes have also led to new cyber-attack risks, mainly from False Data Injection (FDI) attacks. When such attacks occur, sensors and operations are compromised, which can lead to serious disruptions, failures, and blackouts. In response to this challenge, this paper presents a reliable and innovative detection framework that leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks and employs explanatory methods from Artificial Intelligence (AI). Not only does the suggested architecture detect potential FDI attacks with high accuracy, but it also makes its decisions transparent, enabling operators to take appropriate action. The method developed here utilizes model-free, interpretable tools to identify essential input elements, thereby making predictions more understandable and usable. Enhancing detection performance is made possible by correcting class imbalance using Synthetic Minority Over-sampling Technique (SMOTE)-based data balancing. Detailed experiments on benchmark power system data confirm that the model functions correctly. Experimental results showed that Bi-LSTM + Explainable AI (XAI) achieved an average accuracy of 94%, surpassing XGBoost (89%) and Bagging (84%), while ensuring explainability and a high level of robustness across various operating scenarios. By conducting an ablation study, we find that bidirectional recursive modeling and ReLU activation help improve generalization and model predictability. Additionally, examining model decisions through LIME enables us to identify which features are crucial for making smart grid operational decisions in real time. The research offers a practical and flexible approach for detecting FDI attacks, improving the security of cyber-physical systems, and facilitating the deployment of AI in energy infrastructure.
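SMOTE, used above to correct class imbalance, synthesizes minority-class samples by interpolating between a minority point and one of its nearest minority neighbors. A minimal stdlib sketch follows (real pipelines would typically use a library implementation such as imbalanced-learn; the sample data are made up):

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Generate synthetic minority samples by linear interpolation between
    a randomly chosen minority point and one of its k nearest minority
    neighbors (Euclidean distance)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        p = rng.choice(minority)
        neighbors = sorted((q for q in minority if q is not p),
                           key=lambda q: dist2(p, q))[:k]
        q = rng.choice(neighbors)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + lam * (b - a) for a, b in zip(p, q)])
    return synthetic

# Hypothetical 2-feature attack (minority) samples
attack_samples = [[0.1, 0.2], [0.15, 0.25], [0.2, 0.1], [0.3, 0.3]]
new = smote(attack_samples, n_synthetic=6)
print(len(new))  # 6 synthetic minority samples
```

Because each synthetic point is a convex combination of two real minority points, it stays inside the minority region rather than simply duplicating samples, which is what lets the classifier see a denser attack class during training.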
A network intrusion detection system is critical for cyber security against illegitimate attacks. In terms of feature perspectives, network traffic may include a variety of elements such as attack reference, attack type, a subcategory of attack, host information, malicious scripts, etc. In terms of network perspectives, network traffic may contain an imbalanced number of harmful attacks when compared to normal traffic. It is challenging to identify a specific attack due to complex features and data imbalance issues. To address these issues, this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic (IDS-INT). IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data. First, detailed information about each type of attack is gathered from network interaction descriptions, which include network nodes, attack type, reference, host information, etc. Second, a transformer-based transfer learning approach is developed to learn detailed feature representations using their semantic anchors. Third, the Synthetic Minority Oversampling Technique (SMOTE) is implemented to balance abnormal traffic and detect minority attacks. Fourth, a Convolutional Neural Network (CNN) model is designed to extract deep features from the balanced network traffic. Finally, a hybrid CNN-Long Short-Term Memory (CNN-LSTM) model is developed to detect different types of attacks from the deep features. Detailed experiments are conducted to test the proposed approach using three standard datasets, i.e., UNSW-NB15, CIC-IDS2017, and NSL-KDD. An explainable AI approach is implemented to interpret the proposed method and develop a trustable model.
With the increasing world population, the demand for food production has increased exponentially. Internet of Things (IoT)-based smart agriculture systems can play a vital role in optimising crop yield by managing crop requirements in real time. Interpretability can be an important factor in making such systems trusted and easily adopted by farmers. In this paper, we propose a novel artificial intelligence-based agriculture system that uses IoT data to monitor the environment and alerts farmers to take the required actions for maintaining ideal conditions for crop production. The strength of the proposed system is in its interpretability, which makes it easy for farmers to understand, trust, and use it. The use of fuzzy logic makes the system customisable in terms of the types/number of sensors and the type of crop, and adaptable to any soil types and weather conditions. The proposed system can identify anomalous data due to security breaches or hardware malfunction using machine learning algorithms. To ensure the viability of the system, we have conducted thorough research related to agricultural factors such as soil type, soil moisture, soil temperature, plant life cycle, irrigation requirement, and water application timing for maize as our target crop. The experimental results show that our proposed system is interpretable, can detect anomalous data, and triggers actions accurately based on crop requirements.
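A fuzzy-logic rule of the kind such a system might use can be sketched with triangular membership functions. The thresholds below are illustrative assumptions, not values from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def irrigation_need(soil_moisture_pct, temperature_c):
    """Mamdani-style rule: IF soil moisture is LOW AND temperature is HIGH
    THEN irrigation need is strong. Rule strength = min of memberships.
    Membership breakpoints are hypothetical, for illustration only."""
    moisture_low = tri(soil_moisture_pct, 0, 10, 30)
    temp_high = tri(temperature_c, 25, 35, 45)
    return min(moisture_low, temp_high)

print(irrigation_need(10, 35))  # both memberships at their peak: rule fires fully
print(irrigation_need(25, 30))  # partial memberships: weaker activation
```

The appeal for interpretability is that each output can be traced back to human-readable rules and degrees of membership, rather than to opaque model weights, which matches the paper's emphasis on farmer trust.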
Ensemble forecasting has become the prevailing method in current operational weather forecasting. Although ensemble mean forecast skill has been studied for many ensemble prediction systems (EPSs) and different cases, theoretical analysis regarding ensemble mean forecast skill has rarely been investigated, especially quantitative analysis without any assumptions about ensemble members. This paper investigates fundamental questions about the ensemble mean, such as the advantage of the ensemble mean over individual members, the potential skill of the ensemble mean, and the skill gain of the ensemble mean with increasing ensemble size. The average error coefficient between each pair of ensemble members is the most important factor in ensemble mean forecast skill, which determines the mean-square error of ensemble mean forecasts and the skill gain with increasing ensemble size. More members are useful if the errors of the members have lower correlations with each other, and vice versa. The theoretical investigation in this study is verified by application with the T213 EPS. A typical EPS has an average error coefficient of between 0.5 and 0.8; the 15-member T213 EPS used here reaches a saturation degree of 95% (i.e., a maximum 5% skill gain by adding new members with similar skill to the existing members) for 1–10-day lead time predictions, as far as the mean-square error is concerned.
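The role of the average error coefficient can be made concrete with a standard statistical identity (a generic sketch, not the paper's T213-specific derivation): for n members with identical error variance σ² and average pairwise error correlation c, the expected mean-square error of the ensemble mean is σ²(1 + (n − 1)c)/n, which decays toward the floor σ²·c as n grows:

```python
def ensemble_mean_mse(sigma2, c, n):
    """Expected mean-square error of an n-member ensemble mean, assuming
    unbiased member errors with identical variance sigma2 and average
    pairwise error correlation c."""
    return sigma2 * (1 + (n - 1) * c) / n

sigma2, c = 1.0, 0.65  # a typical EPS has c between 0.5 and 0.8
for n in (1, 5, 15, 50):
    print(n, round(ensemble_mean_mse(sigma2, c, n), 4))
# MSE falls from 1.0 toward the asymptote sigma2 * c = 0.65, so most of
# the achievable gain is already realized at modest ensemble sizes.
```

This is consistent with the saturation behavior described above: with c around 0.65, a 15-member ensemble sits within a few percent of the theoretical floor, so adding similar members buys little further skill.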
As more medical data become digitalized, machine learning is regarded as a promising tool for constructing medical decision support systems. Even with vast medical data volumes, machine learning is still not fully exploiting its potential because the data usually sit in data silos, and privacy and security regulations restrict their access and use. To address these issues, we built a secured and explainable machine learning framework, called explainable federated XGBoost (EXPERTS), which can share valuable information among different medical institutions to improve the learning results without sharing the patients' data. It also reveals how the machine makes a decision through eigenvalues to offer a more insightful answer to medical professionals. To study the performance, we evaluate our approach on real-world datasets, and our approach outperforms the benchmark algorithms under both federated learning and non-federated learning frameworks.
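The federated step underlying frameworks like this one is commonly FedAvg-style weighted averaging of client parameters by local sample counts. The sketch below illustrates that generic building block only; EXPERTS federates gradient-boosted trees, so this is an analogy to the aggregation idea, not its actual procedure, and all data are made up:

```python
def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors, weighted by the number
    of local samples each client trained on (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    agg = [0.0] * dim
    for params, size in zip(client_params, client_sizes):
        w = size / total
        for j in range(dim):
            agg[j] += w * params[j]
    return agg

# Three hospitals with different dataset sizes; raw patient data never
# leaves any hospital, only the learned parameters are shared.
hospitals = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 300, 600]
print(fedavg(hospitals, sizes))  # -> approximately [4.0, 5.0]
```

Weighting by sample count keeps the aggregate close to what centralized training on the pooled data would produce, which is the core privacy/performance trade the abstract describes.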
The NICFED expert system, a rule-based "non-invasive cardiac function evaluation and cardiac diseases diagnosing expert system", is discussed in this paper. The system can be regarded both as an interpretation expert system and as a diagnostic expert system. When it is applied to evaluate cardiac function, it can explain more than one hundred parameters detected by the "MCA-III cardiac function device of multi-domain and multi-dimension". With these parameters, the cardiac function in the time domain, fre…
The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating some challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with a great responsibility, as an incomplete or unbalanced set of training data or an improper interpretation of the models' outcomes could result in misleading conclusions that ultimately could become very dangerous. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every user can count on this specific expertise; non-AI-expert users could also benefit from applying these powerful algorithms to their domain problems, but they need basic guidelines to obtain the most out of AI models. The goal of this work is to present a systematic review of the literature to analyze studies whose outcomes are explainable rules and heuristics for selecting suitable AI algorithms given a set of input features. The systematic review follows the methodology proposed by Kitchenham and other authors in the field of software engineering. As a result, 9 papers that tackle AI algorithm recommendation through tangible and traceable rules and heuristics were collected. The reduced number of retrieved papers suggests a lack of reporting of explicit rules and heuristics when testing the suitability and performance of AI algorithms.
Cyber-attacks on cyber-physical systems (CPSs) result in sensing and actuation misbehavior, severe damage to physical objects, and safety risks. Machine learning (ML) models have been presented to hinder cyberattacks on the CPS environment; however, the non-existence of labelled data from new attacks makes their detection challenging. An Intrusion Detection System (IDS) is commonly utilized to detect and classify the existence of intrusions in the CPS environment, acting as an important part of a secure CPS environment. Latest developments in deep learning (DL) and explainable artificial intelligence (XAI) stimulate new IDSs to manage cyberattacks with minimum complexity and high sophistication. In this aspect, this paper presents an XAI-based IDS using feature selection with a Dirichlet Variational Autoencoder (XAIIDS-FSDVAE) model for CPS. The proposed model encompasses a coyote optimization algorithm (COA)-based feature selection (FS) model derived to select an optimal subset of features. Next, an intelligent Dirichlet Variational Autoencoder (DVAE) technique is employed for the anomaly detection process in the CPS environment. Finally, the parameter optimization of the DVAE takes place using a manta ray foraging optimization (MRFO) model to tune the parameters of the DVAE. In order to determine the enhanced intrusion detection efficiency of the XAIIDS-FSDVAE technique, a wide range of simulations take place using benchmark datasets. The experimental results reported the better performance of the XAIIDS-FSDVAE technique over recent methods in terms of several evaluation parameters.
In Internet of Things (IoT)-based systems, multi-level clients' requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). The UCS necessitates heterogeneity, management level, and data transmission for distributed users. Simultaneously, security remains a major issue in the IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficacy and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission processes. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimentation study is applied, and the outcomes are assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over the current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and accuracy of 99.38%.
Clinical simulated experiment (CSE) is an intermediate experiment in clinic with modern functional simulation. Traditional Chinese Medicine (TCM) has a long history with a unique system of medical theory which has not been standardized and cannot be fully explained by modern natural sciences. CSE on the syndromic standards of pulmonary system diseases (SSOPSD) was carried out by following TCM's theo…
In a convolutional neural network (CNN) classification model for diagnosing medical images, transparency and interpretability of the model's behavior are crucial in addition to high classification accuracy. In this study, we constructed an interpretable CNN-based model for breast density classification using spectral information from mammograms. We evaluated whether the model's prediction scores provided reliable probability values using a reliability diagram and visualized the basis for the final prediction. In constructing the classification model, we modified ResNet50 and introduced algorithms for extracting and inputting image spectra, visualizing network behavior, and quantifying prediction ambiguity. From the experimental results, our proposed model demonstrated not only high classification accuracy but also higher reliability and interpretability compared to conventional CNN models that use pixel information from images. Furthermore, our proposed model can detect misclassified data and indicate an explicit basis for prediction. The results demonstrated the effectiveness and usefulness of our proposed model from the perspective of credibility and transparency.
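A reliability diagram of the kind used above bins predictions by confidence and compares mean confidence to observed accuracy in each bin; the gap can be summarized as the expected calibration error (ECE). A minimal sketch with made-up predictions, not the paper's data:

```python
def reliability_bins(confidences, correct, n_bins=5):
    """Bin predictions by confidence and compare mean confidence to observed
    accuracy in each bin. Also returns the expected calibration error (ECE):
    the sample-weighted mean absolute confidence-accuracy gap."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    rows, ece, total = [], 0.0, len(confidences)
    for b in bins:
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(o for _, o in b) / len(b)
        rows.append((round(mean_conf, 3), round(accuracy, 3), len(b)))
        ece += len(b) / total * abs(mean_conf - accuracy)
    return rows, ece

conf = [0.95, 0.9, 0.85, 0.7, 0.65, 0.55, 0.3]  # predicted confidences
hit = [1, 1, 1, 1, 0, 1, 0]                     # 1 if prediction was correct
rows, ece = reliability_bins(conf, hit)
print(rows)              # (mean confidence, accuracy, count) per occupied bin
print(round(ece, 3))
```

A well-calibrated classifier has bin accuracy close to bin confidence (points near the diagonal of the diagram) and an ECE near zero, which is what the reliability evaluation in the study checks for.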
E. Stone, in the article "18 Mysteries and Unanswered Questions About Our Solar System" (Little Astronomy), wrote: One of the great things about astronomy is that there is still so much out there for us to discover. There are so many unanswered questions and mysteries about the universe. There is always a puzzle to solve and that is part of beauty. Even in our own neighborhood, the Solar System, there are many questions we still have not been able to answer [1]. In the present paper, we explain the majority of these mysteries and some other unexplained phenomena in the Solar System (SS) within the framework of the developed Hypersphere World-Universe Model (WUM) [2].
Wildfires significantly disrupt the physical and hydrologic conditions of the environment, leading to vegetation loss and altered surface geo-material properties. These complex dynamics promote post-fire gully erosion, yet the key conditioning factors (e.g., topography, hydrology) remain insufficiently understood. This study proposes a novel artificial intelligence (AI) framework that integrates four machine learning (ML) models with the Shapley Additive Explanations (SHAP) method, offering a hierarchical perspective, from global to local, on the dominant factors controlling gully distribution in wildfire-affected areas. In a case study of the Xiangjiao catchment, burned on March 28, 2020, in Muli County in Sichuan Province of Southwest China, we derived 21 geoenvironmental factors to assess the susceptibility of post-fire gully erosion using logistic regression (LR), support vector machine (SVM), random forest (RF), and convolutional neural network (CNN) models. SHAP-based model interpretation revealed eight key conditioning factors: topographic position index (TPI), topographic wetness index (TWI), distance to stream, mean annual precipitation, differenced normalized burn ratio (dNBR), land use/cover, soil type, and distance to road. Comparative model evaluation demonstrated that reduced-variable models incorporating these dominant factors achieved accuracy comparable to that of the initial-variable models, with AUC values exceeding 0.868 across all ML algorithms. These findings provide critical insights into gully erosion behavior in wildfire-affected areas, supporting the decision-making process behind environmental management and hazard mitigation.
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO₂ cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to high accuracy up to 97% with the 75% quantile of the training set as the classification criterion. The feature contribution was further evaluated with SHAP and PDP analysis to provide a certain physical understanding. 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day using the model, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top performance experimentally among reported MOFs, in good agreement with the prediction.
The increasing use of cloud-based devices has made cybersecurity and unwanted network traffic a critical concern. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as IDS, have been developed to tackle these issues. However, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, and heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and the Naive Bayes algorithm performs the classification of features. The selected features are classified further and forwarded to the Heterogeneous Attention Transformer (HAT) module. In this module, the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to enhance preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD were conducted to demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
Funding: Partially funded by the Programa Nacional de Becas y Crédito Educativo of Peru and the Universitat de València, Spain.
Abstract: Self-Explaining Autonomous Systems (SEAS) have emerged as a strategic frontier within Artificial Intelligence (AI), responding to growing demands for transparency and interpretability in autonomous decision-making. This study presents a comprehensive bibliometric analysis of SEAS research published between 2020 and February 2025, drawing upon 1380 documents indexed in Scopus. The analysis applies co-citation mapping, keyword co-occurrence, and author collaboration networks using VOSviewer, MASHA, and Python to examine scientific production, intellectual structure, and global collaboration patterns. The results indicate a sustained annual growth rate of 41.38%, with an h-index of 57 and an average of 21.97 citations per document. A normalized citation rate was computed to address temporal bias, enabling balanced evaluation across publication cohorts. Thematic analysis reveals four consolidated research fronts: interpretability in machine learning, explainability in deep neural networks, transparency in generative models, and optimization strategies in autonomous control. Author co-citation analysis identifies four distinct research communities, and keyword evolution shows growing interdisciplinary links with medicine, cybersecurity, and industrial automation. At the geographical level, the United States leads in scientific output and citation impact, while countries such as India and China show high productivity with varied influence. However, international collaboration remains limited at 7.39%, reflecting a fragmented research landscape. As discussed in this study, SEAS research is expanding rapidly yet remains epistemologically dispersed, with uneven integration of ethical and human-centered perspectives. This work offers a structured and data-driven perspective on SEAS development, highlights key contributors and thematic trends, and outlines critical directions for advancing responsible and transparent autonomous systems.
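The abstract mentions a normalized citation rate computed to offset temporal bias but does not spell out the formula. A minimal sketch of one common scheme, normalizing each document's citations by the mean of its publication-year cohort (the function name and sample numbers below are illustrative, not from the study):

```python
from collections import defaultdict

def normalized_citations(records):
    """Normalize each document's citation count by the mean citation count
    of all documents published in the same year (its cohort)."""
    cohorts = defaultdict(list)
    for year, cites in records:
        cohorts[year].append(cites)
    cohort_mean = {y: sum(c) / len(c) for y, c in cohorts.items()}
    return [cites / cohort_mean[year] if cohort_mean[year] > 0 else 0.0
            for year, cites in records]

# Toy data: older papers have more raw citations, but the normalized
# rates make the two cohorts directly comparable.
docs = [(2020, 40), (2020, 10), (2024, 4), (2024, 1)]
rates = normalized_citations(docs)
```

A value above 1.0 marks a document cited more than its year-cohort average, regardless of how long it has been accumulating citations.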
Abstract: Machine learning models are increasingly used to correct the vertical biases (mainly due to vegetation and buildings) in global Digital Elevation Models (DEMs) for downstream applications that need "bare earth" elevations. The predictive accuracy of these models has improved significantly as more flexible model architectures are developed and new explanatory datasets produced, leading to the recent release of three model-corrected DEMs (FABDEM, DiluviumDEM and FathomDEM). However, there has been relatively little focus so far on explaining or interrogating these models, which is especially important in this context given their downstream impact on many other applications (including natural hazard simulations). In this study we train five separate models (by land cover environment) to correct vertical biases in the Copernicus DEM and then explain them using SHapley Additive exPlanation (SHAP) values. Comparing the models, we find significant variation in terms of the specific input variables selected and their relative importance, suggesting that an ensemble of models (specialising by land cover) is likely preferable to a general model applied everywhere. Visualising the patterns learned by the models (using SHAP dependence plots) provides further insights, building confidence in some cases (where patterns are consistent with domain knowledge and past studies) and highlighting potentially problematic variables in others (such as proxy relationships which may not apply in new application sites). Our results have implications for future DEM error prediction studies, particularly in evaluating a very wide range of potential input variables (160 candidates) drawn from topographic, multispectral, Synthetic Aperture Radar, vegetation, climate and urbanisation datasets.
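The study explains its DEM-correction models with SHAP values. As a hedged illustration of the underlying idea (not the authors' pipeline), SHAP values for a plain linear model have a closed form under a feature-independence assumption, phi_i = w_i * (x_i - mean_i), and satisfy the efficiency property that the contributions sum to the prediction minus the mean prediction:

```python
import numpy as np

def linear_shap(w, X_background, x):
    """Exact SHAP values for a linear model f(x) = w @ x + b under the
    independent-features assumption: phi_i = w_i * (x_i - mean_i)."""
    mu = X_background.mean(axis=0)
    return w * (x - mu)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # background dataset
w, b = np.array([2.0, -1.0, 0.5]), 0.3   # hypothetical linear model
x = np.array([1.0, 0.0, -2.0])           # instance to explain
phi = linear_shap(w, X, x)
```

For tree ensembles (as typically used for DEM correction) the same efficiency property holds, but the values are computed by TreeSHAP rather than this closed form.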
Abstract: With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to efficiently utilize this data while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) We propose an explainability-driven power data privacy federated learning framework. (2) We detail a privacy budget allocation strategy: assigning budgets per training round by gradient effectiveness and at model granularity by layer importance. (3) We design a weighted aggregation strategy that considers the SHAP value and model accuracy for quality knowledge sharing. (4) Experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
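The abstract describes allocating the DP budget by layer importance and perturbing updates with noise, but the exact strategy is not given. The following is only a sketch under standard Gaussian-mechanism assumptions, with an importance-proportional split that is purely illustrative:

```python
import numpy as np

def allocate_budget(eps_total, layer_importance):
    """Split a per-round privacy budget across layers in proportion to a
    per-layer importance score (the scores here are hypothetical)."""
    imp = np.asarray(layer_importance, dtype=float)
    return eps_total * imp / imp.sum()

def gaussian_mechanism(grad, eps, delta=1e-5, sensitivity=1.0):
    """Release a gradient under (eps, delta)-DP by adding Gaussian noise,
    using the standard calibration sigma = s * sqrt(2 ln(1.25/delta)) / eps."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    noise = np.random.default_rng(0).normal(0.0, sigma, size=np.shape(grad))
    return np.asarray(grad) + noise

eps_layers = allocate_budget(1.0, [3.0, 1.0])     # more budget to the "important" layer
noisy_grad = gaussian_mechanism(np.zeros(4), eps_layers[0])
```

Giving a more important layer a larger share of epsilon means it receives less noise, which is one plausible way to trade privacy against utility per layer.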
Abstract: Fingerprint classification is a biometric method for crime prevention. For the successful completion of various tasks, such as official attendance, banking transactions, and membership requirements, fingerprint classification methods require improvement in terms of accuracy, speed, and the interpretability of non-linear demographic features. Researchers have introduced several CNN-based fingerprint classification models with improved accuracy, but these models often lack effective feature-extraction mechanisms and rely on complex multi-network architectures. In addition, the existing literature primarily focuses on gender classification rather than accurately, efficiently, and confidently classifying hands and fingers through the interpretability of prominent features. This research develops a compact, robust, explainable, non-linear feature-extraction-based CNN model for robust fingerprint pattern analysis and accurate yet efficient fingerprint classification. The proposed model (a) recognizes gender, hands, and fingers correctly through an advanced channel-wise attention-based feature-extraction procedure, (b) accelerates the fingerprint identification process by applying an innovative fractional optimizer within a simple but effective classification architecture, and (c) interprets prominent features through an explainable artificial intelligence technique. The encapsulated dependencies among distinct complex features are captured through a non-linear activation operation within a customized CNN model. The proposed fractionally optimized convolutional neural network (FOCNN) model demonstrates improved performance compared to existing models, achieving high accuracies of 97.85%, 99.10%, and 99.29% for finger, gender, and hand classification, respectively, on the benchmark Sokoto Coventry Fingerprint Dataset.
Abstract: Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges in FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, offering the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, Deep Learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
Funding: The Deanship of Scientific Research and Libraries at Princess Nourah bint Abdulrahman University funded this research work through the Research Group project, Grant No. RG-1445-0064.
Abstract: Although digital changes in power systems have added more ways to monitor and control them, these changes have also introduced new cyber-attack risks, mainly from False Data Injection (FDI) attacks. When such attacks succeed, sensors and operations are compromised, which can lead to major disruptions, failures, and blackouts. In response to this challenge, this paper presents a reliable and innovative detection framework that leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks and employs explanatory methods from Artificial Intelligence (AI). Not only does the suggested architecture detect potential attacks with high accuracy, but it also makes its decisions transparent, enabling operators to take appropriate action. The method developed here utilizes model-free, interpretable tools to identify essential input elements, thereby making predictions more understandable and usable. Detection performance is further enhanced by correcting class imbalance using Synthetic Minority Over-sampling Technique (SMOTE)-based data balancing. Detailed experiments on benchmark power system data confirm that the model functions correctly. Experimental results showed that Bi-LSTM + Explainable AI (XAI) achieved an average accuracy of 94%, surpassing XGBoost (89%) and Bagging (84%), while ensuring explainability and a high level of robustness across various operating scenarios. An ablation study shows that bidirectional recurrent modeling and ReLU activation help improve generalization and model predictability. Additionally, examining model decisions through LIME enables us to identify which features are crucial for making smart grid operational decisions in real time. The research offers a practical and flexible approach for detecting FDI attacks, improving the security of cyber-physical systems, and facilitating the deployment of AI in energy infrastructure.
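SMOTE, used here for class balancing, synthesizes minority samples by interpolating between a minority point and one of its nearest minority neighbours. A minimal NumPy sketch of that interpolation step (not the paper's implementation; the four-point dataset is a toy example):

```python
import numpy as np

def smote(X_min, k=3, n_new=4, seed=0):
    """Minimal SMOTE: for each synthetic sample, pick a random minority
    point, pick one of its k nearest minority neighbours, and place a new
    point at a random position on the segment between them."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]      # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                    # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote(X_minority)
```

Because each synthetic point is a convex combination of two real minority points, the new samples stay inside the minority class's convex hull.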
Abstract: A network intrusion detection system is critical for cyber security against illegitimate attacks. From a feature perspective, network traffic may include a variety of elements such as attack reference, attack type, a subcategory of attack, host information, malicious scripts, etc. From a network perspective, network traffic may contain an imbalanced number of harmful attacks compared to normal traffic. It is challenging to identify a specific attack due to complex features and data imbalance issues. To address these issues, this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic (IDS-INT). IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data. First, detailed information about each type of attack is gathered from network interaction descriptions, which include network nodes, attack type, reference, host information, etc. Second, a transformer-based transfer learning approach is developed to learn detailed feature representations using their semantic anchors. Third, the Synthetic Minority Oversampling Technique (SMOTE) is implemented to balance abnormal traffic and detect minority attacks. Fourth, a Convolutional Neural Network (CNN) model is designed to extract deep features from the balanced network traffic. Finally, a hybrid CNN-Long Short-Term Memory (CNN-LSTM) model is developed to detect different types of attacks from the deep features. Detailed experiments are conducted to test the proposed approach using three standard datasets, i.e., UNSW-NB15, CIC-IDS2017, and NSL-KDD. An explainable AI approach is implemented to interpret the proposed method and develop a trustworthy model.
Funding: This work was supported in part by Central Queensland University Research Grant RSH5345 and by the Open Access Journal Scheme.
Abstract: With the increasing world population, the demand for food production has increased exponentially. An Internet of Things (IoT) based smart agriculture system can play a vital role in optimising crop yield by managing crop requirements in real time. Interpretability can be an important factor in making such systems trusted and easily adopted by farmers. In this paper, we propose a novel artificial intelligence-based agriculture system that uses IoT data to monitor the environment and alerts farmers to take the required actions for maintaining ideal conditions for crop production. The strength of the proposed system is its interpretability, which makes it easy for farmers to understand, trust, and use. The use of fuzzy logic makes the system customisable in terms of the types and number of sensors and the type of crop, and adaptable to any soil type and weather conditions. The proposed system can identify anomalous data due to security breaches or hardware malfunction using machine learning algorithms. To ensure the viability of the system, we conducted thorough research on agricultural factors such as soil type, soil moisture, soil temperature, plant life cycle, irrigation requirements, and water application timing for maize as our target crop. The experimental results show that our proposed system is interpretable, can detect anomalous data, and triggers actions accurately based on crop requirements.
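The abstract credits fuzzy logic for the system's interpretability. A hedged sketch of how such a rule base can work, with triangular membership functions and weighted-average defuzzification (the moisture ranges and rule outputs below are invented for illustration, not taken from the paper):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def irrigation_need(moisture_pct):
    """Evaluate two human-readable fuzzy rules and defuzzify:
       IF moisture is LOW      THEN irrigation need is HIGH (1.0)
       IF moisture is ADEQUATE THEN irrigation need is LOW  (0.1)"""
    low = tri(moisture_pct, 0.0, 10.0, 30.0)
    adequate = tri(moisture_pct, 20.0, 45.0, 70.0)
    total = low + adequate
    return (low * 1.0 + adequate * 0.1) / total if total else 0.0

need = irrigation_need(25.0)   # moisture is partly LOW, partly ADEQUATE
```

Each fired rule is directly readable by a farmer, which is the interpretability property the paper emphasizes.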
Funding: Supported by the National Basic Research (973) Program of China (Grant No. 2013CB430106), the R&D Special Fund for Public Welfare Industry (Meteorology) (Grant Nos. GYHY201306002 and GYHY201206005), the National Natural Science Foundation of China (Grant Nos. 40830958 and 41175087), the Jiangsu Collaborative Innovation Center for Climate Change, and the High Performance Computing Center of Nanjing University.
Abstract: Ensemble forecasting has become the prevailing method in current operational weather forecasting. Although ensemble mean forecast skill has been studied for many ensemble prediction systems (EPSs) and different cases, theoretical analysis regarding ensemble mean forecast skill has rarely been investigated, especially quantitative analysis without any assumptions about ensemble members. This paper investigates fundamental questions about the ensemble mean, such as the advantage of the ensemble mean over individual members, the potential skill of the ensemble mean, and the skill gain of the ensemble mean with increasing ensemble size. The average error coefficient between each pair of ensemble members is the most important factor in ensemble mean forecast skill; it determines the mean-square error of ensemble mean forecasts and the skill gain with increasing ensemble size. More members are useful if the errors of the members have lower correlations with each other, and vice versa. The theoretical investigation in this study is verified by application to the T213 EPS. A typical EPS has an average error coefficient of between 0.5 and 0.8; the 15-member T213 EPS used here reaches a saturation degree of 95% (i.e., a maximum 5% skill gain from adding new members with similar skill to the existing members) for 1-10-day lead time predictions, as far as the mean-square error is concerned.
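Under the common simplifying assumptions of equally skillful members with a uniform pairwise error correlation rho, the mean-square error of an m-member ensemble mean relative to a single member follows a standard identity, which also exhibits the saturation behavior described above: the ratio approaches rho as m grows, so extra members yield diminishing returns. The paper's own derivation may differ in detail; this is only an illustration:

```python
def ensemble_mean_mse_ratio(m, rho):
    """MSE of an m-member ensemble mean relative to a single member,
    assuming equally skillful members whose errors share a uniform
    pairwise correlation rho:  ratio = (1 + (m - 1) * rho) / m.
    As m -> infinity the ratio tends to rho (the irreducible part)."""
    return (1 + (m - 1) * rho) / m

# With rho = 0.65 (inside the typical 0.5-0.8 range quoted above),
# most of the achievable reduction is already realized by ~15 members.
ratios = {m: ensemble_mean_mse_ratio(m, 0.65) for m in (1, 5, 15, 50)}
```

Lower pairwise error correlation makes the ratio fall faster with m, matching the paper's statement that members with less correlated errors are more useful.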
Abstract: As more medical data become digitalized, machine learning is regarded as a promising tool for constructing medical decision support systems. Even with vast medical data volumes, machine learning is still not fully exploiting its potential because the data usually sit in data silos, and privacy and security regulations restrict their access and use. To address these issues, we built a secure and explainable machine learning framework, called explainable federated XGBoost (EXPERTS), which can share valuable information among different medical institutions to improve the learning results without sharing the patients' data. It also reveals how the machine makes a decision through eigenvalues to offer a more insightful answer to medical professionals. To study the performance, we evaluate our approach on real-world datasets, and our approach outperforms the benchmark algorithms under both federated learning and non-federated learning frameworks.
Abstract: The NICFED expert system, a rule-based "non-invasive cardiac function evaluation and cardiac diseases diagnosing expert system", is discussed in this paper. The system can be regarded both as an interpretation expert system and as a diagnostic expert system. When it is applied to evaluate cardiac function, it can explain more than one hundred parameters detected by the "MCA-Ⅲ cardiac function device of multi-domain and multi-dimension". With these parameters, the cardiac function in the time domain, fre-
基金funded by the Spanish Government Ministry of Economy and Competitiveness through the DEFINES Project Grant No. (TIN2016-80172-R)the Ministry of Science and Innovation through the AVisSA Project Grant No. (PID2020-118345RBI00)supported by the Spanish Ministry of Education and Vocational Training under an FPU Fellowship (FPU17/03276).
Abstract: The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating some challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with great responsibility, as an incomplete or unbalanced set of training data or an improper interpretation of the models' outcomes could result in misleading conclusions that ultimately could become very dangerous. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every user can count on this specific expertise; non-AI-expert users could also benefit from applying these powerful algorithms to their domain problems, but they need basic guidelines to get the most out of AI models. The goal of this work is to present a systematic review of the literature, analyzing studies whose outcomes are explainable rules and heuristics for selecting suitable AI algorithms given a set of input features. The systematic review follows the methodology proposed by Kitchenham and other authors in the field of software engineering. As a result, 9 papers that tackle AI algorithm recommendation through tangible and traceable rules and heuristics were collected. The small number of retrieved papers suggests a lack of reporting of explicit rules and heuristics when testing the suitability and performance of AI algorithms.
Abstract: Cyber-attacks on cyber-physical systems (CPSs) result in sensing and actuation misbehavior, severe damage to physical objects, and safety risks. Machine learning (ML) models have been presented to hinder cyberattacks in the CPS environment; however, the absence of labelled data from new attacks makes their detection quite challenging. An Intrusion Detection System (IDS) is commonly utilized to detect and classify intrusions in the CPS environment and acts as an important part of a secure CPS environment. Recent developments in deep learning (DL) and explainable artificial intelligence (XAI) stimulate new IDSs to manage cyberattacks with minimum complexity and high sophistication. In this respect, this paper presents an XAI-based IDS using feature selection with a Dirichlet Variational Autoencoder (XAIIDS-FSDVAE) model for CPSs. The proposed model encompasses a coyote optimization algorithm (COA)-based feature selection (FS) model, derived to select an optimal subset of features. Next, an intelligent Dirichlet Variational Autoencoder (DVAE) technique is employed for the anomaly detection process in the CPS environment. Finally, parameter optimization of the DVAE takes place using a manta ray foraging optimization (MRFO) model. To determine the enhanced intrusion detection efficiency of the XAIIDS-FSDVAE technique, a wide range of simulations were performed using benchmark datasets. The experimental results reported the better performance of the XAIIDS-FSDVAE technique over recent methods in terms of several evaluation parameters.
Funding: This research work was funded by Institutional Fund Projects under Grant No. IFPIP: 624-611-1443.
Abstract: In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks, called ubiquitous computing systems (UCS). A UCS necessitates heterogeneity, management levels, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study was conducted, and the outcomes were assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
Abstract: Clinical simulated experiment (CSE) is an intermediate experiment in the clinic using modern functional simulation. Traditional Chinese Medicine (TCM) has a long history with a unique system of medical theory that has not been standardized and cannot be fully explained by modern natural sciences. A CSE on the syndromic standards of pulmonary system diseases (SSOPSD) was carried out by following TCM's theo-
Abstract: In a convolutional neural network (CNN) classification model for diagnosing medical images, transparency and interpretability of the model's behavior are crucial in addition to high classification accuracy, and it is highly important to demonstrate them explicitly. In this study, we constructed an interpretable CNN-based model for breast density classification using spectral information from mammograms. We evaluated whether the model's prediction scores provided reliable probability values using a reliability diagram, and we visualized the basis for the final prediction. In constructing the classification model, we modified ResNet50 and introduced algorithms for extracting and inputting image spectra, visualizing network behavior, and quantifying prediction ambiguity. The experimental results demonstrated that our proposed model achieves not only high classification accuracy but also higher reliability and interpretability compared to conventional CNN models that use pixel information from images. Furthermore, our proposed model can detect misclassified data and indicate an explicit basis for prediction. The results demonstrated the effectiveness and usefulness of our proposed model from the perspective of credibility and transparency.
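A reliability diagram compares binned prediction confidence with empirical accuracy; the expected calibration error (ECE) summarizes the same comparison as a single number. A small sketch of that binning computation (a generic formulation with toy data, not the authors' exact evaluation code):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predicted probabilities, compare each bin's mean confidence
    with its empirical accuracy (the pairs a reliability diagram plots),
    and return the sample-weighted mean absolute gap (ECE)."""
    probs, labels = np.asarray(probs, float), np.asarray(labels, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # first bin is closed on the left so probability 0.0 is counted
        mask = (probs >= lo) & (probs <= hi) if i == 0 else (probs > lo) & (probs <= hi)
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece

p = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # toy prediction scores
y = [1, 1, 0, 0, 0, 0]               # toy ground-truth labels
ece = expected_calibration_error(p, y, n_bins=5)
```

An ECE near zero means the prediction scores behave like genuine probabilities, which is the property the reliability diagram in the study is checking.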
Abstract: E. Stone, in the article "18 Mysteries and Unanswered Questions About Our Solar System" (Little Astronomy), wrote: "One of the great things about astronomy is that there is still so much out there for us to discover. There are so many unanswered questions and mysteries about the universe. There is always a puzzle to solve and that is part of the beauty. Even in our own neighborhood, the Solar System, there are many questions we still have not been able to answer" [1]. In the present paper, we explain the majority of these mysteries and some other unexplained phenomena in the Solar System (SS) within the framework of the developed Hypersphere World-Universe Model (WUM) [2].
Funding: Supported by the National Natural Science Foundation of China (42377170, 42407212), the National Funded Postdoctoral Researcher Program (GZB20230606), the Postdoctoral Research Foundation of China (2024M752679), the Sichuan Natural Science Foundation (2025ZNSFSC1205), the National Key R&D Program of China (2022YFC3005704), and the Sichuan Province Science and Technology Support Program (2024NSFSC0100).
Abstract: Wildfires significantly disrupt the physical and hydrologic conditions of the environment, leading to vegetation loss and altered surface geo-material properties. These complex dynamics promote post-fire gully erosion, yet the key conditioning factors (e.g., topography, hydrology) remain insufficiently understood. This study proposes a novel artificial intelligence (AI) framework that integrates four machine learning (ML) models with the SHapley Additive exPlanations (SHAP) method, offering a hierarchical, global-to-local perspective on the dominant factors controlling gully distribution in wildfire-affected areas. In a case study of the Xiangjiao catchment, burned on March 28, 2020, in Muli County, Sichuan Province, Southwest China, we derived 21 geo-environmental factors to assess the susceptibility of post-fire gully erosion using logistic regression (LR), support vector machine (SVM), random forest (RF), and convolutional neural network (CNN) models. SHAP-based model interpretation revealed eight key conditioning factors: topographic position index (TPI), topographic wetness index (TWI), distance to stream, mean annual precipitation, differenced normalized burn ratio (dNBR), land use/cover, soil type, and distance to road. Comparative model evaluation demonstrated that reduced-variable models incorporating these dominant factors achieved accuracy comparable to that of the initial-variable models, with AUC values exceeding 0.868 across all ML algorithms. These findings provide critical insights into gully erosion behavior in wildfire-affected areas, supporting the decision-making process behind environmental management and hazard mitigation.
Funding: Financial support from the National Key Research and Development Program of China (2021YFB3501501), the National Natural Science Foundation of China (Nos. 22225803, 22038001, 22108007, and 22278011), the Beijing Natural Science Foundation (No. Z230023), and the Beijing Science and Technology Commission (No. Z211100004321001).
Abstract: The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO₂ cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, leading to high accuracy of up to 97% when the 75% quantile of the training set is used as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analyses to provide physical understanding. 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day using the model, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among reported MOFs, in good agreement with the prediction.
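The 75% quantile classification criterion mentioned above can be read as labeling a structure "promising" when its simulated performance exceeds the training set's 75th percentile. A minimal sketch of that labeling step (the yield numbers are illustrative, not from the study):

```python
import numpy as np

def quantile_labels(performance, q=0.75):
    """Binary labels for catalyst screening: 1 if a structure's simulated
    performance exceeds the q-quantile of the training set, else 0."""
    threshold = np.quantile(performance, q)
    return (np.asarray(performance) > threshold).astype(int), threshold

yields = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # hypothetical simulated yields
labels, thr = quantile_labels(yields)
```

Turning a continuous performance target into a quantile-based binary label is a common way to cast a screening task as classification; the cutoff choice (here 0.75) directly controls how selective the "promising" class is.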