Journal Articles
493 articles found
Influence and Local Influence for Explained Variation in Survival Analysis
1
Author: Refah Alotaibi. Journal of Mathematics and System Science, 2014, Issue 8, pp. 523-545 (23 pages)
The amount of explained variation R2 is an overall measure used to quantify the information in a model and especially how useful the model might be when predicting future observations. Explained variation is useful in guiding model choice for all types of predictive regression models, including linear and generalized linear models and survival analysis. In this work we consider how individual observations in a data set can influence the value of various R2 measures proposed for survival analysis, including local influence to assess mathematically the effect of small changes. We discuss methodologies for assessing influence on Graf et al.'s R2G measure, Harrell's C-index and Nagelkerke's R2N. The ideas are illustrated on data on 1391 patients diagnosed with Diffuse Large B-cell Lymphoma (DLBCL), a major subtype of Non-Hodgkin's Lymphoma (NHL).
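For reference, the two measures named above have standard closed forms; the notation below is ours for illustration and is not taken from the paper.

```latex
% Harrell's C-index: \hat{r}_i is the predicted risk, t_i the observed time,
% and d_i the event indicator for subject i. Only pairs in which the subject
% with the shorter time had an event are comparable.
C \;=\; \frac{\sum_{i \neq j} \mathbf{1}\{t_i < t_j\}\,\mathbf{1}\{\hat{r}_i > \hat{r}_j\}\, d_i}
             {\sum_{i \neq j} \mathbf{1}\{t_i < t_j\}\, d_i}

% Nagelkerke's R^2_N: L_0 is the likelihood of the null model, L_{\hat\beta}
% that of the fitted model, and n the sample size.
R^2_N \;=\; \frac{1 - \bigl(L_0 / L_{\hat\beta}\bigr)^{2/n}}{1 - L_0^{\,2/n}}
```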
Keywords: influence, local influence, C-index, explained variation
Resolution explained
2
《计算机光盘软件与应用(COMPUTER ARTS数码艺术)》, 2004, Issue 12, pp. 70-77 (8 pages)
When preparing digital images for output, you must master the easily confused relationships among pixels, dots, and lines.
Keywords: resolution, explained, digital images, image processing, computers
Mysteries of Solar System Explained by WUM
3
Author: Vladimir S. Netchitailo. Journal of High Energy Physics, Gravitation and Cosmology, 2023, Issue 3, pp. 775-799 (25 pages)
E. Stone, in the article "18 Mysteries and Unanswered Questions About Our Solar System. Little Astronomy", wrote: One of the great things about astronomy is that there is still so much out there for us to discover. There are so many unanswered questions and mysteries about the universe. There is always a puzzle to solve, and that is part of the beauty. Even in our own neighborhood, the Solar System, there are many questions we still have not been able to answer [1]. In the present paper, we explain the majority of these mysteries and some other unexplained phenomena in the Solar System (SS) within the framework of the developed Hypersphere World-Universe Model (WUM) [2].
Keywords: World-Universe Model, Solar System Formation, Structure of Solar System, Mysteries of Solar System Explained, Problems of Solar System
Module 6 Unexplained Mysteries of the Natural World: Follow-up Exercises (I)
4
《时代英语(高一版)》, 2022, Issue 2, pp. 66-75, 80 (11 pages)
Reading comprehension: One large dinosaur hid in the thick jungle (tropical rainforest). With small, hungry eyes he watched a larger dinosaur, about 90 feet long! It was eating grass by a lake. The one in the jungle stood 20 feet tall on his powerful back legs. His two front legs were short, with sharp claws on the feet. His teeth were like long knives.
Keywords: guided exercises, explained, sharp
Unveiling dominant factors for gully distribution in wildfire-affected areas using explainable AI: A case study of Xiangjiao catchment, Southwest China (cited 1 time)
5
Authors: ZHOU Ruichen, HU Xiewen, XI Chuanjie, HE Kun, DENG Lin, LUO Gang. Journal of Mountain Science, 2025, Issue 8, pp. 2765-2792 (28 pages)
Wildfires significantly disrupt the physical and hydrologic conditions of the environment, leading to vegetation loss and altered surface geo-material properties. These complex dynamics promote post-fire gully erosion, yet the key conditioning factors (e.g., topography, hydrology) remain insufficiently understood. This study proposes a novel artificial intelligence (AI) framework that integrates four machine learning (ML) models with the Shapley Additive Explanations (SHAP) method, offering a hierarchical perspective, from global to local, on the dominant factors controlling gully distribution in wildfire-affected areas. In a case study of the Xiangjiao catchment, burned on March 28, 2020, in Muli County, Sichuan Province, Southwest China, we derived 21 geo-environmental factors to assess the susceptibility of post-fire gully erosion using logistic regression (LR), support vector machine (SVM), random forest (RF), and convolutional neural network (CNN) models. SHAP-based model interpretation revealed eight key conditioning factors: topographic position index (TPI), topographic wetness index (TWI), distance to stream, mean annual precipitation, differenced normalized burn ratio (dNBR), land use/cover, soil type, and distance to road. Comparative model evaluation demonstrated that reduced-variable models incorporating these dominant factors achieved accuracy comparable to that of the initial-variable models, with AUC values exceeding 0.868 across all ML algorithms. These findings provide critical insights into gully erosion behavior in wildfire-affected areas, supporting the decision-making process behind environmental management and hazard mitigation.
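As an illustration of the SHAP-based factor ranking workflow this abstract describes, a minimal sketch follows; it is not the authors' code, and the file name and column names are hypothetical.

```python
# Illustrative sketch: rank conditioning factors for gully susceptibility with
# a random forest + SHAP. File path and column names are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("gully_samples.csv")           # hypothetical sample table
X = df.drop(columns=["gully"])                  # geo-environmental factors
y = df["gully"]                                 # 1 = gully cell, 0 = non-gully

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X_te)
# Older SHAP versions return a list per class; newer ones return one 3-D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]

importance = pd.Series(np.abs(sv_pos).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False).head(8))  # dominant factors
```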
Keywords: gully erosion susceptibility, explainable AI, wildfire, geo-environmental factor, machine learning
High-throughput screening of CO₂ cycloaddition MOF catalyst with an explainable machine learning model
6
Authors: Xuefeng Bai, Yi Li, Yabo Xie, Qiancheng Chen, Xin Zhang, Jian-Rong Li. Green Energy & Environment (SCIE, EI, CAS), 2025, Issue 1, pp. 132-138 (7 pages)
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since large MOF structure databases are available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO₂ cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to a high accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analysis to provide a degree of physical understanding. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top performance experimentally among reported MOFs, in good agreement with the prediction.
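The 75% quantile classification criterion mentioned above can be sketched as follows; this is illustrative only, the descriptor table and column names are hypothetical, and the choice of a gradient-boosting classifier is ours rather than the paper's.

```python
# Illustrative sketch of quantile-based labelling for catalyst screening.
# Descriptor columns and the target name ("yield") are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("mof_descriptors.csv")        # hypothetical descriptor table
threshold = data["yield"].quantile(0.75)         # 75% quantile of training set
data["label"] = (data["yield"] >= threshold).astype(int)  # 1 = promising MOF

X = data.drop(columns=["yield", "label"])
y = data["label"]

clf = GradientBoostingClassifier(random_state=0)  # classifier choice is illustrative
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```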
Keywords: metal-organic frameworks, high-throughput screening, machine learning, explainable model, CO₂ cycloaddition
Intrumer: A Multi Module Distributed Explainable IDS/IPS for Securing Cloud Environment
7
Authors: Nazreen Banu A, S.K.B. Sangeetha. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 579-607 (29 pages)
The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as IDS, have been developed to tackle these issues. However, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, and heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together within it. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and the Naive Bayes algorithm performs the classification of features. The selected features are classified further and are forwarded to the Heterogeneous Attention Transformer (HAT) module. In this module, the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to strengthen preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD were conducted to demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. These results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
Keywords: cloud computing, intrusion detection system, transformers, explainable artificial intelligence (XAI)
An AI-Enabled Framework for Transparency and Interpretability in Cardiovascular Disease Risk Prediction
8
Authors: Isha Kiran, Shahzad Ali, Sajawal ur Rehman Khan, Musaed Alhussein, Sheraz Aslam, Khursheed Aurangzeb. Computers, Materials & Continua, 2025, Issue 3, pp. 5057-5078 (22 pages)
Cardiovascular disease (CVD) remains a leading global health challenge due to its high mortality rate and the complexity of early diagnosis, driven by risk factors such as hypertension, high cholesterol, and irregular pulse rates. Traditional diagnostic methods often struggle with the nuanced interplay of these risk factors, making early detection difficult. In this research, we propose a novel artificial intelligence-enabled (AI-enabled) framework for CVD risk prediction that integrates machine learning (ML) with eXplainable AI (XAI) to provide both high-accuracy predictions and transparent, interpretable insights. Compared to existing studies that typically focus on either optimizing ML performance or using XAI separately for local or global explanations, our approach uniquely combines both local and global interpretability using Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). This dual integration enhances the interpretability of the model and helps clinicians comprehensively understand not just what the model predicts but also why those predictions are made, by identifying the contribution of different risk factors, which is crucial for transparent and informed decision-making in healthcare. The framework uses ML techniques such as K-nearest neighbors (KNN), gradient boosting, random forest, and decision tree, trained on a cardiovascular dataset. Additionally, the integration of LIME and SHAP provides patient-specific insights alongside global trends, ensuring that clinicians receive comprehensive and actionable information. Our experimental results achieve 98% accuracy with the Random Forest model, with precision, recall, and F1-scores of 97%, 98%, and 98%, respectively. The innovative combination of SHAP and LIME sets a new benchmark in CVD prediction by integrating advanced ML accuracy with robust interpretability, filling a critical gap in existing approaches. This framework paves the way for more explainable and transparent decision-making in healthcare, ensuring that the model is not only accurate but also trustworthy and actionable for clinicians.
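A minimal sketch of pairing a local LIME explanation with a global SHAP summary around a random-forest classifier, in the spirit of the framework described above; the dataset path, column names, and class labels are hypothetical.

```python
# Illustrative sketch: local LIME explanation + global SHAP summary for a
# random-forest CVD classifier. Feature names and file paths are hypothetical.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("cvd.csv")                      # hypothetical dataset
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Local, patient-specific explanation for one test record
lime_exp = LimeTabularExplainer(
    X_tr.values, feature_names=list(X.columns),
    class_names=["no CVD", "CVD"], mode="classification")
print(lime_exp.explain_instance(X_te.values[0], rf.predict_proba,
                                num_features=5).as_list())

# Global trends: mean |SHAP| per feature across the test set
sv = shap.TreeExplainer(rf).shap_values(X_te)
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
shap.summary_plot(sv_pos, X_te)
```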
Keywords: artificial intelligence, cardiovascular disease (CVD), explainability, eXplainable AI (XAI), interpretability, LIME, machine learning (ML), SHAP
Explainable artificial intelligence and ensemble learning for hepatocellular carcinoma classification: State of the art, performance, and clinical implications
9
Authors: Sami Akbulut, Cemil Colak. World Journal of Hepatology, 2025, Issue 11, pp. 11-25 (15 pages)
Hepatocellular carcinoma (HCC) remains a leading cause of cancer-related mortality globally, necessitating advanced diagnostic tools to improve early detection and personalized targeted therapy. This review synthesizes evidence on explainable ensemble learning approaches for HCC classification, emphasizing their integration with clinical workflows and multi-omics data. A systematic analysis [including datasets such as The Cancer Genome Atlas, Gene Expression Omnibus, and the Surveillance, Epidemiology, and End Results (SEER) datasets] revealed that explainable ensemble learning models achieve high diagnostic accuracy by combining clinical features, serum biomarkers such as alpha-fetoprotein, imaging features such as computed tomography and magnetic resonance imaging, and genomic data. For instance, SHapley Additive exPlanations (SHAP)-based random forests trained on NCBI GSE14520 microarray data (n = 445) achieved 96.53% accuracy, while stacking ensembles applied to the SEER program data (n = 1897) demonstrated an area under the receiver operating characteristic curve of 0.779 for mortality prediction. Despite promising results, challenges persist, including the computational costs of SHAP and local interpretable model-agnostic explanations analyses (e.g., TreeSHAP requiring distributed computing for metabolomics datasets) and dataset biases (e.g., SEER's Western population dominance limiting generalizability). Future research must address inter-cohort heterogeneity, standardize explainability metrics, and prioritize lightweight surrogate models for resource-limited settings. This review presents the potential of explainable ensemble learning frameworks to bridge the gap between predictive accuracy and clinical interpretability, though rigorous validation in independent, multi-center cohorts is critical for real-world deployment.
Keywords: hepatocellular carcinoma, artificial intelligence, explainable artificial intelligence, ensemble learning, explainable ensemble learning
Explaining machine learning models trained to predict Copernicus DEM errors in different land cover environments
10
Authors: Michael Meadows, Karin Reinke, Simon Jones. Artificial Intelligence in Geosciences, 2025, Issue 2, pp. 113-130 (18 pages)
Machine learning models are increasingly used to correct the vertical biases (mainly due to vegetation and buildings) in global Digital Elevation Models (DEMs), for downstream applications which need "bare earth" elevations. The predictive accuracy of these models has improved significantly as more flexible model architectures are developed and new explanatory datasets produced, leading to the recent release of three model-corrected DEMs (FABDEM, DiluviumDEM and FathomDEM). However, there has been relatively little focus so far on explaining or interrogating these models, which is especially important in this context given their downstream impact on many other applications (including natural hazard simulations). In this study we train five separate models (by land cover environment) to correct vertical biases in the Copernicus DEM and then explain them using SHapley Additive exPlanation (SHAP) values. Comparing the models, we find significant variation in terms of the specific input variables selected and their relative importance, suggesting that an ensemble of models (specialising by land cover) is likely preferable to a general model applied everywhere. Visualising the patterns learned by the models (using SHAP dependence plots) provides further insights, building confidence in some cases (where patterns are consistent with domain knowledge and past studies) and highlighting potentially problematic variables in others (such as proxy relationships which may not apply in new application sites). Our results have implications for future DEM error prediction studies, particularly in evaluating a very wide range of potential input variables (160 candidates) drawn from topographic, multispectral, Synthetic Aperture Radar, vegetation, climate and urbanisation datasets.
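A SHAP dependence plot of the kind mentioned above can be produced roughly as follows; this is a sketch under assumed variable names such as "canopy_height" and "ndvi", and the regressor choice is ours, not the study's.

```python
# Illustrative sketch: per-land-cover DEM error model + SHAP dependence plot.
# Variable names ("canopy_height", "ndvi") and file paths are hypothetical.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("forest_cells.csv")            # one land-cover environment
X = df.drop(columns=["dem_error"])              # candidate input variables
y = df["dem_error"]                             # Copernicus DEM minus reference

model = GradientBoostingRegressor(random_state=0).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)   # regression: (n_samples, n_features)

# How does the learned correction respond to canopy height, and does it
# interact with NDVI? Consistency with domain knowledge builds confidence.
shap.dependence_plot("canopy_height", sv, X, interaction_index="ndvi")
```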
Keywords: topography, explainability, interpretability, XAI, SHAP, ensemble
X-OODM: Leveraging Explainable Object-Oriented Design Methodology for Multi-Domain Sentiment Analysis
11
Authors: Abqa Javed, Muhammad Shoaib, Abdul Jaleel, Mohamed Deriche, Sharjeel Nawaz. Computers, Materials & Continua, 2025, Issue 3, pp. 4977-4994 (18 pages)
Incorporating explainability features in decision-making web-based systems is considered a primary concern for enhancing accountability, transparency, and trust in the community. Multi-domain sentiment analysis is a significant web-based application in which the explainability feature is essential for achieving user satisfaction. Conventional design methodologies such as the object-oriented design methodology (OODM) have been proposed for web-based application development, facilitating code reuse, quantification, and security at the design level. However, OODM does not provide the feature of explainability in web-based decision-making systems. X-OODM modifies OODM with added explainable models to introduce the explainability feature for such systems. This research introduces an explainable model leveraging X-OODM for designing transparent applications for multi-domain sentiment analysis. The proposed design is evaluated using the design quality metrics defined for the evaluation of the X-OODM explainable model under user context. The design quality metrics of transferability, simulatability, informativeness, and decomposability were introduced successively into the evaluation of the X-OODM user context. Auxiliary metrics of accessibility and algorithmic transparency were added to increase the degree of explainability of the design. The study results reveal that introducing such explainability parameters with X-OODM appropriately increases system transparency, trustworthiness, and user understanding. The experimental results validate the enhancement of decision-making for multi-domain sentiment analysis when explainability is integrated at the design level. Future work can extend this research by applying the proposed X-OODM framework to different datasets and sentiment analysis applications to further scrutinize its effectiveness in real-world scenarios.
Keywords: measurable explainable web-based application, object-oriented design, sentiment analysis, multi-domain
Explainable AI for epileptic seizure detection in Internet of Medical Things
12
Authors: Faiq Ahmad Khan, Zainab Umar, Alireza Jolfaei, Muhammad Tariq. Digital Communications and Networks, 2025, Issue 3, pp. 587-593 (7 pages)
In the field of precision healthcare, where accurate decision-making is paramount, this study underscores the indispensability of eXplainable Artificial Intelligence (XAI) in the context of epilepsy management within the Internet of Medical Things (IoMT). The methodology entails meticulous preprocessing, involving the application of a band-pass filter and epoch segmentation to optimize the quality of electroencephalograph (EEG) data. The subsequent extraction of statistical features facilitates the differentiation between seizure and non-seizure patterns. The classification phase integrates Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest classifiers. Notably, SVM attains an accuracy of 97.26%, excelling in precision, recall, specificity, and F1 score for identifying seizure and non-seizure instances. Conversely, KNN achieves an accuracy of 72.69%, accompanied by certain trade-offs. The Random Forest classifier stands out with a remarkable accuracy of 99.89%, coupled with exceptional precision (99.73%), recall (100%), specificity (99.80%), and F1 score (99.86%), surpassing both SVM and KNN performances. XAI techniques, namely Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanation (SHAP), enhance the system's transparency. This combination of machine learning and XAI not only improves the reliability and accuracy of the seizure detection system but also enhances trust and interpretability. Healthcare professionals can leverage the identified important features and their dependencies to gain deeper insights into the decision-making process, aiding in informed diagnosis and treatment decisions for patients with epilepsy.
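The preprocessing pipeline described above (band-pass filtering, epoch segmentation, statistical features) can be sketched as follows; the sampling rate, band edges, and epoch length are assumptions, not values taken from the paper.

```python
# Illustrative sketch: band-pass filter one EEG channel, cut it into fixed-length
# epochs, and compute simple statistical features per epoch.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import kurtosis, skew

fs = 256                                        # assumed EEG sampling rate (Hz)
b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)

def epoch_features(signal, epoch_sec=2):
    """Filter a 1-D EEG channel and return one feature row per epoch."""
    filtered = filtfilt(b, a, signal)
    step = epoch_sec * fs
    rows = []
    for start in range(0, len(filtered) - step + 1, step):
        e = filtered[start:start + step]
        rows.append([e.mean(), e.std(), skew(e), kurtosis(e),
                     np.sqrt(np.mean(e ** 2))])  # RMS energy
    return np.array(rows)  # feed to SVM / KNN / random forest classifiers
```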
Keywords: epileptic seizure, epilepsy, EEG, explainable AI, machine learning
A transformer-based model for predicting and analyzing light olefin yields in methanol-to-olefins process
13
Authors: Yuping Luo, Wenyang Wang, Yuyan Zhang, Muxin Chen, Peng Shao. Chinese Journal of Chemical Engineering, 2025, Issue 7, pp. 266-276 (11 pages)
This study introduces an innovative computational framework leveraging the transformer architecture to address a critical challenge in chemical process engineering: predicting and optimizing light olefin yields in industrial methanol-to-olefins (MTO) processes. Our approach integrates advanced machine learning techniques with chemical engineering principles to tackle the complexities of non-stationary, highly volatile production data in large-scale chemical manufacturing. The framework employs the maximal information coefficient (MIC) algorithm to analyze and select the significant variables from MTO process parameters, forming a robust dataset for model development. We implement a transformer-based time series forecasting model, enhanced through positional encoding and hyperparameter optimization, significantly improving predictive accuracy for ethylene and propylene yields. The model's interpretability is augmented by applying SHapley Additive exPlanations (SHAP) to quantify and visualize the impact of reaction control variables on olefin yields, providing valuable insights for process optimization. Experimental results demonstrate that our model outperforms traditional statistical and machine learning methods in accuracy and interpretability, effectively handling nonlinear, non-stationary, high-volatility, and long-sequence data challenges in olefin yield prediction. This research contributes to chemical engineering by providing a novel computerized methodology for solving complex production optimization problems in the chemical industry, offering significant potential for enhancing decision-making in MTO system production control and fostering the intelligent transformation of manufacturing processes.
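As a small illustration of the positional-encoding component mentioned above, the standard sinusoidal form is shown below; the paper may use a different variant, and the dimensions are illustrative.

```python
# Standard sinusoidal positional encoding, as commonly added to transformer
# inputs for time-series forecasting. Dimensions here are illustrative.
import numpy as np

def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]                       # (seq_len, 1)
    i = np.arange(d_model)[None, :]                         # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])                    # even dimensions
    pe[:, 1::2] = np.cos(angle[:, 1::2])                    # odd dimensions
    return pe

# e.g., encode a window of 96 hourly process measurements embedded in 64 dims
pe = positional_encoding(seq_len=96, d_model=64)
```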
Keywords: methanol-to-olefins, transformer, explainable AI, mathematical modeling, model-predictive control, numerical analysis
Interpretable Federated Learning Model for Cyber Intrusion Detection in Smart Cities with Privacy-Preserving Feature Selection
14
Authors: Muhammad Sajid Farooq, Muhammad Saleem, M.A. Khan, Muhammad Farrukh Khan, Shahan Yamin Siddiqui, Muhammad Shoukat Aslam Khan, M. Adnan. Computers, Materials & Continua, 2025, Issue 12, pp. 5183-5206 (24 pages)
The rapid evolution of smart cities through IoT, cloud computing, and connected infrastructures has significantly enhanced sectors such as transportation, healthcare, energy, and public safety, but has also increased exposure to sophisticated cyber threats. The diversity of devices, high data volumes, and real-time operational demands complicate security, requiring not just robust intrusion detection but also effective feature selection for relevance and scalability. Traditional Machine Learning (ML) based Intrusion Detection Systems (IDS) improve detection but often lack interpretability, limiting stakeholder trust and timely responses. Moreover, centralized feature selection in conventional IDS compromises data privacy and fails to accommodate the decentralized nature of smart city infrastructures. To address these limitations, this research introduces an interpretable Federated Learning (FL) based cyber intrusion detection model tailored for smart city applications. The proposed system leverages privacy-preserving feature selection, where each client node independently identifies top-ranked features using ML models integrated with SHAP-based explainability. These local feature subsets are then aggregated at a central server to construct a global model without compromising sensitive data. Furthermore, the global model is enhanced with Explainable AI (XAI) techniques such as SHAP and LIME, offering both global interpretability and instance-level transparency for cyber threat decisions. Experimental results demonstrate that the proposed global model achieves a high detection accuracy of 98.51%, with a significantly low miss rate of 1.49%, outperforming existing models while ensuring explainability, privacy, and scalability across smart city infrastructures.
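A minimal sketch of the privacy-preserving feature-selection step described above: each client shares only a locally computed feature ranking (e.g., by mean |SHAP|), and the server aggregates rankings rather than raw data. The Borda-style scoring rule below is an illustrative choice, not necessarily the paper's.

```python
# Illustrative sketch: clients send top-k feature names only; the server merges
# the rankings into a global feature subset. All names/scores are hypothetical.
from collections import Counter

def local_top_k(importance, k=10):
    """importance: dict {feature_name: local mean |SHAP|}; returns top-k names."""
    return [f for f, _ in sorted(importance.items(),
                                 key=lambda kv: kv[1], reverse=True)[:k]]

def server_aggregate(client_rankings, k=10):
    """Combine per-client top-k lists; no raw traffic data leaves a client."""
    score = Counter()
    for ranking in client_rankings:
        for rank, feature in enumerate(ranking):
            score[feature] += len(ranking) - rank      # Borda-style points
    return [f for f, _ in score.most_common(k)]        # global feature subset
```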
Keywords: explainable AI, SHAP, LIME, federated learning, feature selection
Research Trends and Networks in Self-Explaining Autonomous Systems: A Bibliometric Study
15
Authors: Oscar Peña-Cáceres, Elvis Garay-Silupu, Darwin Aguilar-Chuquizuta, Henry Silva-Marchan. Computers, Materials & Continua, 2025, Issue 8, pp. 2151-2188 (38 pages)
Self-Explaining Autonomous Systems (SEAS) have emerged as a strategic frontier within Artificial Intelligence (AI), responding to growing demands for transparency and interpretability in autonomous decision-making. This study presents a comprehensive bibliometric analysis of SEAS research published between 2020 and February 2025, drawing upon 1380 documents indexed in Scopus. The analysis applies co-citation mapping, keyword co-occurrence, and author collaboration networks using VOSviewer, MASHA, and Python to examine scientific production, intellectual structure, and global collaboration patterns. The results indicate a sustained annual growth rate of 41.38%, with an h-index of 57 and an average of 21.97 citations per document. A normalized citation rate was computed to address temporal bias, enabling balanced evaluation across publication cohorts. Thematic analysis reveals four consolidated research fronts: interpretability in machine learning, explainability in deep neural networks, transparency in generative models, and optimization strategies in autonomous control. Author co-citation analysis identifies four distinct research communities, and keyword evolution shows growing interdisciplinary links with medicine, cybersecurity, and industrial automation. The United States leads in scientific output and citation impact at the geographical level, while countries like India and China show high productivity with varied influence. However, international collaboration remains limited at 7.39%, reflecting a fragmented research landscape. As discussed in this study, SEAS research is expanding rapidly yet remains epistemologically dispersed, with uneven integration of ethical and human-centered perspectives. This work offers a structured and data-driven perspective on SEAS development, highlights key contributors and thematic trends, and outlines critical directions for advancing responsible and transparent autonomous systems.
Keywords: self-explaining autonomous systems, explainable AI, machine learning, deep learning, artificial intelligence
An Explainable Autoencoder-Based Feature Extraction Combined with CNN-LSTM-PSO Model for Improved Predictive Maintenance
16
Author: Ishaani Priyadarshini. Computers, Materials & Continua, 2025, Issue 4, pp. 635-659 (25 pages)
Predictive maintenance plays a crucial role in preventing equipment failures and minimizing operational downtime in modern industries. However, traditional predictive maintenance methods often face challenges in adapting to diverse industrial environments and ensuring the transparency and fairness of their predictions. This paper presents a novel predictive maintenance framework that integrates deep learning and optimization techniques while addressing key ethical considerations, such as transparency, fairness, and explainability, in artificial intelligence-driven decision-making. The framework employs an Autoencoder for feature reduction, a Convolutional Neural Network for pattern recognition, and a Long Short-Term Memory network for temporal analysis. To enhance transparency, the decision-making process of the framework is made interpretable, allowing stakeholders to understand and trust the model's predictions. Additionally, Particle Swarm Optimization is used to refine hyperparameters for optimal performance and mitigate potential biases in the model. Experiments are conducted on multiple datasets from different industrial scenarios, with performance validated using accuracy, precision, recall, F1-score, and training time metrics. The results demonstrate an impressive accuracy of up to 99.92% and 99.45% across different datasets, highlighting the framework's effectiveness in enhancing predictive maintenance strategies. Furthermore, the model's explainability ensures that decisions can be audited for fairness and accountability, aligning with ethical standards for critical systems. By addressing transparency and reducing potential biases, this framework contributes to the responsible and trustworthy deployment of artificial intelligence in industrial environments, particularly in safety-critical applications. The results underscore its potential for wide application across various industrial contexts, enhancing both performance and ethical decision-making.
Keywords: explainability, feature reduction, predictive maintenance, optimization
Towards Fault Diagnosis Interpretability: Gradient Boosting Framework for Vibration-Based Detection of Experimental Gear Failures
17
Authors: Auday Shaker Hadi, Luttfi A. Al-Haddad. Journal of Dynamics, Monitoring and Diagnostics, 2025, Issue 3, pp. 160-169 (10 pages)
Accurate and interpretable fault diagnosis in industrial gear systems is essential for ensuring safety, reliability, and predictive maintenance. This study presents an intelligent diagnostic framework utilizing Gradient Boosting (GB) for fault detection in gear systems, applied to the Aalto Gear Fault Dataset, which features a wide range of synthetic and realistic gear failure modes under varied operating conditions. The dataset was preprocessed and analyzed using an ensemble GB classifier, yielding high performance across multiple metrics: accuracy of 96.77%, precision of 95.44%, recall of 97.11%, and an F1-score of 96.22%. To enhance trust in model predictions, the study integrates an explainable AI (XAI) framework using SHAP (SHapley Additive exPlanations) to visualize feature contributions and support diagnostic transparency. A flowchart-based architecture is proposed to guide real-world deployment of interpretable fault detection pipelines. The results demonstrate the feasibility of combining predictive performance with interpretability, offering a robust approach for condition monitoring in safety-critical systems.
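A compact sketch of training a gradient-boosting fault classifier and computing the four metrics reported above; file and column names are hypothetical, and the Aalto dataset's actual format may differ.

```python
# Illustrative sketch: gradient-boosting fault classifier evaluated with
# accuracy, precision, recall, and F1. Inputs would be vibration-signal
# statistics; the CSV path and "fault" label column are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

df = pd.read_csv("gear_vibration_features.csv")
X, y = df.drop(columns=["fault"]), df["fault"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = gb.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro"))
print("recall   :", recall_score(y_te, pred, average="macro"))
print("F1-score :", f1_score(y_te, pred, average="macro"))
```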
Keywords: explainable AI, gears, gradient boosting, vibration signals
Explainable AI Based Multi-Task Learning Method for Stroke Prognosis
18
Authors: Nan Ding, Xingyu Zeng, Jianping Wu, Liutao Zhao. Computers, Materials & Continua, 2025, Issue 9, pp. 5299-5315 (17 pages)
Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients' health status have made significant progress, limitations remain in terms of prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on Explainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that exploits the correlation among the tasks of predicting various health status indicators in patients, enabling the parallel prediction of multiple health indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance the performance of multi-task AUC optimization. Additionally, the model's explainability is analyzed through the stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms on the key prognostic metrics of F1 score and efficiency.
Keywords: explainable AI, stroke prognosis, multi-task learning, AUC optimization
Deciphering influential features in the seismic catalog for large earthquake occurrence from a machine learning perspective
19
Authors: Jinsu Jang, Byung-Dal So, David A. Yuen, Sung-Joon Chang. Artificial Intelligence in Geosciences, 2025, Issue 2, pp. 334-347 (14 pages)
The spatiotemporal distribution and magnitude of seismicity collected over decades are crucial for understanding the stress interactions underlying large earthquakes. In this study, machine learning (ML) explainers identify and rank the features that distinguish Large Earthquake Occurrence (LEO) from non-LEO spatiotemporal windows. Seventy-eight statistics related to time, latitude, longitude, depth, and magnitude were extracted from the earthquake catalog (Global Centroid Moment Tensor) to produce 202,706 spatiotemporally discretized windows. ML explainers trained on these windows revealed the maximum magnitude (Mmax) as the most influential feature. Classification performance improved when the maximum inter-event time, the average inter-event time, and the minimum ratio of focal depth to magnitude were jointly trained with Mmax. The top five features showed weak-to-moderate correlations, providing complementary information to the ML explainers. Our explainable ML framework can be extended to different earthquake catalogs, including those with focal mechanisms and small-magnitude events.
Keywords: earthquake catalog, explainable machine learning, feature importance, XGBoost classifiers, SHAP values
Differential Privacy Integrated Federated Learning for Power Systems: An Explainability-Driven Approach
20
Authors: Zekun Liu, Junwei Ma, Xin Gong, Xiu Liu, Bingbing Liu, Long An. Computers, Materials & Continua, 2025, Issue 10, pp. 983-999 (17 pages)
With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to efficiently utilize this data while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) We propose an explainability-driven power data privacy federated learning framework. (2) We detail a privacy budget allocation strategy: assigning budgets per training round by gradient effectiveness and at model granularity by layer importance. (3) We design a weighted aggregation strategy that considers the SHAP value and model accuracy for quality knowledge sharing. (4) Experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
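The layer-importance-weighted noise idea in point (2) can be sketched as follows. This is an illustrative heuristic only: it omits formal (epsilon, delta) accounting, and all names, scales, and the inverse-importance rule are assumptions rather than the paper's method.

```python
# Illustrative sketch: layers judged more important (e.g., via SHAP-based
# analysis) get a larger share of the budget and therefore less noise.
import numpy as np

def add_layerwise_noise(gradients, importance, total_sigma=1.0, clip=1.0):
    """gradients: {layer: ndarray}; importance: {layer: score, higher = cleaner}."""
    total = sum(importance.values())
    noisy = {}
    for layer, grad in gradients.items():
        # Clip the per-layer gradient norm, then scale Gaussian noise
        # inversely to the layer's importance share.
        norm = np.linalg.norm(grad)
        clipped = grad * min(1.0, clip / (norm + 1e-12))
        sigma = total_sigma * (1.0 - importance[layer] / total)
        noisy[layer] = clipped + np.random.normal(0.0, sigma * clip, grad.shape)
    return noisy
```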
Keywords: power data, federated learning, differential privacy, explainability