Journal Articles
466 articles found
Influence and Local Influence for Explained Variation in Survival Analysis
1
Author: Refah Alotaibi · Journal of Mathematics and System Science, 2014, Issue 8, pp. 523-545 (23 pages)
The amount of explained variation R² is an overall measure used to quantify the information in a model, and especially how useful the model might be when predicting future observations. Explained variation is useful in guiding model choice for all types of predictive regression models, including linear and generalized linear models and survival analysis. In this work we consider how individual observations in a data set can influence the value of various R² measures proposed for survival analysis, including local influence to assess mathematically the effect of small changes. We discuss methodologies for assessing influence on Graf et al.'s R²G measure, Harrell's C-index and Nagelkerke's R²N. The ideas are illustrated on data on 1391 patients diagnosed with Diffuse Large B-cell Lymphoma (DLBCL), a major subtype of Non-Hodgkin's Lymphoma (NHL).
Keywords: influence; local influence; C-index; explained variation
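The case-deletion influence idea described above can be made concrete with a small experiment. The sketch below (synthetic survival data, not the paper's DLBCL cohort; `lifelines` assumed available) recomputes Harrell's C-index with each observation removed in turn and reports the most influential case.

```python
# A minimal sketch (not the authors' code) of a case-deletion influence diagnostic
# for Harrell's C-index, on synthetic survival data.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 200
risk_score = rng.normal(size=n)                     # hypothetical model risk scores
times = rng.exponential(scale=np.exp(-risk_score))  # higher risk -> shorter survival
events = rng.uniform(size=n) < 0.8                  # ~20% right-censoring

# lifelines expects higher scores to mean longer survival, so pass -risk_score.
c_full = concordance_index(times, -risk_score, events)

# Influence of observation i = change in C when i is deleted.
influence = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    influence[i] = c_full - concordance_index(times[keep], -risk_score[keep], events[keep])

print(f"C-index = {c_full:.3f}; most influential case: {np.abs(influence).argmax()}")
```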
Resolution explained
2
计算机光盘软件与应用 (COMPUTER ARTS数码艺术), 2004, Issue 12, pp. 70-77 (8 pages)
When preparing digital images for output, you must master the easily confused relationships among pixels, dots, and lines.
Keywords: resolution; explained; digital imagery; image processing; computers
Mysteries of Solar System Explained by WUM
3
Author: Vladimir S. Netchitailo · Journal of High Energy Physics, Gravitation and Cosmology, 2023, Issue 3, pp. 775-799 (25 pages)
E. Stone, in the article "18 Mysteries and Unanswered Questions About Our Solar System" (Little Astronomy), wrote: One of the great things about astronomy is that there is still so much out there for us to discover. There are so many unanswered questions and mysteries about the universe. There is always a puzzle to solve, and that is part of the beauty. Even in our own neighborhood, the Solar System, there are many questions we still have not been able to answer [1]. In the present paper, we explain the majority of these mysteries and some other unexplained phenomena in the Solar System (SS) within the framework of the developed Hypersphere World-Universe Model (WUM) [2].
Keywords: World-Universe Model; Solar System Formation; Structure of Solar System; Mysteries of Solar System Explained; Problems of Solar System
High-throughput screening of CO₂ cycloaddition MOF catalyst with an explainable machine learning model
4
Authors: Xuefeng Bai, Yi Li, Yabo Xie, Qiancheng Chen, Xin Zhang, Jian-Rong Li · Green Energy & Environment (SCIE, EI, CAS), 2025, Issue 1, pp. 132-138 (7 pages)
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO₂ cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to high accuracy up to 97% with the 75% quantile of the training set as the classification criterion. The feature contribution was further evaluated with SHAP and PDP analysis to provide a certain physical understanding. 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated under 100 °C and 1 bar within one day using the model, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top performance experimentally among reported MOFs, in good agreement with the prediction.
Keywords: metal-organic frameworks; high-throughput screening; machine learning; explainable model; CO₂ cycloaddition
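As a rough illustration of the screening recipe, the hedged sketch below labels synthetic "MOFs" as efficient when a stand-in yield exceeds the 75% quantile, trains a classifier, and ranks descriptor contributions with SHAP. All descriptor names and data here are invented for illustration, not the paper's mechanism-derived features.

```python
# Hedged sketch of quantile-thresholded catalyst classification plus SHAP ranking.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(500, 4)),
                 columns=["pore_diameter", "surface_area", "metal_electroneg", "defect_density"])
yield_ = X["pore_diameter"] * 0.8 + X["metal_electroneg"] * 0.5 + rng.normal(0, 1.0, 500)

# 75% quantile of the (synthetic) yield as the efficient/inefficient boundary.
y = (yield_ > np.quantile(yield_, 0.75)).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
sv = shap.TreeExplainer(clf).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]   # SHAP values for the positive class

# Mean |SHAP| per descriptor = global feature contribution ranking.
print(pd.Series(np.abs(sv).mean(axis=0), index=X.columns).sort_values(ascending=False))
```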
Intrumer: A Multi Module Distributed Explainable IDS/IPS for Securing Cloud Environment
5
Authors: Nazreen Banu A, S.K.B. Sangeetha · Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 579-607 (29 pages)
The increasing use of cloud-based devices has reached the critical point of cybersecurity and unwanted network traffic. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as IDS, have been developed to tackle these issues. However, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, and heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together within it. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and the Naïve Bayes algorithm performs the classification of features. The selected features are classified further and forwarded to the Heterogeneous Attention Transformer (HAT) module. In this module, the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to enhance preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD were conducted to demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
Keywords: cloud computing; intrusion detection system; transformers; explainable artificial intelligence (XAI)
An AI-Enabled Framework for Transparency and Interpretability in Cardiovascular Disease Risk Prediction
6
Authors: Isha Kiran, Shahzad Ali, Sajawal ur Rehman Khan, Musaed Alhussein, Sheraz Aslam, Khursheed Aurangzeb · Computers, Materials & Continua, 2025, Issue 3, pp. 5057-5078 (22 pages)
Cardiovascular disease (CVD) remains a leading global health challenge due to its high mortality rate and the complexity of early diagnosis, driven by risk factors such as hypertension, high cholesterol, and irregular pulse rates. Traditional diagnostic methods often struggle with the nuanced interplay of these risk factors, making early detection difficult. In this research, we propose a novel artificial intelligence-enabled (AI-enabled) framework for CVD risk prediction that integrates machine learning (ML) with eXplainable AI (XAI) to provide both high-accuracy predictions and transparent, interpretable insights. Compared to existing studies that typically focus on either optimizing ML performance or using XAI separately for local or global explanations, our approach uniquely combines both local and global interpretability using Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). This dual integration enhances the interpretability of the model and helps clinicians comprehensively understand not just what the model predicts but also why those predictions are made, by identifying the contribution of different risk factors, which is crucial for transparent and informed decision-making in healthcare. The framework uses ML techniques such as K-nearest neighbors (KNN), gradient boosting, random forest, and decision tree, trained on a cardiovascular dataset. Additionally, the integration of LIME and SHAP provides patient-specific insights alongside global trends, ensuring that clinicians receive comprehensive and actionable information. Our experimental results achieve 98% accuracy with the random forest model, with precision, recall, and F1-scores of 97%, 98%, and 98%, respectively. The innovative combination of SHAP and LIME sets a new benchmark in CVD prediction by integrating advanced ML accuracy with robust interpretability, filling a critical gap in existing approaches. This framework paves the way for more explainable and transparent decision-making in healthcare, ensuring that the model is not only accurate but also trustworthy and actionable for clinicians.
Keywords: artificial intelligence; cardiovascular disease (CVD); explainability; eXplainable AI (XAI); interpretability; LIME; machine learning (ML); SHAP
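The dual local/global pattern the abstract describes can be sketched in a few lines. Below, SHAP gives cohort-wide feature importances and LIME explains a single patient; the feature names and data are hypothetical stand-ins, not the paper's cardiovascular dataset.

```python
# Sketch of combined global (SHAP) and local (LIME) explanation on a random forest.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
feature_names = ["age", "systolic_bp", "cholesterol", "pulse_irregularity"]  # hypothetical
X = rng.normal(size=(400, 4))
y = ((X[:, 1] + X[:, 2]) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global view: mean |SHAP| per feature across the cohort.
sv = shap.TreeExplainer(model).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]
print(dict(zip(feature_names, np.abs(sv).mean(axis=0).round(3))))

# Local view: why the model flags patient 0.
lime_exp = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
print(lime_exp.explain_instance(X[0], model.predict_proba, num_features=4).as_list())
```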
X-OODM: Leveraging Explainable Object-Oriented Design Methodology for Multi-Domain Sentiment Analysis
7
Authors: Abqa Javed, Muhammad Shoaib, Abdul Jaleel, Mohamed Deriche, Sharjeel Nawaz · Computers, Materials & Continua, 2025, Issue 3, pp. 4977-4994 (18 pages)
Incorporation of explainability features in decision-making web-based systems is considered a primary concern to enhance accountability, transparency, and trust in the community. Multi-domain sentiment analysis is a significant web-based system where the explainability feature is essential for achieving user satisfaction. Conventional design methodologies such as object-oriented design methodology (OODM) have been proposed for web-based application development, which facilitates code reuse, quantification, and security at the design level. However, OODM did not provide the feature of explainability in web-based decision-making systems. X-OODM modifies OODM with added explainable models to introduce the explainability feature for such systems. This research introduces an explainable model leveraging X-OODM for designing transparent applications for multi-domain sentiment analysis. The proposed design is evaluated using the design quality metrics defined for the evaluation of the X-OODM explainable model under user context. The design quality metrics transferability, simulatability, informativeness, and decomposability were introduced one after another over time to the evaluation of the X-OODM user context. Auxiliary metrics of accessibility and algorithmic transparency were added to increase the degree of explainability of the design. The study results reveal that introducing such explainability parameters with X-OODM appropriately increases system transparency, trustworthiness, and user understanding. The experimental results validate the enhancement of decision-making for multi-domain sentiment analysis with integration of explainability at the design level. Future work can extend the proposed X-OODM framework to different datasets and sentiment analysis applications to further scrutinize its effectiveness in real-world scenarios.
Keywords: measurable explainable web-based application; object-oriented design; sentiment analysis; multi-domain
Explainable AI for epileptic seizure detection in Internet of Medical Things
8
Authors: Faiq Ahmad Khan, Zainab Umar, Alireza Jolfaei, Muhammad Tariq · Digital Communications and Networks, 2025, Issue 3, pp. 587-593 (7 pages)
In the field of precision healthcare, where accurate decision-making is paramount, this study underscores the indispensability of eXplainable Artificial Intelligence (XAI) in the context of epilepsy management within the Internet of Medical Things (IoMT). The methodology entails meticulous preprocessing, involving the application of a band-pass filter and epoch segmentation to optimize the quality of electroencephalograph (EEG) data. The subsequent extraction of statistical features facilitates the differentiation between seizure and non-seizure patterns. The classification phase integrates Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest classifiers. Notably, SVM attains an accuracy of 97.26%, excelling in precision, recall, specificity, and F1 score for identifying seizure and non-seizure instances. Conversely, KNN achieves an accuracy of 72.69%, accompanied by certain trade-offs. The Random Forest classifier stands out with a remarkable accuracy of 99.89%, coupled with exceptional precision (99.73%), recall (100%), specificity (99.80%), and F1 score (99.86%), surpassing both SVM and KNN performances. XAI techniques, namely Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), enhance the system's transparency. This combination of machine learning and XAI not only improves the reliability and accuracy of the seizure detection system but also enhances trust and interpretability. Healthcare professionals can leverage the identified important features and their dependencies to gain deeper insights into the decision-making process, aiding informed diagnosis and treatment decisions for patients with epilepsy.
Keywords: epileptic seizure; epilepsy; EEG; explainable AI; machine learning
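The preprocessing pipeline (band-pass filtering, epoch segmentation, statistical features) is a standard one and can be sketched directly. The parameters below (0.5-40 Hz band, 2-second epochs, 256 Hz sampling) are assumptions for illustration; the paper's exact settings are not given here.

```python
# Minimal EEG preprocessing sketch: band-pass filter -> fixed-length epochs -> features.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256                                   # assumed sampling rate (Hz)
eeg = np.random.randn(60 * fs)             # stand-in for one EEG channel (60 s)

# 4th-order Butterworth band-pass, applied forward-backward for zero phase shift.
b, a = butter(4, [0.5, 40], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg)

epoch_len = 2 * fs                         # 2-second epochs
epochs = filtered[: len(filtered) // epoch_len * epoch_len].reshape(-1, epoch_len)

# Simple statistical features per epoch, in the spirit of the described pipeline.
features = np.column_stack([
    epochs.mean(axis=1), epochs.std(axis=1),
    np.ptp(epochs, axis=1),                              # peak-to-peak amplitude
    (np.diff(np.sign(epochs), axis=1) != 0).sum(axis=1), # zero-crossing count
])
print(features.shape)  # (n_epochs, 4) -> input to SVM / KNN / Random Forest
```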
A transformer-based model for predicting and analyzing light olefin yields in methanol-to-olefins process
9
Authors: Yuping Luo, Wenyang Wang, Yuyan Zhang, Muxin Chen, Peng Shao · Chinese Journal of Chemical Engineering, 2025, Issue 7, pp. 266-276 (11 pages)
This study introduces an innovative computational framework leveraging the transformer architecture to address a critical challenge in chemical process engineering: predicting and optimizing light olefin yields in industrial methanol-to-olefins (MTO) processes. Our approach integrates advanced machine learning techniques with chemical engineering principles to tackle the complexities of non-stationary, highly volatile production data in large-scale chemical manufacturing. The framework employs the maximal information coefficient (MIC) algorithm to analyze and select the significant variables from MTO process parameters, forming a robust dataset for model development. We implement a transformer-based time series forecasting model, enhanced through positional encoding and hyperparameter optimization, significantly improving predictive accuracy for ethylene and propylene yields. The model's interpretability is augmented by applying SHapley Additive exPlanations (SHAP) to quantify and visualize the impact of reaction control variables on olefin yields, providing valuable insights for process optimization. Experimental results demonstrate that our model outperforms traditional statistical and machine learning methods in accuracy and interpretability, effectively handling nonlinear, non-stationary, high-volatility, and long-sequence data challenges in olefin yield prediction. This research contributes to chemical engineering by providing a novel computerized methodology for solving complex production optimization problems in the chemical industry, offering significant potential for enhancing decision-making in MTO system production control and fostering the intelligent transformation of manufacturing processes.
Keywords: methanol-to-olefins; transformer; explainable AI; mathematical modeling; model-predictive control; numerical analysis
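MIC-based variable screening, the first stage of the framework above, can be sketched as follows. This uses the `minepy` package as one common MIC implementation (the paper's own tooling is not specified), and all variable names, data, and the 0.2 threshold are illustrative assumptions.

```python
# Sketch of MIC-based screening of process variables against ethylene yield.
import numpy as np
from minepy import MINE

rng = np.random.default_rng(3)
n_samples = 1000
var_names = ["reactor_T", "feed_rate", "pressure", "catalyst_age"]  # hypothetical
X = rng.normal(size=(n_samples, len(var_names)))
ethylene_yield = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.2, n_samples)

mine = MINE(alpha=0.6, c=15)               # default MINE settings
scores = {}
for j, name in enumerate(var_names):
    mine.compute_score(X[:, j], ethylene_yield)
    scores[name] = mine.mic()              # MIC captures nonlinear dependence too

# Keep variables above an (assumed) MIC threshold for the transformer forecaster.
selected = [n for n, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0.2]
print(scores, "->", selected)
```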
Research Trends and Networks in Self-Explaining Autonomous Systems: A Bibliometric Study
10
Authors: Oscar Peña-Cáceres, Elvis Garay-Silupu, Darwin Aguilar-Chuquizuta, Henry Silva-Marchan · Computers, Materials & Continua, 2025, Issue 8, pp. 2151-2188 (38 pages)
Self-Explaining Autonomous Systems (SEAS) have emerged as a strategic frontier within Artificial Intelligence (AI), responding to growing demands for transparency and interpretability in autonomous decision-making. This study presents a comprehensive bibliometric analysis of SEAS research published between 2020 and February 2025, drawing upon 1380 documents indexed in Scopus. The analysis applies co-citation mapping, keyword co-occurrence, and author collaboration networks using VOSviewer, MASHA, and Python to examine scientific production, intellectual structure, and global collaboration patterns. The results indicate a sustained annual growth rate of 41.38%, with an h-index of 57 and an average of 21.97 citations per document. A normalized citation rate was computed to address temporal bias, enabling balanced evaluation across publication cohorts. Thematic analysis reveals four consolidated research fronts: interpretability in machine learning, explainability in deep neural networks, transparency in generative models, and optimization strategies in autonomous control. Author co-citation analysis identifies four distinct research communities, and keyword evolution shows growing interdisciplinary links with medicine, cybersecurity, and industrial automation. At the geographical level, the United States leads in scientific output and citation impact, while countries like India and China show high productivity with varied influence. However, international collaboration remains limited at 7.39%, reflecting a fragmented research landscape. As discussed in this study, SEAS research is expanding rapidly yet remains epistemologically dispersed, with uneven integration of ethical and human-centered perspectives. This work offers a structured and data-driven perspective on SEAS development, highlights key contributors and thematic trends, and outlines critical directions for advancing responsible and transparent autonomous systems.
Keywords: self-explaining autonomous systems; explainable AI; machine learning; deep learning; artificial intelligence
An Explainable Autoencoder-Based Feature Extraction Combined with CNN-LSTM-PSO Model for Improved Predictive Maintenance
11
Author: Ishaani Priyadarshini · Computers, Materials & Continua, 2025, Issue 4, pp. 635-659 (25 pages)
Predictive maintenance plays a crucial role in preventing equipment failures and minimizing operational downtime in modern industries. However, traditional predictive maintenance methods often face challenges in adapting to diverse industrial environments and ensuring the transparency and fairness of their predictions. This paper presents a novel predictive maintenance framework that integrates deep learning and optimization techniques while addressing key ethical considerations, such as transparency, fairness, and explainability, in artificial intelligence-driven decision-making. The framework employs an Autoencoder for feature reduction, a Convolutional Neural Network for pattern recognition, and a Long Short-Term Memory network for temporal analysis. To enhance transparency, the decision-making process of the framework is made interpretable, allowing stakeholders to understand and trust the model's predictions. Additionally, Particle Swarm Optimization is used to refine hyperparameters for optimal performance and mitigate potential biases in the model. Experiments are conducted on multiple datasets from different industrial scenarios, with performance validated using accuracy, precision, recall, F1-score, and training time metrics. The results demonstrate an impressive accuracy of up to 99.92% and 99.45% across different datasets, highlighting the framework's effectiveness in enhancing predictive maintenance strategies. Furthermore, the model's explainability ensures that decisions can be audited for fairness and accountability, aligning with ethical standards for critical systems. By addressing transparency and reducing potential biases, this framework contributes to the responsible and trustworthy deployment of artificial intelligence in industrial environments, particularly in safety-critical applications. The results underscore its potential for wide application across various industrial contexts, enhancing both performance and ethical decision-making.
Keywords: explainability; feature reduction; predictive maintenance; optimization
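The autoencoder feature-reduction stage can be illustrated with a compact PyTorch sketch. Layer sizes are assumptions, and the downstream CNN-LSTM-PSO stack is not reproduced; only the bottleneck idea is shown.

```python
# Minimal autoencoder feature-reduction sketch: the encoder's bottleneck output
# becomes the compressed feature vector fed to downstream models.
import torch
from torch import nn

class AE(nn.Module):
    def __init__(self, n_features=32, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(256, 32)                   # stand-in sensor feature vectors
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                       # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()

latent = model.encoder(x).detach()         # reduced features for the CNN-LSTM stage
print(latent.shape, float(loss))
```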
Unveiling dominant factors for gully distribution in wildfire-affected areas using explainable AI: A case study of Xiangjiao catchment, Southwest China
12
Authors: ZHOU Ruichen, HU Xiewen, XI Chuanjie, HE Kun, DENG Lin, LUO Gang · Journal of Mountain Science, 2025, Issue 8, pp. 2765-2792 (28 pages)
Wildfires significantly disrupt the physical and hydrologic conditions of the environment, leading to vegetation loss and altered surface geo-material properties. These complex dynamics promote post-fire gully erosion, yet the key conditioning factors (e.g., topography, hydrology) remain insufficiently understood. This study proposes a novel artificial intelligence (AI) framework that integrates four machine learning (ML) models with the Shapley Additive Explanations (SHAP) method, offering a hierarchical, global-to-local perspective on the dominant factors controlling gully distribution in wildfire-affected areas. In a case study of the Xiangjiao catchment, burned on March 28, 2020, in Muli County, Sichuan Province, Southwest China, we derived 21 geo-environmental factors to assess the susceptibility of post-fire gully erosion using logistic regression (LR), support vector machine (SVM), random forest (RF), and convolutional neural network (CNN) models. SHAP-based model interpretation revealed eight key conditioning factors: topographic position index (TPI), topographic wetness index (TWI), distance to stream, mean annual precipitation, differenced normalized burn ratio (dNBR), land use/cover, soil type, and distance to road. Comparative model evaluation demonstrated that reduced-variable models incorporating these dominant factors achieved accuracy comparable to that of the initial-variable models, with AUC values exceeding 0.868 across all ML algorithms. These findings provide critical insights into gully erosion behavior in wildfire-affected areas, supporting the decision-making process behind environmental management and hazard mitigation.
Keywords: gully erosion susceptibility; explainable AI; wildfire; geo-environmental factor; machine learning
Explainable AI Based Multi-Task Learning Method for Stroke Prognosis
13
Authors: Nan Ding, Xingyu Zeng, Jianping Wu, Liutao Zhao · Computers, Materials & Continua, 2025, Issue 9, pp. 5299-5315 (17 pages)
Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients' health status have made significant progress, limitations remain in terms of prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on eXplainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that utilizes the correlation among the tasks of predicting various health status indicators, enabling the parallel prediction of multiple health indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance the performance of multi-task AUC optimization. Additionally, the model's explainability is analyzed through the stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms in the key prognostic metrics of F1 score and efficiency.
Keywords: explainable AI; stroke prognosis; multi-task learning; AUC optimization
xCViT: Improved Vision Transformer Network with Fusion of CNN and Xception for Skin Disease Recognition with Explainable AI
14
Authors: Armughan Ali, Hooria Shahbaz, Robertas Damaševicius · Computers, Materials & Continua, 2025, Issue 4, pp. 1367-1398 (32 pages)
Skin cancer is the most prevalent cancer globally, primarily due to extensive exposure to ultraviolet (UV) radiation. Early identification of skin cancer enhances the likelihood of effective treatment, as delays may lead to severe tumor advancement. This study proposes a novel hybrid deep learning strategy to address the complex issue of skin cancer diagnosis, with an architecture that integrates a Vision Transformer, a bespoke convolutional neural network (CNN), and an Xception module. The model was evaluated using two benchmark datasets, HAM10000 and Skin Cancer ISIC. On HAM10000, the model achieves a precision of 95.46%, an accuracy of 96.74%, a recall of 96.27%, a specificity of 96.00%, and an F1-score of 95.86%. On the Skin Cancer ISIC dataset, it obtains an accuracy of 93.19%, a precision of 93.25%, a recall of 92.80%, a specificity of 92.89%, and an F1-score of 93.19%. The findings demonstrate that the proposed model is robust and trustworthy for the classification of skin lesions. In addition, the utilization of Explainable AI techniques, such as Grad-CAM visualizations, helps highlight the most significant lesion areas that influence the model's decisions.
Keywords: skin lesions; vision transformer; CNN; Xception; deep learning; network fusion; explainable AI; Grad-CAM; skin cancer detection
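Grad-CAM, the explanation technique cited above, weights the last convolutional feature maps by the gradient of the predicted class and collapses them into a heatmap. The sketch below uses a generic ResNet-18 and random input, not the paper's xCViT hybrid.

```python
# Compact Grad-CAM sketch in PyTorch on a generic CNN.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}

layer = model.layer4                      # last conv block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)           # stand-in for a skin-lesion image
logits = model(x)
logits[0, logits.argmax()].backward()     # gradient of the predicted class

weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # GAP over gradient maps
cam = F.relu((weights * feats["a"]).sum(dim=1))       # weighted sum of feature maps
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # (224, 224) heatmap highlighting influential regions
```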
Differential Privacy Integrated Federated Learning for Power Systems: An Explainability-Driven Approach
15
Authors: Zekun Liu, Junwei Ma, Xin Gong, Xiu Liu, Bingbing Liu, Long An · Computers, Materials & Continua, 2025, Issue 10, pp. 983-999 (17 pages)
With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to efficiently utilize this data while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) We propose an explainability-driven power data privacy federated learning framework. (2) We detail a privacy budget allocation strategy: assigning budgets per training round by gradient effectiveness and at model granularity by layer importance. (3) We design a weighted aggregation strategy that considers the SHAP value and model accuracy for quality knowledge sharing. (4) Experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
Keywords: power data; federated learning; differential privacy; explainability
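A toy sketch of the two mechanisms combined above follows: per-client DP noising of model updates (clip, then add Gaussian noise) and aggregation weighted by an explainability-derived quality score rather than plain averaging. All constants and the quality scores are assumed stand-ins, not the paper's SHAP-based scheme.

```python
# Toy DP-noised federated aggregation with quality-weighted averaging.
import numpy as np

rng = np.random.default_rng(4)
clip_norm, sigma = 1.0, 0.5               # assumed DP clipping bound and noise scale

def privatize(update: np.ndarray) -> np.ndarray:
    """Clip the update to clip_norm, then add Gaussian noise (Gaussian mechanism)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0, sigma * clip_norm, update.shape)

client_updates = [rng.normal(0, 1, 10) for _ in range(5)]
# Stand-in for per-client explainability/accuracy quality scores.
quality = np.array([0.9, 0.7, 0.8, 0.4, 0.6])

noisy = np.stack([privatize(u) for u in client_updates])
weights = quality / quality.sum()
global_update = (weights[:, None] * noisy).sum(axis=0)   # weighted aggregation
print(global_update.round(3))
```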
An explainable feature selection framework for web phishing detection with machine learning
16
Author: Sakib Shahriar Shafin · Data Science and Management, 2025, Issue 2, pp. 127-136 (10 pages)
In the evolving landscape of cyber threats, phishing attacks pose significant challenges, particularly through deceptive webpages designed to extract sensitive information under the guise of legitimacy. Conventional and machine learning (ML)-based detection systems struggle to detect phishing websites owing to their constantly changing tactics. Furthermore, newer phishing websites exhibit subtle and expertly concealed indicators that are not readily detectable. Hence, effective detection depends on identifying the most critical features. Traditional feature selection (FS) methods often struggle to enhance ML model performance and instead decrease it. To combat these issues, we propose an innovative method using explainable AI (XAI) to enhance FS in ML models and improve the identification of phishing websites. Specifically, we employ SHapley Additive exPlanations (SHAP) for a global perspective and aggregated Local Interpretable Model-Agnostic Explanations (LIME) to determine specific localized patterns. The proposed SHAP and LIME-aggregated FS (SLA-FS) framework pinpoints the most informative features, enabling more precise, swift, and adaptable phishing detection. Applying this approach to an up-to-date web phishing dataset, we evaluate the performance of three ML models before and after FS to assess their effectiveness. Our findings reveal that random forest (RF), with an accuracy of 97.41%, and XGBoost (XGB), at 97.21%, significantly benefit from the SLA-FS framework, while k-nearest neighbors lags. Our framework increases the accuracy of RF and XGB by 0.65% and 0.41%, respectively, outperforming traditional filter or wrapper methods and any prior methods evaluated on this dataset, showcasing its potential.
Keywords: webpage phishing; explainable AI; feature selection; machine learning
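A minimal sketch of explainability-driven feature selection in the spirit of SLA-FS follows (global SHAP half only; the aggregated-LIME term would be combined analogously). Data are synthetic, and the top-8 cutoff is an assumption.

```python
# Score features by mean |SHAP|, keep the top-k, retrain, and compare accuracy.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=30, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
sv = shap.TreeExplainer(rf).shap_values(X_tr)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]

top_k = np.argsort(-np.abs(sv).mean(axis=0))[:8]          # most informative features
rf_fs = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top_k], y_tr)

print("all features :", rf.score(X_te, y_te))
print("SHAP top-8   :", rf_fs.score(X_te, y_te))
```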
Leveraging Neural Networks and Explainable AI for Cost-Effective Retaining Wall Design
17
Authors: Gebrail Bekdas, Yaren Aydin, Celal Cakiroglu, Umit Isikdag · Computer Modeling in Engineering & Sciences, 2025, Issue 5, pp. 1763-1787 (25 pages)
Retaining walls are utilized to support the earth and prevent the soil from spreading with natural slope angles where there are differences in the elevation of ground surfaces. As the need for retaining structures grows, the use of retaining walls is increasing. Retaining walls, which increase the stability of levels, are economical and can meet existing adverse conditions. A considerable number of retaining walls are made from steel-reinforced concrete, and their construction can be costly due to the components involved. For this reason, the optimum cost should be targeted in the design of retaining walls. This study presents an artificial neural network (ANN) model developed to predict the optimum dimensions of a retaining wall using soil properties, material properties, and external loading conditions. The dataset utilized to train the ANN model is generated with the Flower Pollination Algorithm. The target variables in the dataset are the length of the heel (y1), length of the toe (y2), thickness of the stem at the top (y3), thickness of the stem at the bottom (y4), foundation base thickness (y5), and cost (y6), and these are estimated by an ANN model based on the height of the wall (x1), material unit weight (x2), wall friction angle (x3), surcharge load (x4), concrete cost per m³ (x5), steel cost per ton (x6), and the soil class (x7). The model is formulated and trained as a multi-output regression model, as all outputs are numeric and continuous. The training and evaluation of the model result in a high prediction performance (R² = 0.99). In addition, the impacts of different input features on the model's predictions are revealed using the SHapley Additive exPlanations (SHAP) algorithm. The study demonstrates that when trained with a large dataset, ANN models perform very well, predicting the optimal cost with high performance.
Keywords: retaining wall; neural networks; optimum design; explainable machine learning
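The multi-output regression setup (seven inputs x1..x7 mapped jointly to six targets y1..y6) can be sketched with scikit-learn's MLP, which supports multiple continuous outputs natively. Data here are synthetic stand-ins; the paper's Flower-Pollination-generated dataset is not reproduced.

```python
# Hedged sketch of a multi-output ANN regressor for the 7-input / 6-target mapping.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.uniform(size=(2000, 7))            # x1 wall height ... x7 soil class (encoded)
W = rng.uniform(0.2, 1.0, size=(7, 6))
Y = X @ W + 0.01 * rng.normal(size=(2000, 6))   # y1..y5 dimensions, y6 cost (synthetic)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(X[:1600], Y[:1600])
print("R^2 on held-out designs:", model.score(X[1600:], Y[1600:]))
```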
Utilizing Machine Learning and SHAP Values for Improved and Transparent Energy Usage Predictions
18
Authors: Faisal Ghazi Beshaw, Thamir Hassan Atyia, Mohd Fadzli Mohd Salleh, Mohamad Khairi Ishak, Abdul Sattar Din · Computers, Materials & Continua, 2025, Issue 5, pp. 3553-3583 (31 pages)
The significance of precise energy usage forecasts has been highlighted by the increasing need for sustainability and energy efficiency across a range of industries. In order to improve the precision and openness of energy consumption projections, this study investigates the combination of machine learning (ML) methods with Shapley additive explanations (SHAP) values. The study evaluates three distinct models: the first is a Linear Regressor, the second is a Support Vector Regressor, and the third is a Decision Tree Regressor, which was extended to a Random Forest Regressor. These models were deployed with shareable, plot-interpretable explainable artificial intelligence techniques to improve trust in the AI. The findings suggest that our developed models are superior to the conventional models discussed in prior studies, with near-ideal Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) values. In detail, the Random Forest Regressor achieves an MAE of 0.001, whereas the SVR gives an MAE of 0.21 and an RMSE of 0.24. Such outcomes reflect the possibility of optimizing advanced AI models with Explainable AI for more accurate prediction of energy consumption and, at the same time, for explaining the models' decision-making procedures. In addition to increasing prediction accuracy, this strategy gives stakeholders comprehensible insights, which facilitates improved decision-making and fosters confidence in AI-powered energy solutions. The outcomes show how well ML and SHAP work together to enhance prediction performance and guarantee transparency in energy usage projections.
Keywords: renewable energy consumption; machine learning; explainable AI; random forest; support vector machine; decision trees; forecasting; energy modeling
Skillful bias correction of offshore near-surface wind field forecasting based on a multi-task machine learning model
19
Authors: Qiyang Liu, Anboyu Guo, Fengxue Qiao, Xinjian Ma, Yan-An Liu, Yong Huang, Rui Wang, Chunyan Sheng · Atmospheric and Oceanic Science Letters, 2025, Issue 5, pp. 28-35 (8 pages)
Accurate short-term forecasting of offshore wind fields is still challenging for numerical weather prediction models. Based on three years of 48-hour forecast data from the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System global model (ECMWF-IFS) over 14 offshore weather stations along the coast of Shandong Province, this study introduces a multi-task learning (MTL) model (TabNet-MTL), which significantly improves the forecast bias of near-surface wind direction and speed simultaneously. TabNet-MTL adopts a feature engineering method, utilizes mean square error as the loss function, and employs 5-fold cross-validation to ensure the generalization ability of the trained model. It demonstrates superior skill in wind field correction across different forecast lead times over all stations compared to its single-task version (TabNet-STL) and three other popular single-task learning models (Random Forest, LightGBM, and XGBoost). Results show that it significantly reduces the root mean square error of the ECMWF-IFS wind speed forecast from 2.20 to 1.25 m s⁻¹ and increases the forecast accuracy of wind direction from 50% to 65%. As an explainable deep learning model, TabNet-MTL identifies the weather stations and long-term temporal statistics of near-surface wind speed as the most influential variables in constructing its feature engineering.
Keywords: forecast bias correction; wind field; multi-task learning; feature engineering; explainable AI
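The bias-correction evaluation loop can be sketched with a 5-fold cross-validation scheme: a model predicts the observed wind speed from the raw NWP forecast plus auxiliary features, and RMSE is compared before and after correction. A gradient-boosting regressor stands in for TabNet-MTL here, and the data are synthetic.

```python
# 5-fold CV bias correction of a synthetic NWP wind-speed forecast.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n = 3000
nwp_speed = rng.gamma(4.0, 2.0, n)                  # ECMWF-IFS style raw forecast
aux = rng.normal(size=(n, 3))                       # e.g. station, lead time, season
obs = 0.8 * nwp_speed + aux[:, 0] + rng.normal(0, 1.0, n)
X = np.column_stack([nwp_speed, aux])

corrected = np.empty(n)
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = GradientBoostingRegressor(random_state=0).fit(X[tr], obs[tr])
    corrected[te] = model.predict(X[te])            # out-of-fold predictions

rmse = lambda a, b: float(np.sqrt(mean_squared_error(a, b)))
print(f"raw RMSE {rmse(obs, nwp_speed):.2f} -> corrected {rmse(obs, corrected):.2f}")
```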
Advanced Feature Selection Techniques in Medical Imaging: A Systematic Literature Review
20
Authors: Sunawar Khan, Tehseen Mazhar, Naila Sammar Naz, Fahed Ahmed, Tariq Shahzad, Atif Ali, Muhammad Adnan Khan, Habib Hamam · Computers, Materials & Continua, 2025, Issue 11, pp. 2347-2401 (55 pages)
Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges in FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, providing the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, deep learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
Keywords: feature selection; medical imaging; deep learning; hybrid approaches; multi-modal imaging; quantum computing; explainable AI; computational efficiency; dimensionality reduction