Journal Articles
6 articles found
1. Integrating explainable deep learning with multi-omics for screening progressive diagnostic biomarkers of hepatocellular carcinoma covering the "inflammation-cancer" transformation
Authors: Saiyu Li, Yiwen Zhang, Lifang Guan, Yijing Dong, Mingzhe Zhang, Qian Zhang, Huarong Xu, Wei Xiao, Zhenzhong Wang, Yan Cui, Qing Li. Journal of Pharmaceutical Analysis, 2025, Issue 9, pp. 2199-2202 (4 pages)
Chronic uncontrolled inflammation is a major risk factor driving the occurrence of hepatocellular carcinoma (HCC), with over half of global cases attributed to hepatitis B virus (HBV) infection. Persistent inflammation frequently progresses to cirrhosis and, ultimately, malignancy [1]. Monitoring the key risk factors involved in the inflammatory-to-cancerous transformation in HCC is crucial for enabling timely intervention and improving patient survival rates. To address this challenge, we analyzed plasma samples collected from healthy volunteers and patients at various stages of HCC progression.
Keywords: plasma samples; chronic uncontrolled inflammation; multi-omics; explainable deep learning; hepatocellular carcinoma (HCC); key risk factors; inflammation-cancer transformation
2. Explainable deep learning identifies patterns and drivers of freshwater harmful algal blooms
Authors: Shengyue Chen, Jinliang Huang, Jiacong Huang, Peng Wang, Changyang Sun, Zhenyu Zhang, Shijie Jiang. Environmental Science and Ecotechnology, 2025, Issue 1, pp. 262-271 (10 pages)
The escalating magnitude, frequency, and duration of harmful algal blooms (HABs) pose significant challenges to freshwater ecosystems worldwide. However, the mechanisms driving HABs remain poorly understood, in part due to the strong regional specificity of algal processes and uneven data availability. These complexities make it difficult to generalize HAB dynamics and effectively predict their occurrence using traditional models. To address these challenges, we developed an explainable deep learning approach using long short-term memory (LSTM) models combined with explanation techniques that can capture complex patterns and provide explainable insights into key HAB drivers. We applied this approach to algal density modeling at 102 sites in China's lakes and reservoirs over three years. LSTMs effectively captured daily algal dynamics, achieving mean and maximum Nash-Sutcliffe efficiency coefficients of 0.48 and 0.95 during the testing phase. Moreover, water temperature emerged as the primary driver of HABs both nationally and in over 30% of localities, with stronger water temperature sensitivity observed in mid- to low-latitudes. We also identified regional similarities that allow for successful transferability in modeling algal dynamics. Specifically, using fine-tuned transfer learning, we improved prediction accuracy in over 75% of poorly gauged areas. Overall, the LSTM-based explainable deep learning approach effectively addresses key challenges in HAB modeling by tackling both regional specificity and data limitations. By accurately predicting algal dynamics and identifying critical drivers, this approach provides actionable insights into the mechanisms of HABs and ultimately aids in the implementation of effective mitigation measures for nationwide and regional freshwater ecosystems.
Keywords: Harmful algal blooms; China's lakes and reservoirs; Explainable deep learning; Sensitivity analysis; Regional transferability
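The Nash-Sutcliffe efficiency (NSE) reported above (mean 0.48, maximum 0.95) measures how much better a model predicts a time series than simply predicting the observed mean. A minimal sketch in plain Python; the function name and toy algal-density values are illustrative, not from the paper:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance-around-mean.
    NSE = 1 means a perfect fit; NSE <= 0 means the model is no
    better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_mean

# Toy daily algal-density series (illustrative values only)
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(nash_sutcliffe(obs, obs))        # perfect model -> 1.0
print(nash_sutcliffe(obs, [3.0] * 5))  # mean predictor -> 0.0
```

An NSE of 0.48 thus indicates the LSTM explains roughly half the variance around the observed mean, averaged over sites.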
3. Atherosclerotic plaque classification in carotid ultrasound images using machine learning and explainable deep learning (Cited: 1)
Authors: Soni Singh, Pankaj K. Jain, Neeraj Sharma, Mausumi Pohit, Sudipta Roy. Intelligent Medicine (EI, CSCD), 2024, Issue 2, pp. 83-95 (13 pages)
Objective: The incidence of cardiovascular diseases (CVD) is rising rapidly worldwide. Some forms of CVD, such as stroke and heart attack, are more common among patients with certain conditions. Atherosclerosis development is a major factor underlying cardiovascular events, such as heart attack and stroke, and its early detection may prevent such events. Ultrasound imaging of the carotid arteries is a useful method for diagnosis of atherosclerotic plaques; however, an automated method to classify atherosclerotic plaques for evaluation of early-stage CVD is needed. Here, we propose an automated method for classification of high-risk atherosclerotic plaque ultrasound images. Methods: Five deep learning (DL) models (VGG16, ResNet-50, GoogLeNet, XceptionNet, and SqueezeNet) were used for automated classification, and the results were compared with those of a machine learning (ML)-based technique involving extraction of 23 texture features from ultrasound images and classification with a Support Vector Machine. To enhance model interpretability, gradient-weighted class activation maps (Grad-CAMs) were generated and overlaid on the original images. Results: A series of indices, including accuracy, sensitivity, specificity, F1-score, Cohen's kappa, and area under the curve, were calculated to evaluate model performance. Grad-CAM outputs allowed visualization of the most significant ultrasound image regions. The GoogLeNet model yielded the highest accuracy (98.20%). Conclusion: ML models may also be suitable for applications requiring low computational resources, while DL models can be more completely automated than ML models.
Keywords: Explainable deep learning; Carotid artery; Classification; VGG16; ResNet-50; GoogLeNet; XceptionNet; SqueezeNet
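Grad-CAM, used above to highlight salient ultrasound regions, weights each convolutional feature map by the global average of its gradients and keeps only positive evidence via a ReLU. A minimal pure-Python sketch on toy 2x2 activation maps; the shapes and values are illustrative, not taken from any of the five networks:

```python
def grad_cam(activations, gradients):
    """activations, gradients: lists of K feature maps, each a 2D list.
    Weight each map by its global-average-pooled gradient, sum the
    weighted maps, then ReLU so only class-supporting regions remain."""
    h, w = len(activations[0]), len(activations[0][0])
    # alpha_k: global average pooling of the gradients for map k
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for wk, amap in zip(weights, activations):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wk * amap[i][j]
    return [[max(0.0, v) for v in row] for row in cam]  # ReLU

acts  = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(acts, grads))  # -> [[1.0, 0.0], [0.0, 2.0]]
```

The second map, whose gradients are negative, is suppressed entirely; in practice the resulting heat map is upsampled and overlaid on the input image, as the authors do for the ultrasound frames.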
4. Diagnosing health in composite battery electrodes with explainable deep learning and partial charging data (Cited: 1)
Authors: Haijun Ruan, Niall Kirkaldy, Gregory J. Offer, Billy Wu. Energy and AI (EI), 2024, Issue 2, pp. 256-268 (13 pages)
Lithium-ion batteries with composite anodes of graphite and silicon are increasingly being used. However, their degradation pathways are complicated due to the blended nature of the electrodes, with graphite and silicon degrading at different rates. Here, we develop a deep learning health diagnostic framework to rapidly quantify and separate the different degradation rates of graphite and silicon in composite anodes using partial charging data. The convolutional neural network (CNN), trained with synthetic data, uses experimental partial charging data to diagnose the electrode-level health of tested batteries, with errors of less than 3.1% (for loss of active material reaching ~75%). Sensitivity analysis of the capacity-voltage curve under different degradation modes is performed to provide a physically informed voltage window for diagnostics with partial charging data. Using the gradient-weighted class activation mapping approach, we provide explainable insights into how these CNNs work, highlighting the regions of the voltage curve to which they are most sensitive. Robustness is validated by introducing noise to the data, with no significant negative impact on diagnostic accuracy for noise levels below 10 mV, highlighting the potential of deep learning approaches for diagnosing lithium-ion battery performance under real-world conditions. The framework presented here can be generalized to other cell formats and chemistries, providing robust and explainable battery diagnostics for both conventional single-material electrodes and the more challenging composite electrodes.
Keywords: Lithium-ion battery; Composite electrode; Silicon; Degradation diagnostics; Explainable deep learning; Partial charging
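Diagnostics from partial charging data of this kind typically start from the capacity-voltage curve and its derivative (the incremental-capacity, dQ/dV, curve), and the 10 mV robustness test corresponds to perturbing the voltage channel. A minimal sketch of that preprocessing; the function names, toy linear curves, and noise model are illustrative assumptions, not the paper's pipeline:

```python
import random

def dq_dv(voltage, capacity):
    """Finite-difference incremental-capacity (dQ/dV) curve from a
    partial charging segment; peaks in dQ/dV shift as graphite and
    silicon degrade, which is what a diagnostic model learns from."""
    return [(capacity[i + 1] - capacity[i]) / (voltage[i + 1] - voltage[i])
            for i in range(len(voltage) - 1)]

def add_voltage_noise(voltage, sigma_v, seed=0):
    """Simulate sensor noise on the voltage channel (volts); the
    robustness test above corresponds to sigma_v < 0.01 (10 mV)."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma_v) for v in voltage]

# Toy partial charging window: 50 points over 3.00-3.49 V
v = [3.0 + 0.01 * i for i in range(50)]
q = [0.1 * i for i in range(50)]  # toy capacity in Ah
print(dq_dv(v, q)[0])             # constant slope -> ~10 Ah/V
noisy_v = add_voltage_noise(v, sigma_v=0.005)
```

On real data the dQ/dV curve is non-linear, and dividing by noisy voltage steps amplifies noise, which is one reason the authors feed the raw partial charging segment to a CNN rather than hand-crafted derivatives.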
5. AutoSHARC: Feedback Driven Explainable Intrusion Detection with SHAP-Guided Post-Hoc Retraining for QoS Sensitive IoT Networks
Authors: Muhammad Saad Farooqui, Aizaz Ahmad Khattak, Bakri Hossain Awaji, Nazik Alturki, Noha Alnazzawi, Muhammad Hanif, Muhammad Shahbaz Khan. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 4395-4439 (45 pages)
Quality of Service (QoS) assurance in programmable IoT and 5G networks is increasingly threatened by cyberattacks such as Distributed Denial of Service (DDoS), spoofing, and botnet intrusions. This paper presents AutoSHARC, a feedback-driven, explainable intrusion detection framework that integrates Boruta and LightGBM-SHAP feature selection with a lightweight CNN-Attention-GRU classifier. AutoSHARC employs a two-stage feature selection pipeline to identify the most informative features in high-dimensional IoT traffic, reducing 46 features to 30 highly informative ones, followed by post-hoc SHAP-guided retraining to refine feature importance, forming a feedback loop where only the most impactful attributes are reused to retrain the model. This iterative refinement reduces computational overhead, lowers detection latency, and improves transparency. Evaluated on the CIC IoT 2023 dataset, AutoSHARC achieves 98.98% accuracy, a 98.9% F1-score, and strong robustness, with a Matthews Correlation Coefficient of 0.98 and a Cohen's Kappa of 0.98. The final model contains only 531,272 trainable parameters in a compact 2 MB footprint, enabling real-time deployment on resource-constrained IoT nodes. By combining explainable AI with iterative feature refinement, AutoSHARC provides scalable and trustworthy intrusion detection while preserving key QoS indicators such as latency, throughput, and reliability.
Keywords: QoS preservation; intelligent programmable networks; intrusion detection; IoT security; feature selection; SHAP explainability; Boruta; LightGBM; explainable deep learning; resource-efficient AI
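The SHAP-guided feedback loop described above reduces to a simple pattern: rank features by mean absolute SHAP value, keep the top subset, and retrain on only those. A minimal sketch with hypothetical feature names and importance scores (in the actual framework these would come from the SHAP library applied to the trained classifier):

```python
def shap_guided_selection(feature_names, shap_importance, keep):
    """Rank features by mean |SHAP| value and keep the top `keep`;
    in the feedback loop, only these are reused to retrain the model."""
    ranked = sorted(zip(feature_names, shap_importance),
                    key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in ranked[:keep]]

# Hypothetical mean |SHAP| values for a few IoT traffic features
features   = ["pkt_rate", "flow_duration", "syn_ratio", "ttl_var"]
importance = [0.42, 0.05, 0.31, 0.02]
print(shap_guided_selection(features, importance, keep=2))
# -> ['pkt_rate', 'syn_ratio']
```

Repeating this select-retrain cycle until the kept set stabilizes is what makes the refinement "iterative": each pass both shrinks the input dimension (here 46 down to 30) and ties the surviving features to an explanation of the model's decisions.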
6. Development and testing of an image transformer for explainable autonomous driving systems (Cited: 4)
Authors: Jiqian Dong, Sikai Chen, Mohammad Miralinaghi, Tiantian Chen, Samuel Labi. Journal of Intelligent and Connected Vehicles (EI), 2022, Issue 3, pp. 235-249 (15 pages)
Purpose – Perception has been identified as the main cause underlying most autonomous vehicle-related accidents. As the key technology in perception, deep learning (DL)-based computer vision models are generally considered black boxes due to poor interpretability. This has exacerbated user distrust and further forestalled their widespread deployment in practice. This paper aims to develop explainable DL models for autonomous driving by jointly predicting potential driving actions with corresponding explanations. The explainable DL models can not only boost user trust in autonomy but also serve as a diagnostic approach to identify model deficiencies or limitations during the system development phase. Design/methodology/approach – This paper proposes an explainable end-to-end autonomous driving system based on "Transformer," a state-of-the-art self-attention (SA)-based model. The model maps visual features from images collected by onboard cameras to potential driving actions with corresponding explanations, and aims to achieve soft attention over the image's global features. Findings – The results demonstrate the efficacy of the proposed model, which outperforms the benchmark model by a significant margin (in terms of correct prediction of actions and explanations) with much lower computational cost on a public dataset (BDD-OIA). In the ablation studies, the proposed SA module also outperforms other attention mechanisms in feature fusion and can generate meaningful representations for downstream prediction. Originality/value – In the contexts of situational awareness and driver assistance, the proposed model can serve as a driving alarm system for both human-driven and autonomous vehicles, because it is capable of quickly understanding/characterizing the environment and identifying any infeasible driving actions. In addition, the extra explanation head of the proposed model provides an additional channel for sanity checks to guarantee that the model learns the ideal causal relationships. This provision is critical in the development of autonomous systems.
Keywords: Explainable deep learning; Computer vision; Transformer; Autonomous driving
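The "soft attention over the image's global features" mentioned above means the model takes a softmax-weighted average of region features rather than a hard selection, keeping the operation differentiable end to end. A minimal sketch of that core step; the region count, feature dimension, and values are illustrative, not the paper's architecture:

```python
import math

def soft_attention(scores, features):
    """Softmax the attention scores, then take the weighted sum of
    feature vectors: a soft (differentiable) selection over regions."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    return [sum(w < 0 for w in weights) * 0.0 +  # weights are always >= 0
            sum(w * f[d] for w, f in zip(weights, features))
            for d in range(dim)]

# Three image regions with 2-D features (illustrative values)
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(soft_attention([0.0, 0.0, 0.0], feats))    # uniform -> average of regions
print(soft_attention([100.0, 0.0, 0.0], feats))  # peaked  -> ~first region
```

With equal scores every region contributes equally; with one dominant score the output collapses toward that region's features, which is how the attention weights double as a per-region explanation of the predicted action.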