Funding: The work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Abstract: Model checking is an automated formal verification method for checking whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, verification of quantitative and uncertain properties for these systems remains underexplored. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic with epistemic modalities, proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we reduce the FCTLK model checking problem to FCTL model checking. This enables the verification of FCTLK formulas with the fuzzy model checking algorithm for FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we illustrate the practical application of our approach through an example of a train control system.
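To make the fuzzy structures concrete, the sketch below evaluates the EX operator over a small fuzzy Kripke structure under max-min semantics, a common choice in fuzzy model checking; the states, transition degrees, and proposition values are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of one step of fuzzy model checking on a fuzzy Kripke
# structure, using max-min semantics for the EX operator. All values are
# illustrative assumptions, not the paper's example.

# Fuzzy transition relation R(s, s') with membership degrees in [0, 1].
R = {
    ("s0", "s1"): 0.8,
    ("s0", "s2"): 0.4,
    ("s1", "s2"): 0.9,
    ("s2", "s0"): 0.6,
}
states = {"s0", "s1", "s2"}

# Fuzzy valuation of an atomic proposition p at each state.
p = {"s0": 0.2, "s1": 0.7, "s2": 1.0}

def ex(phi):
    """[[EX phi]](s) = max over successors s' of min(R(s, s'), phi(s'))."""
    return {
        s: max((min(R.get((s, t), 0.0), phi[t]) for t in states), default=0.0)
        for s in states
    }

print(ex(p))  # e.g. s0 -> max(min(0.8, 0.7), min(0.4, 1.0)) = 0.7
```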
Funding: Supported by the National Natural Science Foundation of China (No. 52161135202).
Abstract: Black-box models have demonstrated remarkable accuracy in forecasting building energy loads. However, they usually lack interpretability and do not incorporate domain knowledge, making it difficult for users to trust their predictions in practical applications. One important and interesting question remains unanswered: is it possible to use intrinsically interpretable models to achieve accuracy comparable to that of black-box models? To answer this question, this study proposes an intrinsically interpretable machine learning-based method to forecast building energy loads. It combines two intrinsically interpretable machine learning algorithms: clustering decision trees and adaptive multiple linear regression. Clustering decision trees automatically identify distinct building operation conditions, allowing multiple models to be trained, one per condition; this reduces the complexity of the training data for each model and leads to higher accuracy. Adaptive multiple linear regression is an improved regression algorithm tailored to building energy load prediction; it adaptively modifies regression coefficients according to building operations, enhancing the non-linear fitting capability of multiple linear regression. The proposed method is evaluated on operational data from an office building. The results indicate that it matches the accuracy of random forests and extreme gradient boosting, and that it is significantly more accurate, by 10.2% on average, than popular black-box algorithms such as artificial neural networks, support vector regression, and classification and regression trees. As for interpretability, the method reveals that historical cooling loads are the most important predictors of building cooling loads under most conditions, and that outdoor air temperature contributes significantly to cooling load prediction during the daytime on weekdays in summer and transition seasons. In the future, it will be valuable to explore integrating the laws of physics into the proposed method to further enhance its interpretability.
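As a rough illustration of the partition-then-regress idea, the sketch below uses a plain decision tree to split operating conditions and fits one ordinary linear regression per leaf; this stands in for the paper's clustering decision trees and adaptive regression, and the data and regime rule are synthetic.

```python
# A minimal sketch, assuming a shallow tree as the condition identifier
# and per-leaf linear models; not the paper's exact algorithms.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 35, size=(2000, 2))   # e.g. outdoor temperature, hour proxy
y = np.where(X[:, 0] > 20, 5.0 * X[:, 0], 1.5 * X[:, 0]) + rng.normal(0, 1, 2000)

# Shallow tree partitions the data into operating conditions (regimes).
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=100).fit(X, y)
leaf = tree.apply(X)                     # leaf id per sample

# One interpretable linear model per regime.
models = {lid: LinearRegression().fit(X[leaf == lid], y[leaf == lid])
          for lid in np.unique(leaf)}

def predict(X_new):
    lids = tree.apply(X_new)
    return np.array([models[l].predict(x.reshape(1, -1))[0]
                     for l, x in zip(lids, X_new)])

print(predict(np.array([[25.0, 12.0], [10.0, 3.0]])))
```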
Funding: Supported by the National Social Science Fund of China (20BXW101).
Abstract: Detecting fake news in multimodal and multilingual social media environments is challenging due to inherent noise, inter-modal imbalance, computational bottlenecks, and semantic ambiguity. To address these issues, we propose SparseMoE-MFN, a unified framework that integrates sparse attention with a sparse-activated Mixture-of-Experts (MoE) architecture, aiming to improve the efficiency, inferential depth, and interpretability of multimodal fake news detection. SparseMoE-MFN leverages LLaVA-v1.6-Mistral-7B-HF for efficient visual encoding and Qwen/Qwen2-7B for text processing. The sparse attention module adaptively filters irrelevant tokens and focuses on key regions, reducing computational cost and noise. The sparse MoE module dynamically routes inputs to specialized experts (visual, language, cross-modal alignment) based on content heterogeneity. This expert specialization boosts computational efficiency and semantic adaptability, enabling precise processing of complex content and improving performance on ambiguous categories. Evaluated on the large-scale, multilingual MR2 dataset, SparseMoE-MFN achieves state-of-the-art performance: an accuracy of 86.7% and a macro-averaged F1 score of 0.859, outperforming strong baselines such as MiniGPT-4 by 3.4% and 3.2%, respectively. Notably, it shows significant advantages in the "unverified" category. SparseMoE-MFN is also computationally efficient, with an average inference latency of 89.1 ms and 95.4 GFLOPs, substantially lower than existing models. Ablation studies and visualization analyses confirm that both the sparse attention and sparse MoE components improve accuracy, generalization, and efficiency.
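The sparse-activated routing such a framework relies on can be illustrated with a generic top-k MoE layer; the dimensions, expert design, and top_k below are illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch of sparse top-k MoE routing: only the k highest-scoring
# experts run per input, and their outputs are mixed by renormalized gates.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim=64, num_experts=4, top_k=2):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x):                      # x: (batch, dim)
        logits = self.gate(x)                  # (batch, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):            # sparse activation: skip the rest
            for e in range(len(self.experts)):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```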
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11934011, 12074339, 62075194, U21A6006, 62202418, and U21B2004); the National Key Research and Development Program of China (Grant Nos. 2019YFA0308100, 2023YFB2806000, and 2022YFA1204700); the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB28000000); the Leading Innovation and Entrepreneurship Team in Zhejiang Province (Grant No. 2020R01001); the Open Program of the State Key Laboratory of Advanced Optical Communication Systems and Networks at Shanghai Jiao Tong University (Grant No. 2023GZKF024); the Fundamental Research Funds for the Central Universities; the Information Technology Center and State Key Lab of CAD&CG at Zhejiang University; the Zhejiang Provincial Key Laboratory of Information Processing, Communication and Networking (IPCAN); and the National Institutes of Health (NIH) (Grant Nos. R01GM127696, R01GM152633, R21GM142107, and R21CA269099).
Abstract: Scattering obscures information carried by waves by producing speckle patterns, posing a fundamental challenge across diverse fields, from microscopy to astronomy. Although machine learning has recently shown promise in speckle analysis, existing approaches are hindered by their dependence on large, labeled datasets, a significant bottleneck in many real-world applications. Here, we introduce speckle unsupervised recognition and evaluation (SURE), an unsupervised learning strategy for speckle recognition that eliminates the need for labeled training data. SURE's distinctive feature lies in its ability to extract invariant features through advanced clustering algorithms, enabling direct classification of high-level information from speckle patterns without prior knowledge. We demonstrate the potential of this approach in two key applications: (1) a noninvasive glucose monitoring system that accurately tracks glucose concentrations over time without extensive calibration, and (2) a high-throughput communication system using multimode fibers that achieves improved performance in dynamic environments. In addition, we showcase SURE's capability to classify objects hidden behind obstacles using scattered light, further broadening its scope. This versatile approach opens new frontiers in biomedical diagnostics, quantum network decoupling, and remote sensing, offering a new paradigm for extracting information from seemingly random optical patterns.
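A rough sketch of the label-free pipeline SURE exemplifies: embed speckle images into a low-dimensional feature space and cluster them without supervision. PCA and k-means below stand in for the paper's advanced clustering, and the data are entirely synthetic.

```python
# A minimal sketch, assuming PCA features and k-means as stand-ins for
# SURE's invariant-feature extraction and clustering; data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Fake "speckle patterns": 300 images of 32x32 from 3 hidden classes.
base = rng.random((3, 32 * 32))
X = np.vstack([base[k] + 0.1 * rng.standard_normal((100, 32 * 32))
               for k in range(3)])

feats = PCA(n_components=10).fit_transform(X)   # compact, noise-robust features
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
print(np.bincount(labels))  # cluster sizes, ideally ~[100, 100, 100]
```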
Abstract: Summary: Pain is not simply pain, because people interpret symptoms differently. Neck pain is one of the most common pains and should not be missing from a study on the effects of pain. Depression does not arise solely from pain but is multicausal, and it is often driven by cumulative effects.
Abstract: We thank Power et al.1 for their interest in our review2 and for contributing to this important scientific discussion. We welcome their commentary and acknowledge the merit of continuing to scrutinize and refine interpretations in this evolving field. Given that much research time and financial investment are being devoted to the study of the effects of eccentric training in both athletic and clinical contexts, it is incumbent on our field to demonstrate whether eccentric contractions are a key (or the key) stimulus for sarcomerogenesis (increases in serial sarcomere number (SSN)).
Funding: Supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant No. KFU253765).
Abstract: Most predictive maintenance studies have emphasized accuracy while giving little attention to interpretability or deployment readiness. This study improves on prior methods by developing a small yet robust system that predicts when turbofan engines will fail. It uses the NASA CMAPSS dataset, which contains over 200,000 engine cycles from 260 engines. The process begins with systematic preprocessing, including imputation, outlier removal, scaling, and labelling of the remaining useful life. Dimensionality is reduced using a hybrid selection method that combines variance filtering, recursive elimination, and gradient-boosted importance scores, yielding a stable set of 10 informative sensors. To mitigate class imbalance, minority cases are oversampled and class-weighted losses are applied during training. Benchmarking is carried out with logistic regression, gradient boosting, and a recurrent design that integrates gated recurrent units with long short-term memory networks. The Long Short-Term Memory–Gated Recurrent Unit (LSTM–GRU) hybrid achieved the strongest performance, with an F1 score of 0.92, precision of 0.93, recall of 0.91, Receiver Operating Characteristic–Area Under the Curve (ROC-AUC) of 0.97, and minority recall of 0.75. Interpretability testing using permutation importance and Shapley values indicates that sensors 13, 15, and 11 are the most important indicators of engine wear. The proposed system combines imbalance handling, feature reduction, and interpretability in a practical design suitable for real industrial settings.
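A minimal sketch of the recurrent hybrid and the class-weighted loss described above; layer sizes, sequence length, and the weight vector are illustrative assumptions rather than the study's tuned configuration.

```python
# A minimal sketch: a GRU feeding an LSTM, with a class-weighted loss
# to up-weight the rare "near failure" class; toy sizes throughout.
import torch
import torch.nn as nn

class GRULSTMHybrid(nn.Module):
    def __init__(self, n_sensors=10, hidden=64, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_sensors, hidden, batch_first=True)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_sensors)
        h, _ = self.gru(x)
        h, _ = self.lstm(h)
        return self.head(h[:, -1])     # logits from the last time step

model = GRULSTMHybrid()
# Class-weighted loss: the minority class counts 5x in the objective.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 5.0]))
x, y = torch.randn(16, 30, 10), torch.randint(0, 2, (16,))
loss = loss_fn(model(x), y)
loss.backward()
print(float(loss))
```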
Funding: Funded by the Zhejiang Provincial Key Science and Technology "LingYan" Project Foundation (Grant No. 2023C01145) and the Zhejiang Gongshang University Higher Education Research Projects (Grant No. Xgy22028).
Abstract: With the deep integration of smart manufacturing and IoT technologies, higher demands are placed on the intelligence and real-time performance of industrial equipment fault detection. For industrial fans, base bolt loosening faults are difficult to identify through conventional spectrum analysis, and the extreme scarcity of fault data limits training datasets, making traditional deep learning methods inaccurate at fault identification and incapable of detecting loosening severity. This paper employs Bayesian learning, training on a small fault dataset collected from the actual operation of axial-flow fans in a factory to obtain posterior distributions. The method includes specific data processing approaches and a Bayesian Convolutional Neural Network (BCNN) configuration that effectively improve the model's generalization ability. Experimental results demonstrate high detection accuracy and alignment with real-world applications, offering practical significance and reference value for industrial fan bolt loosening detection under data-limited conditions.
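One lightweight way to obtain Bayesian-style predictive distributions from a CNN is Monte Carlo dropout, shown below purely as an approximation of posterior sampling; the paper's actual BCNN configuration is not reproduced here, and the architecture is a toy.

```python
# A minimal sketch: MC dropout as an approximate Bayesian CNN. Keeping
# dropout active at inference and averaging stochastic passes yields a
# predictive mean and an uncertainty estimate per class.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=16), nn.ReLU(), nn.Dropout(0.2),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 3))  # 3 loosening levels

x = torch.randn(4, 1, 1024)   # vibration snippets (toy data)
net.train()                   # keep dropout active at inference time
with torch.no_grad():
    samples = torch.stack([net(x).softmax(-1) for _ in range(50)])
mean, std = samples.mean(0), samples.std(0)
print(mean[0], std[0])        # predictive mean and uncertainty per class
```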
Abstract: Mortality prediction in respiratory health is challenging, especially when using large-scale clinical datasets composed primarily of categorical variables. Traditional digital twin (DT) frameworks often rely on longitudinal or sensor-based data, which are not always available in public health contexts. In this article, we propose a novel proto-DT framework for mortality prediction in respiratory health using a large-scale categorical biomedical dataset containing 415,711 severe acute respiratory infection cases from the Brazilian Unified Health System, including both COVID-19 and non-COVID-19 patients. Four classification models, extreme gradient boosting (XGBoost), logistic regression, random forest, and a deep neural network (DNN), are trained using cost-sensitive learning to address class imbalance. The models are evaluated using accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC-ROC). The framework supports simulated interventions by modifying selected inputs and recalculating predicted mortality. Additionally, we incorporate multiple correspondence analysis and K-means clustering to explore model sensitivity, and a Python library has been developed to ensure reproducibility. All models achieve AUC-ROC values near or above 0.85; XGBoost yields the highest accuracy (0.84), while the DNN achieves the highest recall (0.81). Scenario-based simulations reveal how key clinical factors, such as intensive care unit admission and oxygen support, affect predicted outcomes. The proposed proto-DT framework demonstrates the feasibility of mortality prediction and intervention simulation using categorical data alone, providing a foundation for data-driven explainable DTs in public health even in the absence of time-series data.
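A minimal sketch of the cost-sensitive XGBoost component on binary categorical inputs with AUC-ROC evaluation; the features, labels, and weighting rule are synthetic stand-ins for the SARI dataset.

```python
# A minimal sketch: class imbalance handled via scale_pos_weight, with
# AUC-ROC as the evaluation metric; all data is synthetic.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Binary categorical features (e.g. ICU admission, oxygen support flags).
X = rng.integers(0, 2, size=(5000, 12)).astype(float)
y = (X[:, 0] + X[:, 1] + rng.random(5000) > 2.2).astype(int)  # minority class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
pos_weight = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)  # cost-sensitive ratio
clf = XGBClassifier(n_estimators=200, max_depth=4,
                    scale_pos_weight=pos_weight, eval_metric="auc")
clf.fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```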
Funding: Funded by the Guangdong Basic and Applied Basic Research Foundation (2023B1515120064) and the National Natural Science Foundation of China (62273097).
Abstract: Deep learning has become integral to robotics, particularly in tasks such as robotic grasping, where objects often exhibit diverse shapes, textures, and physical properties. Because of this diversity, frequent adjustments to the network architecture and parameters are required to avoid a decrease in model accuracy, which presents a significant challenge for non-experts. Neural Architecture Search (NAS) offers a compelling alternative: it automatically generates network architectures and, through efficient search algorithms, discovers models that achieve high accuracy. Compared with manually designed networks, NAS methods can significantly reduce design cost and time while improving model performance. However, such methods often produce complex topological connections, and these redundant structures can severely reduce computational efficiency. To overcome this challenge, this work puts forward a robotic grasp detection framework founded on NAS. The method automatically designs a lightweight network with high accuracy and low topological complexity, effectively adapting to the target object to generate the optimal grasp pose and thereby significantly improving the success rate of robotic grasping. Additionally, we use Class Activation Mapping (CAM) as an interpretability tool, capturing sensitive information during the perception process through visualized results. The searched model achieved competitive, and in some cases superior, performance on the Cornell and Jacquard public datasets, with accuracies of 98.3% and 96.8%, respectively, while sustaining a detection speed of 89 frames per second with only 0.41 million parameters. To further validate its effectiveness beyond benchmark evaluations, we conducted real-world grasping experiments on a UR5 robotic arm, where the model demonstrated reliable performance across diverse objects and high grasp success rates, confirming its practical applicability in robotic manipulation tasks.
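The CAM computation used for interpretability reduces to weighting the final convolutional feature maps by the classifier weights of the predicted class; the toy network below illustrates the mechanics, not the searched architecture.

```python
# A minimal sketch of Class Activation Mapping (CAM) with a toy CNN and
# a global-average-pooling classifier head.
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
fc = nn.Linear(64, 5)                       # classifier after GAP

x = torch.randn(1, 3, 224, 224)
feats = conv(x)                             # (1, 64, 224, 224)
logits = fc(feats.mean(dim=(2, 3)))         # global average pool -> class scores
cls = logits.argmax(1).item()

# CAM: sum_k w_k(class) * A_k, then rectify and normalize for display.
cam = torch.einsum("c,bchw->bhw", fc.weight[cls], feats)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                            # torch.Size([1, 224, 224])
```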
Funding: Supported by the Meteorological Joint Funds of the National Natural Science Foundation of China (Grant No. U2142211); the National Natural Science Foundation of China (Grant Nos. 42075141, 42341202, and 62088101); the National Key Research and Development Program of China (Grant No. 2020YFA0608000); and the Shanghai Municipal Science and Technology Major Project (Grant No. 2021SHZDZX0100).
Abstract: Accurate forecasting of tropical cyclone (TC) tracks and intensities is essential. Although the TianXing large weather model, a six-hourly forecasting model that surpasses operational forecasts, exhibits superior performance, its TC forecasts still require enhancement: prediction errors persist due to biases in the training data and smoothing effects in data-driven methods. To address this, we introduce CycloneBCNet, a deep-learning model designed to correct TianXing's TC forecast biases by leveraging spatial and temporal data. CycloneBCNet utilizes the SimVP (simpler yet better video prediction) framework with spatial attention to highlight cyclone core regions in forecast fields. It also incorporates TC trend information (center position, maximum wind speed, and minimum sea level pressure) via an LSTM (long short-term memory) module; these TC vectors are derived from post-processed TianXing forecasts. By fusing features from forecast fields and TC vectors, CycloneBCNet corrects biases across multiple lead times. At a 96-h lead time, the track error is reduced from 162.4 to 86.4 km, the wind speed error from 17.2 to 6.69 m s^(-1), and the pressure error from 22.2 to 9.36 hPa. Interpretability analysis shows that CycloneBCNet adjusts its attention across forecast lead times: intensity corrections prioritize inner-core dynamics, particularly the eye and eyewall, while track corrections shift from lower-level variables and the cyclone's core to broader environmental factors and mid- to upper-level features as the forecast duration increases. These findings demonstrate that CycloneBCNet effectively captures key TC dynamics consistent with meteorological principles, including the dominance of near-surface conditions for intensity and the increasing influence of steering currents on track prediction.
Abstract: The integration of machine learning (ML) into geohazard assessment has instigated a paradigm shift, producing models with a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field. The analysis reveals that current research efforts are concentrated predominantly on landslide and flood assessment. Methodologically, tree-based ensembles and deep learning models dominate the literature, with SHapley Additive exPlanations (SHAP) frequently adopted as the principal post-hoc explanation technique. More importantly, the review documents how the role of XAI has shifted: rather than being used solely to interpret models after training, it is increasingly integrated into the modeling cycle itself, with recent applications in feature selection, adaptive sampling strategies, and model evaluation. The evidence also shows that GeoXAI extends beyond producing feature rankings: it reveals nonlinear thresholds and interaction effects that yield deeper mechanistic insight into hazard processes. Nevertheless, several key challenges remain unresolved, particularly the need for interpretation stability, the difficulty of reliably distinguishing correlation from causation, and the development of appropriate methods for handling complex spatio-temporal dynamics.
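The dominant recipe the review identifies, a tree ensemble explained post hoc with SHAP, looks roughly like the sketch below; the conditioning factors and hazard rule are invented for illustration.

```python
# A minimal sketch: random forest + TreeExplainer, then global importance
# as mean |SHAP| per feature. Data and feature names are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
names = ["slope", "rainfall", "lithology", "distance_to_fault"]
X = rng.random((1000, 4))
y = ((X[:, 0] > 0.6) & (X[:, 1] > 0.5)).astype(int)  # toy hazard rule

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Per-class output layout varies across shap versions; take the positive class.
sv = sv[1] if isinstance(sv, list) else sv[..., 1]
imp = np.abs(sv).mean(axis=0)                        # global importance
print(dict(zip(names, np.round(imp, 3))))
```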
Funding: Supported by Vetenskapsrådet 2022-00799 and the Ulla and Ingemar Dahlberg Foundation (to PAW).
Abstract: The majority of our daily activities and routines are highly dependent on vision. What we experience as our vision arises from the detection and encoding of visual signals in the retina, which are then interpreted in the brain. This interpretation has the benefit of providing a level of constancy to what we experience as vision, but it also limits our ability to perceive subtle decline in our own vision.
Abstract: Deep learning-based methods have shown great potential in intelligent bearing fault diagnosis. However, most existing approaches suffer from the scarcity of labeled data, which often results in insufficient robustness under complex working conditions and a general lack of interpretability. To address these challenges, we propose a physics-informed multimodal fault diagnosis framework based on few-shot learning, which integrates a 2D time-frequency image encoder and a 1D vibration signal encoder. Specifically, we embed prior knowledge of multi-resolution analysis from signal processing into the model by designing a Laplace Wavelet Convolution (LWC) module, which enhances interpretability since wavelet coefficients naturally correspond to specific frequency and temporal structures. To further balance the guidance of physical priors with the flexibility of learnable representations, we introduce a parametric multi-kernel wavelet that employs channel-wise dynamic attention to adaptively select relevant wavelet bases, thereby improving feature expressiveness. Moreover, we develop a Mahalanobis-Prototype Joint Metric, which constructs more accurate and distribution-consistent decision boundaries under few-shot conditions. Comprehensive experiments on the Case Western Reserve University (CWRU) and Paderborn University (PU) bearing datasets demonstrate the superior effectiveness, robustness, and interpretability of the proposed approach compared with state-of-the-art baselines.
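A rough sketch of a wavelet-parameterized first convolution layer in the spirit of the LWC module: each kernel is a damped sinusoid generated from a learnable frequency and damping ratio, so the filters remain physically interpretable. The parameterization details below are assumptions, not the paper's exact design.

```python
# A minimal sketch: Conv1d kernels built from Laplace-wavelet-like damped
# sinusoids with learnable frequency and damping parameters.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LaplaceWaveletConv(nn.Module):
    def __init__(self, out_channels=16, kernel_size=64):
        super().__init__()
        self.kernel_size = kernel_size
        self.freq = nn.Parameter(torch.linspace(0.02, 0.4, out_channels))  # cycles/sample
        self.zeta = nn.Parameter(torch.full((out_channels,), 0.05))        # damping ratio

    def forward(self, x):                       # x: (batch, 1, time)
        t = torch.arange(self.kernel_size, dtype=x.dtype, device=x.device)
        zeta = self.zeta.clamp(1e-3, 0.99).unsqueeze(1)
        omega = 2 * math.pi * self.freq.clamp(1e-3, 0.5).unsqueeze(1)
        # Damped sinusoid: exp(-(zeta/sqrt(1-zeta^2)) * omega * t) * sin(omega * t)
        decay = torch.exp(-(zeta / torch.sqrt(1 - zeta**2)) * omega * t)
        kernels = (decay * torch.sin(omega * t)).unsqueeze(1)  # (out, 1, k)
        return F.conv1d(x, kernels, padding=self.kernel_size // 2)

layer = LaplaceWaveletConv()
print(layer(torch.randn(2, 1, 1024)).shape)  # torch.Size([2, 16, 1025])
```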
Abstract: Multimodal dialogue systems often fail to maintain coherent reasoning over extended conversations and suffer from hallucination due to limited context modeling capabilities. Current approaches struggle with cross-modal alignment, temporal consistency, and robust handling of noisy or incomplete inputs across multiple modalities. We propose Multi-Agent Chain-of-Thought (CoT), a novel multi-agent chain-of-thought reasoning framework in which specialized agents for the text, vision, and speech modalities collaboratively construct shared reasoning traces through inter-agent message passing and consensus voting mechanisms. Our architecture incorporates self-reflection modules, conflict resolution protocols, and dynamic rationale alignment to enhance consistency, factual accuracy, and user engagement. The framework employs a hierarchical attention mechanism with cross-modal fusion and implements adaptive reasoning depth based on dialogue complexity. Comprehensive evaluations on Situated Interactive Multi-Modal Conversations (SIMMC) 2.0, VisDial v1.0, and newly introduced challenging scenarios demonstrate statistically significant improvements in grounding accuracy (p < 0.01), chain-of-thought interpretability, and robustness to adversarial inputs compared with state-of-the-art monolithic transformer baselines and existing multi-agent approaches.
Funding: Supported by the Capital's Funds for Health Improvement and Research (CFH2024-1-4021).
Abstract: Objective: To develop a prognostic prediction model for early-stage triple-negative breast cancer (TNBC) using H&E-stained pathological images and to investigate its underlying biological interpretability. Methods: A deep learning model was trained on 340 whole-slide images (WSIs) and externally validated on 81 TCGA cases. Image-derived features extracted through convolutional neural networks were integrated with clinicopathological variables. Model performance was assessed using ROC curve analysis, and interpretability was evaluated by correlating image features with mRNA-seq data and characteristics of the immune microenvironment. Results: The model achieved AUCs of 0.86 and 0.75 in the training and validation cohorts, respectively. Analysis using HoVer-Net indicated that lymphocyte abundance was associated with recurrence risk, and texture-related features showed significant correlations with immune cell infiltration and prognostic gene expression profiles. Conclusion: This study demonstrates that deep learning can enable accurate prognostic prediction in early-stage TNBC, with interpretable image features that reflect the tumor immune microenvironment and gene expression profiles.
Funding: Supported by the Ministry of Education U40 Program (ZYGXONJSKYCXNLZCXM-E19) and the National Natural Science Foundation of China (52574078).
Abstract: The forward model of optical fiber strain induced by fractures, together with the associated model resolution matrix, is used to demonstrate the interpretability of fracture parameters once the fracture intersects the fiber. A regularized inversion framework for fracture parameters is established to evaluate the influence of measured data quality on the accuracy of iterative regularized inversion. An interpretation approach for both fracture width and height is proposed, and synthetic forward data with measurement error, together with field examples, are employed to validate the accuracy of the simultaneous inversion of fracture width and height. The results indicate that, after the fracture contacts the fiber, the strain response is strongly sensitive only to the fracture parameters at the intersection location, whereas the interpretability of parameters at other locations remains limited. The iterative regularized inversion method effectively suppresses the impact of measurement error and exhibits high computational efficiency, showing clear advantages for inversion applications. When first-order regularization with a Neumann boundary constraint on the tip width is incorporated, the inverted fracture-width distribution becomes highly sensitive to fracture height; thus, combined with a bisection strategy, simultaneous inversion of fracture width and height can be achieved. Examination using the model resolution matrix, noisy synthetic data, and field data confirms that the iterative regularized inversion model for fracture width and height provides high interpretive accuracy and can be applied to the calculation and analysis of fracture width, fracture height, net pressure, and other parameters.
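The core computation in such an inversion, minimizing a data misfit plus a first-order (smoothness) regularization term, can be sketched in a few lines; the forward operator G, the noise level, and the regularization weight below are synthetic, not the paper's fracture-strain model.

```python
# A minimal sketch of first-order regularized linear inversion:
# solve min ||G m - d||^2 + lam * ||D m||^2 via the normal equations,
# where D is a first-difference operator enforcing smoothness.
import numpy as np

rng = np.random.default_rng(0)
n, m = 80, 40                      # observations (strains), model cells (widths)
G = np.exp(-0.1 * np.abs(np.arange(n)[:, None] - 2 * np.arange(m)[None, :]))
m_true = np.exp(-((np.arange(m) - 20) ** 2) / 40.0)   # smooth width profile
d = G @ m_true + 0.01 * rng.standard_normal(n)        # noisy measurements

D = np.diff(np.eye(m), axis=0)     # (m-1, m) first-difference matrix
lam = 1.0                          # regularization weight (illustrative)
m_est = np.linalg.solve(G.T @ G + lam * (D.T @ D), G.T @ d)
print("relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```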
Abstract: Dear Editor, This letter presents a new approach to developing interpretable and reliable soft sensors for Industry 5.0 applications. Although sophisticated machine learning methods have made remarkable strides in soft-sensor predictive accuracy, ensuring interpretability and reliable performance across varying industrial operating conditions remains a challenge [1]-[4]. This is precisely what Industry 5.0, proposed by the European Commission in 2021, advocates [5], [6]: it integrates various cutting-edge technologies, such as human-machine interaction, digital twins, cybersecurity, and artificial intelligence, to facilitate the development of better soft sensors.
Funding: Supported by the National Key R&D Program of China (No. 2021YFF0501301) and the National Natural Science Foundation of China (No. 42172231).
Abstract: 0 INTRODUCTION: Earth science is a natural science concerned with the composition, dynamics, spatiotemporal evolution, and formation mechanisms of Earth materials (Chen and Yang, 2023). Traditional Earth science research has largely been discipline-based, relying on field investigations, data collection, experimental analyses, and data interpretation to study individual components of the Earth system.
Funding: Supported by the National Key R&D Program of China (Grant No. 2023YFC3007201); the National Natural Science Foundation of China (Grant No. 42377161); and the Opening Fund of the Key Laboratory of Geological Survey and Evaluation of Ministry of Education (Grant No. GLAB 2024ZR03).
Abstract: Landslide susceptibility mapping (LSM) is an essential tool for mitigating the escalating global risk of landslides. However, challenges such as the heterogeneity of different landslide triggers, reactivation exacerbated by extensive engineering activities, and the limited interpretability of data-driven models have hindered the practical application of LSM. This work proposes a novel framework for enhancing LSM that considers different triggers for accumulation and rock landslides, leveraging interpretable machine learning and Multi-temporal Interferometric Synthetic Aperture Radar (MT-InSAR) technology. Initially, a refined field investigation was conducted to delineate the accumulation and rock areas according to landslide type, leading to the identification of relevant contributing factors. Deformation along the slope was then combined with time-series analysis to derive a landslide activity level (AL) index that recognizes the likelihood of reactivation or dormancy. The SHapley Additive exPlanation (SHAP) technique facilitated the interpretation of factors and the identification of determinants in high-susceptibility areas. The results indicate that random forest (RF) outperformed other models in both accumulation and rock areas. Key factors, including thickness and weak intercalation, were identified for accumulation and rock landslides. The introduction of AL substantially enhanced the predictive capability of the LSM, outperforming models that neglect movement trends or deformation rates by an average ratio of 81.23% in high-susceptibility zones. Field validation confirmed that 83.8% of newly identified landslides were correctly upgraded. Given its efficiency and operational simplicity, the proposed hybrid model opens new avenues for enhancing LSM in urban settlements worldwide.