Alarm flood is one of the main problems in the alarm systems of industrial processes. Alarm root-cause analysis and alarm prioritization are effective means of alarm flood reduction. This paper proposes a systematic rationalization method for multivariate correlated alarms that realizes both root-cause analysis and alarm prioritization. An information-fusion-based interpretive structural model is constructed from data-driven partial correlation coefficient calculation, modified by process knowledge. This hierarchical multi-layer model aids in identifying abnormality propagation paths and root causes. A revised Likert scale method is adopted to determine alarm priority and reduce the blindness of alarm handling. As a case study, the Tennessee Eastman process is used to show the effectiveness and validity of the proposed approach. A comparison of alarm system performance shows that the rationalization methodology can reduce alarm flooding to some extent and improve overall performance.
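The data-driven step above rests on partial correlation, which measures the association between two process variables after removing the linear influence of all the others. A minimal sketch (assuming NumPy is available; the three-variable process data are synthetic and purely illustrative) reads the partial correlations directly off the inverse of the sample correlation matrix:

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation of each variable pair, controlling for all
    remaining variables, via the inverse (precision) of the correlation matrix."""
    corr = np.corrcoef(X, rowvar=False)       # sample correlation matrix
    prec = np.linalg.inv(corr)                # precision matrix
    d = np.sqrt(np.outer(np.diag(prec), np.diag(prec)))
    pcorr = -prec / d                         # standard sign convention
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Three correlated "process variables": z drives both x and y.
rng = np.random.default_rng(0)
z = rng.normal(size=2000)
x = z + 0.1 * rng.normal(size=2000)
y = z + 0.1 * rng.normal(size=2000)
X = np.column_stack([x, y, z])

pc = partial_correlations(X)
# Ordinary correlation of x and y is high, but their partial
# correlation given z collapses toward zero.
print(round(np.corrcoef(x, y)[0, 1], 2), round(pc[0, 1], 2))
```

Here x and y correlate strongly only because both are driven by z; the partial correlation exposes this, which is what makes the measure useful for separating direct from indirect alarm couplings.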
Through a review of the related literature, we identify six major factors influencing the financing of China's forestry enterprises: insufficient national support; regulatory and institutional environmental factors; narrow financing channels; an inappropriate existing mortgage-backed approach; the characteristics of forestry production; and the defects of forestry enterprises themselves. We then use interpretive structural modeling (ISM) from systems engineering to analyze the structure of these six factors and set up a ladder-type hierarchy. Three factors, namely the characteristics of forestry production, the shortcomings of forestry enterprises, and regulatory and institutional environmental factors, are classified as basic factors, and the other three as important factors. From the perspectives of government and enterprises, we put forward suggestions based on the basic and important factors to ease the financing difficulties of forestry enterprises.
The possible risk factors during SAP Business One implementation were studied through in-depth interviews, and the results were then adjusted by experts. Twenty categories of risk factors, comprising 49 factors in total, were found. Based on these, a questionnaire was used to identify the key risk factors of SAP Business One implementation. The results reveal ten key risk factors: senior management leadership, project management, process improvement, implementation team organization, process analysis, basic data, personnel coordination, change management, secondary development, and data import. Focusing on these key risks, the interpretive structural modeling approach is used to study the relationships between the factors and establish a seven-level hierarchical structure. The study shows that the structure is olive-shaped, with the risk of data import at the top and the risk of senior management at the bottom; these are the most important risk factors.
The interpretive theory of translation (ITT) is a school of theory that originated in France in the late 1960s, focusing on the theory and teaching of interpreting and non-literary translation. ITT holds that what the translator should convey is not the meaning of the linguistic signs but the non-verbal sense. This paper briefly introduces ITT and analyzes several examples to show the situations in which ITT is useful and those in which it is unsuitable.
Interpretive theory posits three phases of interpretation: understanding, deverbalization, and re-expression. Interpretation requires both linguistic and non-linguistic knowledge. This essay discusses the application of interpretive theory to business interpreting from the perspectives of both theory and practice.
This paper explores the present-day teaching of interpreting, starting from the interpretive theory and its characteristics. The author argues that the theory is grounded mainly in the study of interpreting practice, and that its core concept, "deverbalization," has brought great strides and breakthroughs to translation theory. When we examine translation, or rather interpreting, from the dual perspective of language and culture, new thoughts emerge concerning both translation and the teaching of interpreting.
The ceramic relief mural is a contemporary landscape art, carefully designed around human nature, culture, and architectural wall space, combined with social customs, visual sensibility, and art; it may also become the main axis of ceramic art in the future. Taiwan's public ceramic relief murals (PCRM) are most distinctively represented by those pioneered by Pan-Hsiung Chu of Meinong Kiln in 1987. In addition to breaking through the limitations of traditional public ceramic murals, Chu leveraged local culture and sensibility, and the artistic theme gives PCRM a unique style and innovative value throughout the Taiwan region. This study analyzes the design image of public ceramic murals, taking the design and creation of Taiwan's PCRM as its scope. It applies STEEP analysis, in which the social, technological, economic, ecological, and political-legal environments are analyzed as core factors, and evaluates eight important factors in the artistic design image of ceramic murals. Interpretive structural modeling (ISM) is then used to establish five levels, analyze the four main problems in the core factor area and the four main target results in the affected factor area, and examine the problem points, target points, and their causal relationships. The aim is to sort out the relationships among these factors, obtain their hierarchical structure, and provide a reference basis and research methods.
Interpretive structural modeling (ISM) is an interactive process in which an ill-structured problem is organized into a comprehensive, systematic model. Despite the many advantages that ISM provides, the method has some shortcomings, the most important of which is its reliance on participants' intuition and judgment; this undermines its validity. To solve this problem and further enhance ISM, the present study proposes a method called equation structural modeling (ESM), which draws on the capacities of structural equation modeling (SEM). ESM provides a statistically verifiable framework while retaining a graphical, hierarchical, and intuitive model.
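The core mechanics of ISM, computing a reachability matrix from pairwise influence judgments and then peeling off hierarchy levels, can be sketched in a few lines of plain Python (the 4-factor influence matrix below is hypothetical, purely for illustration):

```python
def ism_levels(adj):
    """Partition elements into ISM hierarchy levels from a binary
    direct-influence matrix adj[i][j] = 1 meaning i influences j."""
    n = len(adj)
    # Reachability matrix: transitive closure with self-loops (Warshall).
    reach = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    remaining, levels = set(range(n)), []
    while remaining:
        level = []
        for i in remaining:
            reach_set = {j for j in remaining if reach[i][j]}
            ante_set = {j for j in remaining if reach[j][i]}
            # Top-level elements: reachability set covered by antecedent set.
            if reach_set <= ante_set:
                level.append(i)
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

# Hypothetical 4-factor system: factor 3 drives 2, which drives 0 and 1.
adj = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 1, 0]]
print(ism_levels(adj))  # [[0, 1], [2], [3]]
```

Iterating the level extraction yields the top-down hierarchy that ISM diagrams depict, with root drivers (here factor 3) at the bottom level.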
This paper outlines a diagnostic approach to quantifying the maintainability of a commercial off-the-shelf (COTS)-based system by analyzing the complexity of the deployment of its components. Interpretive structural modeling (ISM) is used to identify and understand interdependencies among COTS components and how they affect the complexity of maintaining the COTS-based system (CBS). Through ISM analysis we determined which components in the CBS contribute most significantly to the system's complexity. With the ISM, architects, system integrators, and system maintainers can isolate the COTS products that cause the most complexity, and therefore the most maintenance effort, and take precautions to change those products only when necessary or during major maintenance efforts. The analysis also clearly shows the components that can be easily replaced or upgraded with very little impact on the rest of the system.
Most predictive maintenance studies have emphasized accuracy while giving little attention to interpretability or deployment readiness. This study improves on prior methods by developing a compact yet robust system that predicts when turbofan engines will fail. It uses the NASA CMAPSS dataset, which contains over 200,000 engine cycles from 260 engines. The process begins with systematic preprocessing, including imputation, outlier removal, scaling, and labelling of the remaining useful life. Dimensionality is reduced using a hybrid selection method that combines variance filtering, recursive elimination, and gradient-boosted importance scores, yielding a stable set of 10 informative sensors. To mitigate class imbalance, minority cases are oversampled and class-weighted losses are applied during training. Benchmarking is carried out with logistic regression, gradient boosting, and a recurrent design that integrates gated recurrent units with long short-term memory networks. The Long Short-Term Memory–Gated Recurrent Unit (LSTM–GRU) hybrid achieved the strongest performance, with an F1-score of 0.92, precision of 0.93, recall of 0.91, Receiver Operating Characteristic–Area Under the Curve (ROC-AUC) of 0.97, and minority recall of 0.75. Interpretability testing using permutation importance and Shapley values indicates that sensors 13, 15, and 11 are the most important indicators of engine wear. The proposed system combines imbalance handling, feature reduction, and interpretability into a practical design suitable for real industrial settings.
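Permutation importance, one of the two interpretability probes mentioned above, needs nothing beyond a trained model and a held-out set: shuffle one feature column and measure how far the score drops. A self-contained sketch with a toy two-feature "sensor" dataset (all names and data hypothetical):

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Mean drop in metric when one feature column is shuffled: a large
    drop means the model relies on that feature."""
    rng = random.Random(seed)
    base = metric(y, [model(row) for row in X])
    importances = []
    for col in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            shuffled = [row[col] for row in X]
            rng.shuffle(shuffled)
            Xp = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
            drops.append(base - metric(y, [model(row) for row in Xp]))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy data: the label depends only on feature 0; feature 1 is noise.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(500)]
y = [1 if row[0] > 0.5 else 0 for row in X]
model = lambda row: 1 if row[0] > 0.5 else 0   # "trained" threshold model
accuracy = lambda t, p: sum(a == b for a, b in zip(t, p)) / len(t)

imp = permutation_importance(model, X, y, accuracy)
print(imp)  # feature 0 shows a large accuracy drop; feature 1 shows none
```

Because the toy model never reads feature 1, its importance is exactly zero, while shuffling feature 0 roughly halves the accuracy, which is the pattern the study uses to rank sensors.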
Mortality prediction in respiratory health is challenging, especially when using large-scale clinical datasets composed primarily of categorical variables. Traditional digital twin (DT) frameworks often rely on longitudinal or sensor-based data, which are not always available in public health contexts. In this article, we propose a novel proto-DT framework for mortality prediction in respiratory health using a large-scale categorical biomedical dataset. The dataset contains 415,711 severe acute respiratory infection cases from the Brazilian Unified Health System, including both COVID-19 and non-COVID-19 patients. Four classification models, extreme gradient boosting (XGBoost), logistic regression, random forest, and a deep neural network (DNN), are trained using cost-sensitive learning to address class imbalance. The models are evaluated using accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC-ROC). The framework supports simulated interventions by modifying selected inputs and recalculating predicted mortality. Additionally, we incorporate multiple correspondence analysis and K-means clustering to explore model sensitivity. A Python library has been developed to ensure reproducibility. All models achieve AUC-ROC values near or above 0.85. XGBoost yields the highest accuracy (0.84), while the DNN achieves the highest recall (0.81). Scenario-based simulations reveal how key clinical factors, such as intensive care unit admission and oxygen support, affect predicted outcomes. The proposed proto-DT framework demonstrates the feasibility of mortality prediction and intervention simulation using categorical data alone, providing a foundation for data-driven, explainable DTs in public health even in the absence of time-series data.
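Cost-sensitive learning of the kind applied to all four models above can be reduced to a per-class weight that rescales the loss. A minimal sketch of inverse-frequency weighting and a weighted binary log-loss (pure Python; the 90/10 label split is illustrative):

```python
import math
from collections import Counter

def inverse_frequency_weights(labels):
    """w_c = N / (K * n_c): each class contributes equally in aggregate,
    so rare classes get proportionally larger per-sample weight."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * nc) for c, nc in counts.items()}

def weighted_log_loss(y_true, p_pred, weights):
    """Binary cross-entropy with each sample scaled by its class weight."""
    loss = 0.0
    for y, p in zip(y_true, p_pred):
        ll = math.log(p) if y == 1 else math.log(1 - p)
        loss -= weights[y] * ll
    return loss / len(y_true)

# 90/10 imbalanced labels: the minority class ends up with 9x the weight.
labels = [0] * 90 + [1] * 10
w = inverse_frequency_weights(labels)
print(w)  # minority weight 5.0, majority weight about 0.56
```

Under these weights a misclassified minority case costs the model nine times as much as a misclassified majority case, which is how cost-sensitive training counteracts imbalance without discarding data.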
Deep learning has become integral to robotics, particularly in tasks such as robotic grasping, where objects often exhibit diverse shapes, textures, and physical properties. Because of this diversity, frequent adjustments to the network architecture and parameters are required to avoid a drop in model accuracy, which presents a significant challenge for non-experts. Neural Architecture Search (NAS) offers a compelling alternative through the automated generation of network architectures, enabling the discovery of high-accuracy models via efficient search algorithms. Compared with manually designed networks, NAS methods can significantly reduce design cost and time while improving model performance. However, such methods often involve complex topological connections, and these redundant structures can severely reduce computational efficiency. To overcome this challenge, this work puts forward a robotic grasp detection framework founded on NAS. The method automatically designs a lightweight network with high accuracy and low topological complexity, effectively adapting to the target object to generate the optimal grasp pose and thereby significantly improving the success rate of robotic grasping. Additionally, we use Class Activation Mapping (CAM) as an interpretability tool, capturing sensitive information during the perception process through visualized results. The searched model achieved competitive, and in some cases superior, performance on the Cornell and Jacquard public datasets, reaching accuracies of 98.3% and 96.8%, respectively, while sustaining a detection speed of 89 frames per second with only 0.41 million parameters. To further validate its effectiveness beyond benchmark evaluations, we conducted real-world grasping experiments on a UR5 robotic arm, where the model demonstrated reliable performance across diverse objects and high grasp success rates, confirming its practical applicability in robotic manipulation tasks.
The integration of machine learning (ML) into geohazard assessment has instigated a paradigm shift, producing models with a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field. The analysis reveals that current research efforts concentrate predominantly on landslide and flood assessment. Methodologically, tree-based ensembles and deep learning models dominate the literature, with SHapley Additive exPlanations (SHAP) frequently adopted as the principal post-hoc explanation technique. More importantly, the review documents how the role of XAI has shifted: rather than being used solely to interpret models after training, it is increasingly integrated into the modeling cycle itself, with recent applications in feature selection, adaptive sampling strategies, and model evaluation. The evidence also shows that GeoXAI extends beyond producing feature rankings: it reveals nonlinear thresholds and interaction effects that generate deeper mechanistic insights into hazard processes. Nevertheless, several key challenges remain unresolved, particularly the need for interpretation stability, the demanding task of reliably distinguishing correlation from causation, and the development of appropriate methods for treating complex spatio-temporal dynamics.
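SHAP's underlying quantity, the Shapley value, is worth seeing computed exactly on a model small enough to enumerate. The sketch below (pure Python; the additive three-feature "susceptibility model" is hypothetical) replaces "absent" features with a baseline and averages each feature's marginal contribution over all coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for a small feature set: features outside
    the coalition are replaced by their baseline value."""
    n = len(x)
    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Shapley kernel weight |S|!(n-|S|-1)!/n!
                wgt = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += wgt * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Additive toy model: the attributions recover each term exactly.
f = lambda z: 3 * z[0] + 2 * z[1] + z[2]
phi = shapley_values(f, x=[1, 1, 1], baseline=[0, 0, 0])
print(phi)  # approximately [3.0, 2.0, 1.0]
```

The attributions always sum to f(x) minus f(baseline), the efficiency property that makes SHAP rankings comparable across samples; libraries such as SHAP approximate this enumeration for models with many features.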
Landslide susceptibility mapping (LSM) is an essential tool for mitigating the escalating global risk of landslides. However, challenges such as the heterogeneity of landslide triggers, reactivation exacerbated by extensive engineering activities, and the limited interpretability of data-driven models have hindered the practical application of LSM. This work proposes a novel framework for enhancing LSM that considers the different triggers of accumulation and rock landslides, leveraging interpretable machine learning and multi-temporal interferometric synthetic aperture radar (MT-InSAR) technology. Initially, a refined field investigation was conducted to delineate the accumulation and rock areas according to landslide type, leading to the identification of the relevant contributing factors. Deformation along the slope was then combined with time-series analysis to derive a landslide activity level (AL) index that indicates the likelihood of reactivation or dormancy. The SHapley Additive exPlanations (SHAP) technique facilitated the interpretation of factors and the identification of determinants in high-susceptibility areas. The results indicate that random forest (RF) outperformed the other models in both accumulation and rock areas. Key factors, including thickness and weak intercalation, were identified for accumulation and rock landslides. The introduction of AL substantially enhanced the predictive capability of the LSM and outperformed models that neglect movement trends or deformation rates, with an average ratio of 81.23% in high-susceptibility zones. Field validation confirmed that 83.8% of newly identified landslides were correctly upgraded. Given its efficiency and operational simplicity, the proposed hybrid model opens new avenues for enhancing LSM in urban settlements worldwide.
Heart disease remains a leading cause of mortality worldwide, underscoring the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing deep learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the Personal Key Indicators of Heart Disease dataset, the localized random affine shadow sampling technique is employed, which enhances minority-class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
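The 10-fold cross-validation mentioned above is normally stratified on imbalanced clinical data, so the minority class appears in every fold. A minimal fold-assignment sketch (pure Python; the 70/30 label split is illustrative):

```python
from collections import defaultdict

def stratified_folds(labels, k=10):
    """Assign sample indices to k folds so each fold keeps roughly the
    overall class proportions (round-robin within each class)."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

labels = [1] * 30 + [0] * 70   # imbalanced binary labels
folds = stratified_folds(labels, k=10)
# Every fold holds 10 samples: 3 positives and 7 negatives.
print([sum(labels[i] for i in fold) for fold in folds])
```

Each held-out fold then reproduces the 30% positive rate of the full dataset, so per-fold metrics such as recall stay comparable across the 10 runs.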
1. Introduction
Artificial intelligence (AI) is rapidly reshaping geoscience, from Earth observation interpretation and hazard forecasting to subsurface characterisation and Earth system modelling (Kochupillai et al., 2022; Sun et al., 2024). These capabilities emerge at a time when geoscientific evidence is increasingly informing high-stakes decisions about climate adaptation, resource development, and disaster risk reduction (McGovern et al., 2022).
Geological prospecting and the identification of adverse geological features are essential in tunnel construction, providing critical information to ensure safety and guide engineering decisions. As tunnel projects extend into deeper and more mountainous terrain, engineers face increasingly complex geological conditions, including high water pressure, intense geo-stress, elevated geothermal gradients, and active fault zones. These conditions pose substantial risks such as high-pressure water inrush, large-scale collapse, and tunnel boring machine (TBM) blockage. Addressing these challenges requires advanced detection technologies capable of long-distance, high-precision, and intelligent assessment of adverse geology. This paper presents a comprehensive review of recent advancements in tunnel geological ahead-prospecting methods, summarizing the fundamental principles, technical maturity, key challenges, development trends, and real-world applications of various detection techniques. Airborne and semi-airborne geophysical methods enable large-scale reconnaissance for initial surveys in complex terrain. Tunnel- and borehole-based approaches offer high-resolution detection during excavation, including seismic ahead prospecting (SAP), TBM rock-breaking source seismic methods, full-time-domain tunnel induced polarization (TIP), borehole electrical resistivity, and ground penetrating radar (GPR). To address scenarios involving multiple coexisting adverse geologies, intelligent inversion and geological identification methods have been developed based on multi-source data fusion and artificial intelligence (AI) techniques. Overall, these advances significantly improve detection range, resolution, and geological characterization capability; the methods demonstrate strong adaptability to complex environments and provide reliable subsurface information, supporting safer and more efficient tunnel construction.
Artificial intelligence (AI) is changing healthcare by assisting with diagnosis. However, for doctors to trust AI tools, the tools need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. To this end, we combined several different models, including random forest, XGBoost, and neural networks, into a single, more powerful framework. We used two types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset with image, audio, and physiological information. The datasets were carefully preprocessed for missing values, redundant features, and class imbalance to ensure fair learning. The results outperformed the state of the art, with a regularized neural network achieving 97.6% accuracy on the behavioral data and 98.2% on the multimodal data; the other models also performed well, with accuracies consistently above 96%. We also applied SHAP and LIME to the behavioral dataset for model explainability.
Environmental monitoring systems based on remote sensing technology have a wide monitoring range and long timeliness, which makes them widely used in the detection and management of pollution sources. However, haze degrades image quality and reduces the precision of environmental monitoring systems. To address this problem, this research proposes a remote sensing image dehazing method based on the atmospheric scattering model and a dark channel prior constrained network. The method consists of a dehazing network, a dark channel information injection network (DCIIN), and a transmission map network. Within the dehazing network, a branch fusion module optimizes feature weights to enhance the dehazing effect. By leveraging dark channel information, the DCIIN enables high-quality estimation of the atmospheric veil. To ensure that the output of the deep learning model aligns with physical laws, we reconstruct the haze image from the prediction results of the three networks and apply both a traditional loss function and a dark channel loss function between the reconstructed and original haze images. This approach enhances interpretability and reliability while maintaining adherence to physical principles. Furthermore, the network is trained on a synthesized non-homogeneous haze remote sensing dataset built using dark channel information from cloud maps. Experimental results show that the proposed network achieves better dehazing on both synthetic and real remote sensing images with non-homogeneous haze distributions. This research offers a new approach to the problem of reduced environmental monitoring accuracy under haze conditions and has strong practical value.
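The dark channel prior that constrains the network above is simple to state: in haze-free outdoor patches, the local minimum over all color channels tends toward zero, while a haze layer lifts it. A minimal NumPy sketch of the standard construction (synthetic data; the patch size and haze parameters are illustrative):

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an RGB image: per-pixel minimum over the color
    channels, then a local minimum filter over a patch x patch window."""
    min_rgb = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# Synthetic check: a clear scene has a near-zero dark channel, while a
# hazy version I = t*J + (1 - t)*A (here t = 0.6, airlight A = 1) is
# bounded below by (1 - t)*A = 0.4.
rng = np.random.default_rng(0)
clear = rng.uniform(0.0, 1.0, size=(32, 32, 3))
hazy = 0.6 * clear + 0.4
print(dark_channel(clear).mean() < dark_channel(hazy).mean())  # True
```

Because the hazy image's dark channel is bounded below by the atmospheric-veil term, it carries direct information about the veil, which is what dark-channel-based estimation exploits and what the dark channel loss enforces.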
Funding: Supported by the National Natural Science Foundation of China (61473026, 61104131) and the Fundamental Research Funds for the Central Universities (JD1413).
文摘Alarm flood is one of the main problems in the alarm systems of industrial process. Alarm root-cause analysis and alarm prioritization are good for alarm flood reduction. This paper proposes a systematic rationalization method for multivariate correlated alarms to realize the root cause analysis and alarm prioritization. An information fusion based interpretive structural model is constructed according to the data-driven partial correlation coefficient calculation and process knowledge modification. This hierarchical multi-layer model is helpful in abnormality propagation path identification and root-cause analysis. Revised Likert scale method is adopted to determine the alarm priority and reduce the blindness of alarm handling. As a case study, the Tennessee Eastman process is utilized to show the effectiveness and validity of proposed approach. Alarm system performance comparison shows that our rationalization methodology can reduce the alarm flood to some extent and improve the performance.
文摘Through the collection of related literature,we point out the six major factors influencing China's forestry enterprises' financing: insufficient national support; regulations and institutional environmental factors; narrow channels of financing; inappropriate existing mortgagebacked approach; forestry production characteristics; forestry enterprises' defects. Then,we use interpretive structural modeling( ISM) from System Engineering to analyze the structure of the six factors and set up ladder-type structure. We put three factors including forestry production characteristics,shortcomings of forestry enterprises and regulatory,institutional and environmental factors as basic factors and put other three factors as important factors. From the perspective of the government and enterprises,we put forward some personal advices and ideas based on the basic factors and important factors to ease the financing difficulties of forestry enterprises.
文摘The possible risk factors during SAP Business One implementation were studied with depth interview. The results are then adjusted by experts. 20 categories of risk factors that are totally 49 factors were found. Based on the risk factors during the SAP Business One implementation, questionnaire was used to study the key risk factors of SAP Business One implementation. Results illustrate ten key risk factors, these are risk of senior managers leadership, risk of project management, risk of process improvement, risk of implementation team organization, risk of process analysis, risk of based data, risk of personnel coordination, risk of change management, risk of secondary development, and risk of data import. Focus on the key risks of SAP Business One implementation, the interpretative structural modeling approach is used to study the relationship between these factors and establish a seven-level hierarchical structure. The study illustrates that the structure is olive-like, in which the risk of data import is on the top, and the risk of senior managers is on the bottom. They are the most important risk factors.
文摘The interpretive theory of translation(ITT) is a school of theory originated in the late 1960 s in France,focusing on the discussion of the theory and teaching of interpreting and non-literary translation. ITT believes that what the translator should convey is not the meaning of linguistic notation,but the non-verbal sense. In this paper,the author is going to briefly introduce ITT and analyze several examples to show different situations where ITT is either useful or unsuitable.
文摘Interpretive theory brings forward three phases of interpretation: understanding, deverberlization and re-expression. It needs linguistic knowledge and non-linguistic knowledge. This essay discusses application of interpretive theory to business interpretation from the perspective of theory and practice.
Abstract: This paper explores the teaching of interpreting today, starting from the interpretive theory and its characteristics. The author believes the theory is mainly based on the study of interpreting practice; its core concept, "deverbalization," has made great strides and breakthroughs in translation theory. When we examine translation, or rather interpreting, once again from the dual perspective of language and culture, new thoughts emerge concerning both translation and the teaching of interpreting.
Abstract: The ceramic relief mural is a contemporary landscape art, carefully designed around human nature, culture, and architectural wall space, and combined with social customs, visual sensibility, and art; it may also become the main axis of ceramic art in the future. Taiwan's public ceramic relief murals (PCRM) are most distinctive, with PCRM pioneered by Pan-Hsiung Chu of Meinong Kiln in 1987. In addition to breaking through the limitations of traditional public ceramic murals, Chu leveraged local culture and sensibility; the artistic theme gives PCRM a unique style and innovative value throughout the Taiwan region. This study analyzes the design image of public ceramic murals, taking the design and creation of Taiwan PCRM as its scope. It applies STEEP analysis, examining the social, technological, economic, ecological, and political-legal environments as core factors, and evaluates eight important factors in the artistic design image of ceramic murals. Interpretive structural modeling (ISM) is then used to establish five levels, to analyze the four main problems in the core factor area and the four main target results in the affected factor area, and to analyze the problem points, target points, and their causal relationships. The aim is to sort out the relationships among these factors, obtain their hierarchical structure, and provide a reference basis and research method.
Abstract: Interpretive structural modeling (ISM) is an interactive process in which an ill-structured problem is organized into a comprehensive systematic model. Yet despite its many advantages, ISM has some shortcomings, the most important of which is its reliance on participants' intuition and judgment, a problem that undermines its validity. To solve this problem and further enhance ISM, the present study proposes a method called equation structural modeling (ESM), which draws on the capacities of structural equation modeling (SEM). ESM thus provides a statistically verifiable framework together with a graphical, hierarchical, and intuitive model.
Abstract: This paper outlines a diagnostic approach to quantifying the maintainability of a commercial off-the-shelf (COTS)-based system by analyzing the complexity of the deployment of its components. Interpretive structural modeling (ISM) is used to demonstrate how it supports identifying and understanding interdependencies among COTS components and how those interdependencies affect the complexity of maintaining the COTS-based system (CBS). Through ISM analysis we determine which components in the CBS contribute most significantly to system complexity. With the ISM, architects, system integrators, and system maintainers can isolate the COTS products that cause the most complexity, and therefore the most maintenance effort, and take precautions to change those products only when necessary or during major maintenance efforts. The analysis also clearly shows the components that can be easily replaced or upgraded with very little impact on the rest of the system.
Funding: Supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant No. KFU253765).
Abstract: Most predictive maintenance studies have emphasized accuracy while paying little attention to interpretability or deployment readiness. This study improves on prior methods by developing a small yet robust system that predicts when turbofan engines will fail. It uses the NASA CMAPSS dataset, which contains over 200,000 engine cycles from 260 engines. The process begins with systematic preprocessing, including imputation, outlier removal, scaling, and labelling of the remaining useful life. Dimensionality is reduced using a hybrid selection method that combines variance filtering, recursive elimination, and gradient-boosted importance scores, yielding a stable set of 10 informative sensors. To mitigate class imbalance, minority cases are oversampled and class-weighted losses are applied during training. Benchmarking is carried out with logistic regression, gradient boosting, and a recurrent design that integrates gated recurrent units with long short-term memory networks. The Long Short-Term Memory–Gated Recurrent Unit (LSTM–GRU) hybrid achieved the strongest performance, with an F1 score of 0.92, precision of 0.93, recall of 0.91, Receiver Operating Characteristic–Area Under the Curve (ROC-AUC) of 0.97, and minority recall of 0.75. Interpretability testing using permutation importance and Shapley values indicates that sensors 13, 15, and 11 are the most important indicators of engine wear. The proposed system combines imbalance handling, feature reduction, and interpretability into a practical design suitable for real industrial settings.
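The class-weighted losses mentioned above typically scale each class's contribution by its inverse frequency. A minimal sketch of that weighting, assuming the common n_samples / (n_classes × class_count) scheme rather than the paper's exact formulation:

```python
# Inverse-frequency class weights for imbalanced failure labels
# (illustrative scheme; the study's exact weighting is not specified).

from collections import Counter

def class_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so rare classes contribute proportionally more to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# 9 healthy cycles vs. 3 near-failure cycles.
y = [0] * 9 + [1] * 3
w = class_weights(y)
print(w)  # class 1 (rare) gets triple the weight of class 0
```

These weights are then passed to the loss (e.g. as per-sample multipliers) so that misclassifying a rare near-failure cycle costs more than misclassifying a healthy one.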
Abstract: Mortality prediction in respiratory health is challenging, especially when using large-scale clinical datasets composed primarily of categorical variables. Traditional digital twin (DT) frameworks often rely on longitudinal or sensor-based data, which are not always available in public health contexts. In this article, we propose a novel proto-DT framework for mortality prediction in respiratory health using a large-scale categorical biomedical dataset containing 415,711 severe acute respiratory infection cases from the Brazilian Unified Health System, including both COVID-19 and non-COVID-19 patients. Four classification models, extreme gradient boosting (XGBoost), logistic regression, random forest, and a deep neural network (DNN), are trained using cost-sensitive learning to address class imbalance. The models are evaluated using accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC-ROC). The framework supports simulated interventions by modifying selected inputs and recalculating the predicted mortality. Additionally, we incorporate multiple correspondence analysis and K-means clustering to explore model sensitivity, and a Python library has been developed to ensure reproducibility. All models achieve AUC-ROC values near or above 0.85. XGBoost yields the highest accuracy (0.84), while the DNN achieves the highest recall (0.81). Scenario-based simulations reveal how key clinical factors, such as intensive care unit admission and oxygen support, affect predicted outcomes. The proposed proto-DT framework demonstrates the feasibility of mortality prediction and intervention simulation using categorical data alone, providing a foundation for data-driven, explainable DTs in public health, even in the absence of time-series data.
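The "simulated intervention" idea — flip a categorical input and recompute the predicted mortality — can be sketched with a toy logistic model. The coefficients and feature names below are invented for illustration; they are not from the paper's fitted models.

```python
# Toy scenario-based simulation: compare predicted mortality with and
# without a clinical factor (e.g. ICU admission). Coefficients are
# hypothetical placeholders, not fitted values.

import math

def predict_mortality(x, coefs, bias):
    """Logistic model over binary categorical indicators."""
    z = bias + sum(coefs[k] * x[k] for k in coefs)
    return 1.0 / (1.0 + math.exp(-z))

coefs = {"icu": 1.2, "oxygen_support": 0.8, "age_over_60": 0.9}
bias = -2.0

patient = {"icu": 1, "oxygen_support": 1, "age_over_60": 1}
baseline = predict_mortality(patient, coefs, bias)

# Simulated intervention: same patient, but without ICU admission.
counterfactual = dict(patient, icu=0)
intervened = predict_mortality(counterfactual, coefs, bias)

print(f"baseline={baseline:.3f}, without ICU={intervened:.3f}")
```

With only categorical inputs, this kind of input-toggling is what lets the proto-DT explore "what if" scenarios without any time-series state.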
Funding: Funded by the Guangdong Basic and Applied Basic Research Foundation (2023B1515120064) and the National Natural Science Foundation of China (62273097).
Abstract: Deep learning has become integral to robotics, particularly in tasks such as robotic grasping, where objects often exhibit diverse shapes, textures, and physical properties. Because of the diverse characteristics of the targets, frequent adjustments to the network architecture and parameters are required to avoid a decrease in model accuracy, which presents a significant challenge for non-experts. Neural Architecture Search (NAS) offers a compelling alternative through the automated generation of network architectures, enabling the discovery of models that achieve high accuracy through efficient search algorithms. Compared to manually designed networks, NAS methods can significantly reduce design costs and time expenditure while improving model performance. However, such methods often involve complex topological connections, and these redundant structures can severely reduce computational efficiency. To overcome this challenge, this work puts forward a robotic grasp detection framework founded on NAS. The method automatically designs a lightweight network with high accuracy and low topological complexity, effectively adapting to the target object to generate the optimal grasp pose and thereby significantly improving the success rate of robotic grasping. Additionally, we use Class Activation Mapping (CAM) as an interpretability tool, capturing sensitive information during the perception process through visualized results. The searched model achieved competitive, and in some cases superior, performance on the Cornell and Jacquard public datasets, with accuracies of 98.3% and 96.8%, respectively, while sustaining a detection speed of 89 frames per second with only 0.41 million parameters. To further validate its effectiveness beyond benchmark evaluations, we conducted real-world grasping experiments on a UR5 robotic arm, where the model demonstrated reliable performance across diverse objects and high grasp success rates, confirming its practical applicability in robotic manipulation tasks.
Abstract: The integration of machine learning (ML) into geohazard assessment has instigated a paradigm shift, producing models with a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field. The analysis reveals that current research efforts are concentrated predominantly on landslide and flood assessment. Methodologically, tree-based ensembles and deep learning models dominate the literature, with SHapley Additive exPlanations (SHAP) frequently adopted as the principal post-hoc explanation technique. More importantly, the review documents how the role of XAI has shifted: rather than being used solely as a tool for interpreting models after training, it is increasingly integrated into the modeling cycle itself, with recent applications in feature selection, adaptive sampling strategies, and model evaluation. The evidence also shows that GeoXAI extends beyond producing feature rankings: it reveals nonlinear thresholds and interaction effects that yield deeper mechanistic insights into hazard processes. Nevertheless, several key challenges remain unresolved, particularly the need for interpretation stability, the difficulty of reliably distinguishing correlation from causation, and the development of appropriate methods for treating complex spatio-temporal dynamics.
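Permutation importance, one of the model-agnostic post-hoc techniques in the XAI family surveyed above, can be sketched compactly: shuffle one feature column and measure how much a model's score drops. The "susceptibility model" and data below are toy stand-ins, not from any cited study.

```python
# Minimal permutation-importance sketch: the score drop after shuffling
# a feature column measures how much the model relies on that feature.

import random

def permutation_importance(model, X, y, col, metric, n_repeats=10, seed=0):
    """Mean drop in score after shuffling one feature column."""
    rng = random.Random(seed)
    base = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]
        values = [row[col] for row in shuffled]
        rng.shuffle(values)
        for row, v in zip(shuffled, values):
            row[col] = v
        drops.append(base - metric(y, [model(row) for row in shuffled]))
    return sum(drops) / n_repeats

def accuracy(y_true, y_pred):
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

# Toy "susceptibility model": hazard whenever slope (feature 0) > 0.5;
# feature 1 is pure noise the model ignores.
model = lambda row: int(row[0] > 0.5)
X = [[i / 9, random.Random(i).random()] for i in range(10)]
y = [int(row[0] > 0.5) for row in X]

print(permutation_importance(model, X, y, col=0, metric=accuracy))  # clearly > 0
print(permutation_importance(model, X, y, col=1, metric=accuracy))  # exactly 0.0
```

SHAP attributions pursue the same goal with stronger theoretical guarantees (additivity, consistency), which is one reason the review finds SHAP dominating the GeoXAI literature.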
Funding: Supported by the National Key R&D Program of China (Grant No. 2023YFC3007201), the National Natural Science Foundation of China (Grant No. 42377161), and the Opening Fund of the Key Laboratory of Geological Survey and Evaluation of the Ministry of Education (Grant No. GLAB 2024ZR03).
Abstract: Landslide susceptibility mapping (LSM) is an essential tool for mitigating the escalating global risk of landslides. However, challenges such as the heterogeneity of different landslide triggers, reactivation exacerbated by extensive engineering activities, and the interpretability of data-driven models have hindered the practical application of LSM. This work proposes a novel framework for enhancing LSM that considers the different triggers of accumulation and rock landslides, leveraging interpretable machine learning and multi-temporal interferometric synthetic aperture radar (MT-InSAR) technology. Initially, a refined field investigation was conducted to delineate the accumulation and rock areas according to landslide type, leading to the identification of the relevant contributing factors. Deformation along the slope was then combined with time-series analysis to derive a landslide activity level (AL) index that recognizes the likelihood of reactivation or dormancy. The SHapley Additive exPlanation (SHAP) technique facilitated the interpretation of factors and the identification of determinants in high-susceptibility areas. The results indicate that random forest (RF) outperformed the other models in both accumulation and rock areas. Key factors, including thickness and weak intercalation, were identified for accumulation and rock landslides. The introduction of AL substantially enhanced the predictive capability of the LSM and outperformed models that neglect movement trends or deformation rates, with an average ratio of 81.23% in high-susceptibility zones. Field validation confirmed that 83.8% of newly identified landslides were correctly upgraded. Given its efficiency and operational simplicity, the proposed hybrid model opens new avenues for enhancing LSM in urban settlements worldwide.
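One simple way to turn a deformation time series into an activity label, as the AL index does conceptually, is to fit a linear deformation rate and threshold it. The abstract does not give the actual AL definition, so the threshold and classification below are purely hypothetical.

```python
# Hypothetical activity-level labelling from an InSAR-style deformation
# time series: least-squares slope (mm/yr), thresholded into
# 'active' vs. 'dormant'. Not the paper's actual AL index.

def activity_level(dates_years, deform_mm, active_thresh=10.0):
    """Return (label, rate) where rate is the fitted deformation rate."""
    n = len(dates_years)
    mx = sum(dates_years) / n
    my = sum(deform_mm) / n
    num = sum((x - mx) * (y - my) for x, y in zip(dates_years, deform_mm))
    den = sum((x - mx) ** 2 for x in dates_years)
    rate = num / den  # mm per year
    return ("active" if abs(rate) >= active_thresh else "dormant"), rate

# Monthly acquisitions over two years, slope moving steadily downhill.
t = [i / 12 for i in range(24)]
d = [-24.0 * x for x in t]  # -24 mm/yr
print(activity_level(t, d))  # ('active', -24.0)
```

In the framework above, such a label would then enter the susceptibility model as an additional factor, which is how AL "upgrades" zones whose static factors alone look benign.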
Funding: Funded by the Ongoing Research Funding Program (Project No. ORF-2025-648), King Saud University, Riyadh, Saudi Arabia.
Abstract: Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing deep learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the personal key indicators of the heart disease dataset, the localized random affine shadow sampling technique is employed, which enhances minority-class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering insight into the contribution of individual features to the model's predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
Funding: Supported by the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20240937), the Natural Science Foundation of Shandong Province (Grant No. ZR2021QE187), the Shandong Higher Education "Young Entrepreneurship Talents Introduction and Cultivation Program" Project (Grant No. ZXQT20221228001), the Natural Science Foundation of China (Grant No. 42502273), and the Science and Technology Innovation Program of Hunan Province (Grant No. 2022RC4028).
Abstract: Artificial intelligence (AI) is rapidly reshaping geoscience, from Earth observation interpretation and hazard forecasting to subsurface characterisation and Earth system modelling (Kochupillai et al., 2022; Sun et al., 2024). These capabilities emerge at a time when geoscientific evidence is increasingly informing high-stakes decisions about climate adaptation, resource development, and disaster risk reduction (McGovern et al., 2022).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52021005, 52325904, and 51991391).
Abstract: Geological prospecting and the identification of adverse geological features are essential in tunnel construction, providing critical information to ensure safety and guide engineering decisions. As tunnel projects extend into deeper and more mountainous terrain, engineers face increasingly complex geological conditions, including high water pressure, intense geo-stress, elevated geothermal gradients, and active fault zones. These conditions pose substantial risks such as high-pressure water inrush, large-scale collapses, and tunnel boring machine (TBM) blockages. Addressing these challenges requires advanced detection technologies capable of long-distance, high-precision, and intelligent assessment of adverse geology. This paper presents a comprehensive review of recent advances in tunnel geological ahead-prospecting methods. It summarizes the fundamental principles, technical maturity, key challenges, development trends, and real-world applications of various detection techniques. Airborne and semi-airborne geophysical methods enable large-scale reconnaissance for initial surveys in complex terrain. Tunnel- and borehole-based approaches offer high-resolution detection during excavation, including seismic ahead prospecting (SAP), TBM rock-breaking-source seismic methods, full-time-domain tunnel induced polarization (TIP), borehole electrical resistivity, and ground-penetrating radar (GPR). To address scenarios involving multiple coexisting adverse geologies, intelligent inversion and geological identification methods have been developed based on multi-source data fusion and artificial intelligence (AI) techniques. Overall, these advances significantly improve detection range, resolution, and geological characterization capabilities. The methods demonstrate strong adaptability to complex environments and provide reliable subsurface information, supporting safer and more efficient tunnel construction.
Funding: The authors thank the King Salman Center for Disability Research for funding this work through Research Group No. KSRG-2024-050.
Abstract: Artificial intelligence (AI) is changing healthcare by assisting with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. To this end, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset with image, audio, and physiological information. The datasets were carefully preprocessed to handle missing values, redundant features, and class imbalance, ensuring fair learning. The results outperformed the state of the art, with a regularized neural network achieving 97.6% accuracy on the behavioral data and 98.2% on the multimodal data. The other models also performed well, with accuracies consistently above 96%. We also applied SHAP and LIME to the behavioral dataset for model explainability.
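One common way to combine several base models into "a single, more powerful framework" is soft voting: average the per-class probabilities and take the argmax. The model names below mirror the abstract, but the probability values are invented placeholders, not outputs of trained models.

```python
# Illustrative soft-voting ensemble over per-class probabilities.

def soft_vote(prob_lists):
    """Average per-class probabilities across models; return (class, averages)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# [P(no ASD), P(ASD)] from three hypothetical base models for one child.
random_forest = [0.30, 0.70]
xgboost = [0.45, 0.55]
neural_net = [0.20, 0.80]

label, avg = soft_vote([random_forest, xgboost, neural_net])
print(label, avg)  # 1, averaged probabilities ~[0.317, 0.683]
```

Soft voting preserves each model's confidence, which usually beats hard (majority) voting when the base models are well calibrated; the paper's exact fusion rule may differ.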
Funding: Supported by the National Natural Science Foundation of China (No. 51605054).
Abstract: Environmental monitoring systems based on remote sensing technology have a wider monitoring range and longer timeliness, which makes them widely used in the detection and management of pollution sources. However, haze degrades image quality and reduces the precision of environmental monitoring systems. To address this problem, this research proposes a remote sensing image dehazing method based on the atmospheric scattering model and a dark channel prior constrained network. The method consists of a dehazing network, a dark channel information injection network (DCIIN), and a transmission map network. Within the dehazing network, a branch fusion module optimizes feature weights to enhance the dehazing effect. By leveraging dark channel information, the DCIIN enables high-quality estimation of the atmospheric veil. To ensure that the output of the deep learning model aligns with physical laws, we reconstruct the haze image from the prediction results of the three networks and then apply the traditional loss function and a dark channel loss function between the reconstructed and original haze images. This approach enhances interpretability and reliability while maintaining adherence to physical principles. Furthermore, the network is trained on a synthesized non-homogeneous haze remote sensing dataset built using dark channel information from cloud maps. Experimental results show that the proposed network achieves better dehazing on both synthetic and real remote sensing images with non-homogeneous haze distributions. This research offers a new approach to the reduced accuracy of environmental monitoring systems under haze conditions and has strong practicability.
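The atmospheric scattering model underlying this kind of dehazing is I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy observation, J the clear scene, t the transmission, and A the atmospheric light. Given estimates of t and A (which the paper's networks predict), J follows by inversion; the single-pixel values below are toys for illustration.

```python
# Invert the atmospheric scattering model I = J*t + A*(1 - t) per pixel.

def dehaze_pixel(I, t, A, t_min=0.1):
    """Recover the clear radiance J; clamp t to avoid blow-up in dense haze."""
    t = max(t, t_min)
    return (I - A * (1.0 - t)) / t

# Hazy intensity 0.7, transmission 0.5, atmospheric light 0.9:
J = dehaze_pixel(0.7, 0.5, 0.9)
print(J)  # 0.5 — the haze-free radiance

# Forward check: re-synthesizing the hazy value from J, t, A recovers I,
# which is exactly the reconstruction consistency the paper's loss enforces.
assert abs(J * 0.5 + 0.9 * (1.0 - 0.5) - 0.7) < 1e-9
```

The dark channel prior supplies the t and A estimates in classical methods; here the networks predict them, and the reconstruction loss ties their outputs back to this physical equation.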