Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance and optimization. Although existing FS techniques can yield high-performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for the identification of appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data, a challenge that is increasingly salient in modern artificial intelligence (AI) applications. In response, this study introduces an innovative, automated statistical method termed Farea Similarity for Feature Selection (FSFS). The FSFS approach computes a similarity metric for each feature by benchmarking it against the record-wise mean, thereby capturing feature dependencies and mitigating the influence of outliers that could otherwise distort evaluation outcomes. Features are subsequently ranked according to their similarity scores, with the threshold established at the average similarity score. Notably, lower FSFS values indicate higher similarity and stronger data correlations, whereas higher values suggest lower similarity. The FSFS method is designed not only to yield reliable evaluation metrics but also to reduce data complexity without compromising model performance. Comparative analyses were performed against several established techniques, including Chi-squared (CS), Correlation Coefficient (CC), Genetic Algorithm (GA), Exhaustive Approach, Greedy Stepwise Approach, Gain Ratio, and Filtered Subset Eval, using a variety of datasets such as the Experimental Dataset, Breast Cancer Wisconsin (Original), KDD CUP 1999, NSL-KDD, UNSW-NB15, and Edge-IIoT. In the absence of the FSFS method, the highest classifier accuracies observed were 60.00%, 95.13%, 97.02%, 98.17%, 95.86%, and 94.62% for the respective datasets. When the FSFS technique was integrated with data normalization, encoding, balancing, and feature importance selection processes, accuracies improved to 100.00%, 97.81%, 98.63%, 98.94%, 94.27%, and 98.46%, respectively. The FSFS method, with a computational complexity of O(fn log n), demonstrates robust scalability and is well-suited for large datasets, ensuring efficient processing even when the number of features is substantial. By automatically eliminating outliers and redundant data, FSFS reduces computational overhead, resulting in faster training and improved model performance. Overall, the FSFS framework not only optimizes performance but also enhances the interpretability and explainability of data-driven models, thereby facilitating more trustworthy decision-making in AI applications.
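The scoring-and-threshold procedure described in the abstract can be sketched in a few lines. The exact similarity formula of the published FSFS method is not given here, so the sketch below assumes a simple proxy: each feature is scored by its average absolute deviation from the record-wise mean (lower score = more similar), and features at or below the mean score are kept.

```python
def fsfs_select(X):
    """Toy FSFS-style selection over a list-of-rows dataset X.

    Assumption: similarity is proxied by average absolute deviation from
    the record-wise mean; the published FSFS metric may differ.
    """
    n = len(X)        # number of records
    f = len(X[0])     # number of features
    row_means = [sum(row) / f for row in X]
    # Lower score means the feature tracks the record-wise mean closely.
    scores = [sum(abs(X[i][j] - row_means[i]) for i in range(n)) / n
              for j in range(f)]
    threshold = sum(scores) / f   # threshold at the average score
    selected = [j for j, s in enumerate(scores) if s <= threshold]
    return scores, threshold, selected
```

On a toy dataset where the third feature deviates strongly from the record-wise mean, only the first two features survive the threshold.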
Cross-domain routing in Integrated Heterogeneous Networks (Inte-HetNet) should ensure efficient and secure data transmission across different network domains by satisfying diverse routing requirements. However, current solutions face numerous challenges in continuously ensuring trustworthy routing, fulfilling diverse requirements, achieving reasonable resource allocation, and safeguarding against malicious behaviors of network operators. We propose CrowdRouting, a novel cross-domain routing scheme based on crowdsourcing, dedicated to establishing sustained trust in cross-domain routing, comprehensively considering and fulfilling various customized routing requirements, while ensuring reasonable resource allocation and effectively curbing malicious behavior of network operators. Concretely, CrowdRouting employs blockchain technology to verify the trustworthiness of border routers in different network domains, thereby establishing sustainable and trustworthy cross-domain routing based on sustained trust in these routers. In addition, CrowdRouting ingeniously integrates a crowdsourcing mechanism into the auction for routing, achieving fair and impartial allocation of routing rights by flexibly embedding various customized routing requirements into each auction phase. Moreover, CrowdRouting leverages incentive mechanisms and routing settlement to encourage network domains to actively participate in cross-domain routing, thereby promoting optimal resource allocation and efficient utilization. Furthermore, CrowdRouting introduces a supervisory agency (e.g., an undercover agent) to effectively suppress the malicious behavior of network operators through the game and interaction between the agent and the network operators. Through comprehensive experimental evaluations and comparisons with existing works, we demonstrate that CrowdRouting excels in providing trustworthy and fine-grained customized routing services, stimulating active participation in cross-domain routing, inhibiting malicious operator behavior, and maintaining reasonable resource allocation, outperforming baseline schemes on all of these aspects.
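The auction phase with embedded requirements can be illustrated with a deliberately simplified single-item mechanism. The paper's actual scheme (blockchain trust verification, multi-phase auction, settlement) is far richer; the sketch below only assumes that each bidding domain advertises a price and a capability set, that domains failing the customized requirements are filtered out, and that a Vickrey-style second-price rule is applied for fairness.

```python
def award_route(bids, requirements):
    """Toy requirement-aware routing auction (not the paper's mechanism).

    bids: list of (domain_name, price, capability_set) tuples.
    requirements: set of customized routing requirements.
    Returns (winner, payment) or None if no domain qualifies.
    """
    # Only domains whose capabilities cover every requirement may compete.
    eligible = [(d, p) for d, p, caps in bids if requirements <= caps]
    if not eligible:
        return None
    eligible.sort(key=lambda dp: dp[1])       # cheapest offer first
    winner, _ = eligible[0]
    # Second-price rule: winner is paid the runner-up's price,
    # which removes the incentive to misreport costs.
    payment = eligible[1][1] if len(eligible) > 1 else eligible[0][1]
    return winner, payment
```

With three bidding domains where one lacks the required capability, the cheapest qualified domain wins and is paid the next qualified price.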
To measure the trustworthiness of Internetware, we need to understand the existing problems and design appropriate trustworthiness metrics. The development and running of Internetware systems are analyzed in terms of process, keystone, methods and techniques. According to the main factors related to Internetware trustworthiness, two important models, namely the trustworthy metrics hierarchy model of components (TMHMC) with its computing steps and the local-global trustworthy metrics model of platform (LGTMMP) with its algorithm, are employed to evaluate the internal and external trustworthiness of Internetware, benefiting the development of Internetware.
Private clouds and public clouds are merging into an open, integrated cloud computing environment, which can sufficiently aggregate and utilize the computing, storage, information and other hardware and software resources of WAN and LAN networks, but also brings a series of security, reliability and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC was proposed for the integrated and open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe and trustworthy virtual private computing platform for multi-tenant systems. To meet the credibility needs of SATVPC and mandate a trust relationship between each task execution agent and task executor node suitable for their security policies, a new dynamic composite credibility evaluation mechanism was presented, including a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibility of task execution agents and task executor nodes quantitatively, correctly and operationally.
Cloud computing has been growing over the past few years, and service providers are creating an intensely competitive business world. This proliferation makes it hard for new users to select a proper service among a large number of candidate services. A novel user preferences-aware recommendation approach for trustworthy services is presented. To describe the requirements of new users in different application scenarios, user preferences are identified by usage preference, trust preference and cost preference. According to the similarity analysis of usage preference between consumers and new users, the candidates are selected, and the service trust data they provide are calculated as fuzzy comprehensive evaluations. In accordance with the trust and cost preferences of new users, dynamic fuzzy clusters are generated based on the fuzzy similarity computation. Then, the most suitable services can be selected and recommended to new users. The experiments show that this approach is effective and feasible, and can improve the quality of service recommendation to meet the requirements of new users in different scenarios.
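The core idea, finding consumers with similar usage preferences and aggregating their trust evidence, can be sketched with plain cosine similarity as a stand-in for the paper's fuzzy similarity computation and fuzzy comprehensive evaluation. All names and the aggregation rule below are illustrative assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(new_user_pref, consumers, top_k=2):
    """Recommend a service to a new user (toy stand-in for the paper's
    fuzzy-evaluation pipeline).

    consumers: list of (usage_pref_vector, {service: trust_rating}).
    The top_k most similar consumers are kept, and their trust ratings
    are averaged per service, weighted by preference similarity.
    """
    ranked = sorted(consumers,
                    key=lambda c: cosine(new_user_pref, c[0]),
                    reverse=True)[:top_k]
    scores = {}
    for pref, ratings in ranked:
        sim = cosine(new_user_pref, pref)
        for service, trust in ratings.items():
            num, den = scores.get(service, (0.0, 0.0))
            scores[service] = (num + sim * trust, den + sim)
    # Highest similarity-weighted mean trust wins.
    return max(scores, key=lambda s: scores[s][0] / scores[s][1])
```

A consumer with an orthogonal usage preference is excluded by the top-k cut, so its high rating for a different service does not dominate the recommendation.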
Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches to multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
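The benchmark metrics the authors critique are typically "best of K" errors: minADE (lowest average point-wise L2 error over the K predicted trajectories) and minFDE (lowest final-point error). They can be computed as below; as the abstract notes, they reward one good mode and say nothing about the diversity or admissibility of the other K-1.

```python
import math

def min_ade_fde(modes, gt):
    """Compute minADE and minFDE for K predicted trajectories.

    modes: list of trajectories, each a list of (x, y) points.
    gt: ground-truth trajectory, same length as each mode.
    """
    def ade(traj):
        # Average displacement error over all timesteps.
        return sum(math.dist(p, q) for p, q in zip(traj, gt)) / len(gt)

    def fde(traj):
        # Final displacement error at the last timestep.
        return math.dist(traj[-1], gt[-1])

    return min(map(ade, modes)), min(map(fde, modes))
```

If one of two predicted modes matches the ground truth exactly, both metrics report zero regardless of how implausible the second mode is, which is precisely the evaluation gap the proposed holistic framework targets.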
In this paper, we merge software trustworthiness with software design and present an approach to trustworthy software design with an automatically adapting software update. First, software behavior and results can be made predictable, and behavior states can be monitored while the software runs, by introducing a trustworthy behavior trace for the software and inserting a checkpoint sensor at each checkpoint of the trustworthy software. Second, an update approach for the trustworthy behavior trace at the level of checkpoints is presented. The trustworthy behavior traces of two versions of the software can be merged adequately by constructing split points and merge points between the two traces. Finally, experiments and analyses show that: (1) software designed by our approach can detect and report its anomalies automatically and effectively, so it supports trustworthiness evaluation better than traditional software; and (2) our approach can realize accurate updates of the trustworthy behavior trace with a low space overhead of checkpoints when the software updates.
In intelligent medical diagnosis, the trustworthiness, reliability, and interpretability of Artificial Intelligence (AI) are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist's careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also key to improving diagnostic accuracy and reliability. In this paper, we introduce an innovative Multi-Scale Multi-Branch Feature Encoder (MSBE) and present the design of the CrossLinkNet framework. The MSBE enhances the network's capability for feature extraction by allowing hyperparameters to configure the number of branches and modules. The CrossLinkNet framework, serving as a versatile image segmentation network architecture, employs cross-layer encoder-decoder connections for multi-level feature fusion, thereby enhancing feature integration and segmentation accuracy. Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet, equipped with the MSBE encoder, not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by replacing different feature encoders. Importantly, CrossLinkNet emphasizes the interpretability of the AI model, a crucial aspect for medical professionals, providing an in-depth understanding of the model's decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
Trustworthy service composition is an extremely important task when service composition becomes infeasible or even fails in an environment that is open, autonomic, uncertain and deceptive. This paper presents a trustworthy service composition method based on an improved Cross-generation elitist selection, Heterogeneous recombination, Cataclysmic mutation (CHC) genetic algorithm, termed the CHC Trustworthy Service Composition Method (CHC-TSCM). CHC-TSCM first obtains the total trust degree of each individual service using a trust degree measurement and evaluation model proposed in previous research. Trust combination and computation are then performed according to the structural relation of the composite service. Finally, the optimal trustworthy service composition is acquired by the improved CHC genetic algorithm. Experimental results show that CHC-TSCM can effectively solve the trustworthy service composition problem. Compared with GODSS and TOCSS, the new method has several advantages: 1) a higher service composition success rate; 2) a smaller decline trend of the service composition success rate; and 3) enhanced stability.
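The "trust combination according to the structural relation" step can be illustrated with a recursive aggregator over a composition tree. The aggregation rules below (sequence multiplies member trust, parallel takes the minimum, choice takes the maximum) are common illustrative conventions, not the paper's own measurement model.

```python
def composite_trust(structure, trusts):
    """Aggregate trust over a composite-service tree (illustrative rules).

    structure: ("seq" | "par" | "choice", [service_name or sub-structure, ...])
    trusts: {service_name: trust_degree in [0, 1]}
    """
    op, parts = structure
    vals = [trusts[p] if isinstance(p, str) else composite_trust(p, trusts)
            for p in parts]
    if op == "seq":            # all members must succeed in order
        total = 1.0
        for v in vals:
            total *= v
        return total
    if op == "par":            # weakest parallel branch bounds the whole
        return min(vals)
    return max(vals)           # "choice": pick the most trusted branch
```

A genetic algorithm such as CHC-TSCM would use a value like this as (part of) the fitness of a candidate composition.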
Recently, intelligent fault diagnosis based on deep learning has been extensively investigated, exhibiting state-of-the-art performance. However, deep learning models are often not truly trusted by users due to the lack of interpretability of the "black box", which limits their deployment in safety-critical applications. A trusted fault diagnosis system requires that faults can be accurately diagnosed in most cases, and that the human in the decision-making loop can be brought in to deal with abnormal situations when the models fail. In this paper, we explore a simplified method for quantifying both aleatoric and epistemic uncertainty in deterministic networks, called SAEU. In SAEU, a multivariate Gaussian distribution is employed in the deep architecture to compensate for the complexity and applicability shortcomings of Bayesian neural networks. Based on the SAEU, we propose a unified uncertainty-aware deep learning framework (UU-DLF) to realize the grand vision of trustworthy fault diagnosis. Moreover, our UU-DLF effectively embodies the idea of "humans in the loop", which not only allows for manual intervention in abnormal situations of diagnostic models, but also enables corresponding improvements to existing models based on traceability analysis. Finally, two experiments conducted on a gearbox and aero-engine bevel gears are used to demonstrate the effectiveness of UU-DLF and explore the reasons behind it.
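One standard way a deterministic network captures aleatoric uncertainty, which the Gaussian assumption in SAEU builds on, is to predict a mean and a variance per output and train with the Gaussian negative log-likelihood. The sketch below shows only this loss term; SAEU's epistemic component and its multivariate formulation are omitted.

```python
import math

def gaussian_nll(mean, var, target):
    """Negative log-likelihood of a heteroscedastic Gaussian prediction.

    Minimizing this over (mean, var) outputs lets a deterministic network
    learn per-sample aleatoric uncertainty: large residuals can be
    "explained away" by predicting a larger variance, at the cost of the
    log(var) penalty.
    """
    return 0.5 * (math.log(2 * math.pi * var) + (target - mean) ** 2 / var)
```

At zero residual the loss reduces to the log-variance penalty alone, so the network is discouraged from inflating variance where it is actually confident.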
Artificial Intelligence (AI) technology has been extensively researched in various fields, including malware detection. AI models must be trustworthy to introduce AI systems into critical decision-making and resource protection roles. The problem of robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model's robustness level cannot be evaluated by traditional evaluation indicators such as accuracy and recall. Additional evaluation indicators are necessary to evaluate the robustness of AI models against adversarial attacks. In this paper, a Sophisticated Adversarial Robustness Score (SARS) is proposed for AI model robustness evaluation. SARS uses various factors, in addition to the ratio of perturbed features and the size of the perturbation, to evaluate robustness accurately. This evaluation indicator reflects aspects that are difficult to capture with traditional evaluation indicators. Moreover, the level of robustness can be evaluated by considering the difficulty of generating adversarial samples through adversarial attacks. This paper proposes using SARS, calculated based on adversarial attacks, to identify data groups with robustness vulnerabilities and to improve robustness through adversarial training. Through SARS, it is possible to evaluate the level of robustness, which can help developers identify areas for improvement. To validate the proposed method, experiments were conducted using a malware dataset. Through adversarial training, it was confirmed that SARS increased by 70.59%, and the recall reduction rate improved by 64.96%. Through SARS, it is possible to evaluate whether an AI model is vulnerable to adversarial attacks and to identify vulnerable data types. In addition, it is expected that improved models can be achieved by improving resistance to adversarial attacks via methods such as adversarial training.
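The intuition behind a score like SARS, that a model is more robust when a successful attack needs more perturbed features and larger perturbations, can be captured by a toy formula. The weighting and factors below are illustrative assumptions only; the published SARS combines additional factors with its own weighting.

```python
def robustness_score(perturbed_ratio, perturb_size, attack_success):
    """Toy SARS-like robustness score in [0, 1]; higher = more robust.

    perturbed_ratio: fraction of features the attack had to modify.
    perturb_size: normalized magnitude of the perturbation.
    attack_success: whether the attack flipped the model's decision.
    Assumption: equal weights on ratio and size, not the paper's formula.
    """
    if not attack_success:
        return 1.0   # no successful adversarial sample was found
    # Greater attacker effort implies a harder-to-fool, more robust model.
    return 0.5 * perturbed_ratio + 0.5 * min(perturb_size, 1.0)
```

Under this scheme, an attack that succeeds with tiny, sparse perturbations yields a low score, flagging the data group as a robustness vulnerability worth targeting with adversarial training.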
A personalized trustworthy service selection method is proposed to fully express the features of trust, emphasize the importance of user preference and improve the trustworthiness of service selection. The trustworthiness of a web service is defined as customized multi-dimensional trust metrics, and user preference is embodied in the weight of each trust metric. A service selection method combining AHP (analytic hierarchy process) and PROMETHEE (preference ranking organization method for enrichment evaluations) is proposed. AHP is used to determine the weights of the trust metrics according to users' preferences: a hierarchy and pairwise comparison matrices are constructed, and the weights of the trust metrics are derived from the largest eigenvalue and the corresponding eigenvector of each matrix. PROMETHEE is then used to obtain the final ranking of candidate services: preference functions are defined according to the inherent characteristics of the trust metrics, and net outranking flows are calculated. Experimental results show that the proposed method can effectively express users' personalized preferences for trust metrics, and the trustworthiness of service ranking and selection is efficiently improved.
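The AHP weight-derivation step is concrete enough to sketch: the weights are the normalized principal eigenvector of the pairwise comparison matrix, which power iteration recovers without any linear-algebra library. (The PROMETHEE ranking stage and the consistency-ratio check that AHP normally requires are omitted here.)

```python
def ahp_weights(M, iters=100):
    """Derive AHP weights from a pairwise comparison matrix M.

    M[i][j] states how much more important metric i is than metric j
    (reciprocal matrix: M[j][i] == 1 / M[i][j]).  The weights are the
    normalized principal eigenvector, found by power iteration.
    """
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]   # renormalize so weights sum to 1
    return w
```

For a perfectly consistent 2x2 matrix saying "reliability is 3x as important as cost", the derived weights are 0.75 and 0.25.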
On September 4th, 2007, AQSIQ and the Press Office of the State Council invited 13 media representatives from countries such as America, Britain, France, Japan, Canada and Singapore to visit the Technical Center Toy Laboratory of Guangdong Entry-exit Inspection and Quarantine Bureau, Zhentai (China) Industrial Limited Company and Guangdong Xinboxing Toys Limited Company.
With the increasing maturity of various computational electromagnetics algorithms, the field has evolved from traditional standalone electromagnetic simulations to a stage in which trustworthy electromagnetic computation methods serve as the core, aimed at meeting the advanced demands of computer-aided engineering. Trustworthy electromagnetic computation consists of three key aspects: trustworthy model, trustworthy mesh, and trustworthy algorithm. This paper focuses on the trustworthy mesh, aiming to establish a systematic framework and methodology for achieving trustworthy computation under the assumption that the target geometry and associated parameters are determined. The framework starts with high-fidelity geometric meshing. An effective strategy is nonconformal domain decomposition, which facilitates accurate modeling of complex geometries and diverse materials. Subsequently, efficient preconditioning methods are utilized to ensure stable convergence when solving the resulting multiscale systems associated with high-fidelity meshes. After obtaining the numerical solution, verification procedures are applied to evaluate whether the desired accuracy has been achieved. If the solution fails to meet the specified precision, adaptive mesh refinement techniques are used to automatically redistribute mesh density. The objective is to attain greater accuracy with a minimal increase in degrees of freedom, thereby enhancing computational efficiency. The adaptive refinement process proceeds iteratively until the computed solution satisfies the established accuracy criteria. Within this framework, we propose a novel, fast, physics-based self-reference method, which leverages power conservation laws to assess the accuracy of the solution.
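The solve-verify-refine loop described above has a simple control skeleton, independent of any particular solver. In the sketch below, `solve`, `error_estimate` (e.g. a power-conservation check) and `refine` are placeholder callables standing in for a real electromagnetics stack; only the iteration logic is shown.

```python
def adaptive_solve(solve, error_estimate, refine, mesh, tol, max_iters=10):
    """Skeleton of an adaptive-refinement workflow.

    solve(mesh) -> solution; error_estimate(solution) -> scalar error
    (e.g. a physics-based power-conservation residual);
    refine(mesh, solution) -> denser mesh.  Iterates until the error
    meets the tolerance or the iteration budget is exhausted.
    """
    for _ in range(max_iters):
        sol = solve(mesh)
        err = error_estimate(sol)
        if err <= tol:
            break                     # accuracy criterion satisfied
        mesh = refine(mesh, sol)      # redistribute mesh density
    return sol, mesh, err
```

With a toy "solver" whose error halves each time the mesh density doubles, the loop stops as soon as the estimated error drops below the tolerance.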
The integration of machine learning(ML)into geohazard assessment has successfully instigated a paradigm shift,leading to the production of models that possess a level of predictive accuracy previously considered unatt...The integration of machine learning(ML)into geohazard assessment has successfully instigated a paradigm shift,leading to the production of models that possess a level of predictive accuracy previously considered unattainable.However,the black-box nature of these systems presents a significant barrier,hindering their operational adoption,regulatory approval,and full scientific validation.This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence(XAI)as applied to geohazard science(GeoXAI),a domain that aims to resolve the long-standing trade-off between model performance and interpretability.A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field.The analysis reveals that current research efforts are concentrated predominantly on landslide and flood assessment.Methodologically,tree-based ensembles and deep learning models dominate the literature,with SHapley Additive exPlanations(SHAP)frequently adopted as the principal post-hoc explanation technique.More importantly,the review further documents how the role of XAI has shifted:rather than being used solely as a tool for interpreting models after training,it is increasingly integrated into the modeling cycle itself.Recent applications include its use in feature selection,adaptive sampling strategies,and model evaluation.The evidence also shows that GeoXAI extends beyond producing feature rankings.It reveals nonlinear thresholds and interaction effects that generate deeper mechanistic insights into hazard processes and mechanisms.Nevertheless,several key challenges remain unresolved within the field.These persistent issues are especially pronounced when considering 
the crucial necessity for interpretation stability,the demanding scholarly task of reliably distinguishing correlation from causation,and the development of appropriate methods for the treatment of complex spatio-temporal dynamics.展开更多
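SHAP, the explanation technique the review finds dominant, approximates Shapley values, which for a handful of features can be computed exactly from the classical formula. The sketch below assumes a `model` that maps a set of "present" feature names to a prediction (absent features are handled inside the model, e.g. by a baseline), which is the value-function view SHAP is built on.

```python
from itertools import combinations
import math

def shapley_values(model, features):
    """Exact Shapley attributions for a small feature set.

    model: callable taking a frozenset of present feature names and
    returning a prediction.  phi[f] is f's average marginal contribution
    over all subsets S of the other features, with the standard
    |S|!(n-|S|-1)!/n! coalition weights.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = (math.factorial(k) * math.factorial(n - k - 1)
                          / math.factorial(n))
                total += weight * (model(S | {f}) - model(S))
        phi[f] = total
    return phi
```

For a purely additive model the attributions recover each feature's own contribution exactly, which is the sanity check usually run before trusting SHAP-style rankings on real hazard models.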
Graph neural networks (GNNs) have developed rapidly in recent years. Due to their great ability in modeling graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential in benefiting humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing works demonstrate that attackers can fool GNNs into giving the outcome they desire with unnoticeable perturbations of the training graph. GNNs trained on social networks may embed discrimination in their decision process, strengthening undesirable societal bias. Consequently, trustworthy GNNs in various aspects are emerging to prevent harm from GNN models and increase users' trust in GNNs. In this paper, we give a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability. For each aspect, we give a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss the future research directions of each aspect and the connections between these aspects that help achieve trustworthiness.
Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly argued that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium to help accomplish explainability in AI-based systems. Moreover, we discuss the patterns of recent developments in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about the essential components of provenance, XAI, and TAI.
Artificial intelligence (AI) has accelerated the advancement of financial services by identifying hidden patterns in data to improve the quality of financial decisions. However, in addition to commonly desired attributes, such as model accuracy, financial services demand trustworthy AI with properties that have not been adequately realized: interpretability, fairness and inclusiveness, robustness and security, and privacy protection. Here, we review the recent progress and limitations of applying AI to various areas of financial services, including risk management, fraud detection, wealth management, personalized services, and regulatory technology. Based on this progress and these limitations, we introduce FinBrain 2.0, a research framework toward trustworthy AI. We argue that we are still a long way from having truly trustworthy AI in financial services and call for the AI and financial industry communities to join in this effort.
As AI technology continues to evolve, it plays an increasingly significant role in everyday life and social governance. However, the frequent occurrence of issues such as algorithmic bias, privacy breaches, and data leaks has led to a crisis of public trust in AI, presenting numerous challenges to social governance. Establishing technical trust in AI, reducing uncertainties in AI development, and enhancing its effectiveness in social governance have become a consensus among policymakers and researchers. By comparing different types of AI, the paper proposes and conceptualizes the idea of trustworthy AI, then discusses its characteristics and its value and impact pathways in social governance. The analysis addresses how mismatches in technological trust can affect social stability and the advancement of AI strategies. The paper highlights the potential of trustworthy AI to improve the efficiency of social governance and solve complex social problems.
Abstract: Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance and optimization. Although existing FS techniques can yield high-performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for the identification of appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data, a challenge that is increasingly salient in modern artificial intelligence (AI) applications. In response, this study introduces an innovative, automated statistical method termed Farea Similarity for Feature Selection (FSFS). The FSFS approach computes a similarity metric for each feature by benchmarking it against the record-wise mean, thereby finding feature dependencies and mitigating the influence of outliers that could potentially distort evaluation outcomes. Features are subsequently ranked according to their similarity scores, with the threshold established at the average similarity score. Notably, lower FSFS values indicate higher similarity and stronger data correlations, whereas higher values suggest lower similarity. The FSFS method is designed not only to yield reliable evaluation metrics but also to reduce data complexity without compromising model performance. Comparative analyses were performed against several established techniques, including Chi-squared (CS), Correlation Coefficient (CC), Genetic Algorithm (GA), Exhaustive Approach, Greedy Stepwise Approach, Gain Ratio, and Filtered Subset Eval, using a variety of datasets such as the Experimental Dataset, Breast Cancer Wisconsin (Original), KDD CUP 1999, NSL-KDD, UNSW-NB15, and Edge-IIoT. In the absence of the FSFS method, the highest classifier accuracies observed were 60.00%, 95.13%, 97.02%, 98.17%, 95.86%, and 94.62% for the respective datasets. When the FSFS technique was integrated with data normalization, encoding, balancing, and feature importance selection processes, accuracies improved to 100.00%, 97.81%, 98.63%, 98.94%, 94.27%, and 98.46%, respectively. The FSFS method, with a computational complexity of O(fn log n), demonstrates robust scalability and is well suited for large datasets, ensuring efficient processing even when the number of features is substantial. By automatically eliminating outliers and redundant data, FSFS reduces computational overhead, resulting in faster training and improved model performance. Overall, the FSFS framework not only optimizes performance but also enhances the interpretability and explainability of data-driven models, thereby facilitating more trustworthy decision-making in AI applications.
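The selection rule described in this abstract (score each feature against the record-wise mean, rank by score, cut at the average score, keep the high-similarity side) can be sketched as follows. The exact Farea similarity formula is not given in the abstract, so the placeholder score below (mean absolute deviation from the record-wise mean) is only an assumed stand-in that preserves the stated behaviour: lower score means higher similarity, and the threshold is the average score.

```python
def fsfs_select(rows):
    """Sketch of the FSFS-style selection rule (placeholder score, not
    the published Farea similarity formula)."""
    n_features = len(rows[0])
    # Record-wise mean: the mean of each row (record).
    record_means = [sum(r) / n_features for r in rows]
    # Stand-in per-feature score: average |value - record mean| over records.
    scores = [
        sum(abs(r[j] - m) for r, m in zip(rows, record_means)) / len(rows)
        for j in range(n_features)
    ]
    threshold = sum(scores) / n_features              # threshold = average score
    selected = [j for j, s in enumerate(scores) if s <= threshold]
    ranking = sorted(range(n_features), key=lambda j: scores[j])
    return selected, ranking, scores
```

Each of the f features is scored in O(n) and ranked by sorting, consistent with the O(fn log n) complexity the abstract reports.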
Funding: Supported in part by the National Natural Science Foundation of China under Grants U23A20300 and 62072351; in part by the Key Research Project of Shaanxi Natural Science Foundation under Grant 2023-JC-ZD-35; in part by the Concept Verification Funding of Hangzhou Institute of Technology of Xidian University under Grant GNYZ2024XX007; and in part by the 111 Project under Grant B16037.
Abstract: Cross-domain routing in Integrated Heterogeneous Networks (Inte-HetNet) should ensure efficient and secure data transmission across different network domains by satisfying diverse routing requirements. However, current solutions face numerous challenges in continuously ensuring trustworthy routing, fulfilling diverse requirements, achieving reasonable resource allocation, and safeguarding against malicious behaviors of network operators. We propose CrowdRouting, a novel cross-domain routing scheme based on crowdsourcing, dedicated to establishing sustained trust in cross-domain routing, comprehensively considering and fulfilling various customized routing requirements, while ensuring reasonable resource allocation and effectively curbing malicious behavior of network operators. Concretely, CrowdRouting employs blockchain technology to verify the trustworthiness of border routers in different network domains, thereby establishing sustainable and trustworthy cross-domain routing based on sustained trust in these routers. In addition, CrowdRouting ingeniously integrates a crowdsourcing mechanism into the auction for routing, achieving fair and impartial allocation of routing rights by flexibly embedding various customized routing requirements into each auction phase. Moreover, CrowdRouting leverages incentive mechanisms and routing settlement to encourage network domains to actively participate in cross-domain routing, thereby promoting optimal resource allocation and efficient utilization. Furthermore, CrowdRouting introduces a supervisory agency (e.g., an undercover agent) to effectively suppress the malicious behavior of network operators through the game and interaction between the agent and the network operators. Through comprehensive experimental evaluations and comparisons with existing works, we demonstrate that CrowdRouting excels in providing trustworthy and fine-grained customized routing services, stimulating active participation in cross-domain routing, inhibiting malicious operator behavior, and maintaining reasonable resource allocation, all of which outperform baseline schemes.
Funding: The Program for New Century Excellent Talents in University (NCET-06-0762); the Specialized Research Fund for the Doctoral Program of Higher Education (20060611009); the Natural Science Foundations of Chongqing (CSTC2007BA2003, CSTC2006BB2003).
Abstract: To measure the trustworthiness of Internetware, we need to understand the existing problems and design appropriate trustworthiness metrics. The developing and running system of Internetware is analyzed in terms of process, keystone, methods and techniques. According to the main factors related to Internetware trustworthiness, two important models, namely the trustworthy metrics hierarchy model of components (TMHMC) with its computing steps and the local-global trustworthy metrics model of platform (LGTMMP) with its algorithm, are employed to evaluate the internal and external trustworthiness of Internetware, benefiting the development of Internetware.
Funding: Projects (61202004, 61272084) supported by the National Natural Science Foundation of China; Projects (2011M500095, 2012T50514) supported by the China Postdoctoral Science Foundation; Projects (BK2011754, BK2009426) supported by the Natural Science Foundation of Jiangsu Province, China; Project (12KJB520007) supported by the Natural Science Fund of Higher Education of Jiangsu Province, China; Project (yx002001) supported by the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.
Abstract: Private clouds and public clouds are merging into an open, integrated cloud computing environment, which can fully aggregate and utilize the computing, storage, information and other hardware and software resources of WAN and LAN networks, but also brings a series of security, reliability and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC was proposed for the integrated and open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe and trustworthy virtual private computing platform for multi-tenant systems. In order to meet the credibility needs of SATVPC and mandate the trust relationship between each task execution agent and task executor node suitable for their security policies, a new dynamic composite credibility evaluation mechanism was presented, including a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibility of task execution agents and task executor nodes quantitatively, correctly and operationally.
Funding: Project (61272148) supported by the National Natural Science Foundation of China; Project (2014FJ3122) supported by the Science and Technology Project of Hunan Province, China.
Abstract: Cloud computing has been growing over the past few years, and service providers are creating an intensely competitive business environment. This proliferation makes it hard for new users to select a proper service among a large number of service candidates. A novel user-preferences-aware recommendation approach for trustworthy services is presented. To describe the requirements of new users in different application scenarios, user preferences are identified by usage preference, trust preference and cost preference. According to the similarity analysis of usage preference between consumers and new users, the candidates are selected, and the service trust data they provide are aggregated into fuzzy comprehensive evaluations. In accordance with the trust and cost preferences of new users, dynamic fuzzy clusters are generated based on fuzzy similarity computation. Then, the most suitable services can be selected and recommended to new users. The experiments show that this approach is effective and feasible, and can improve the quality of service recommendation to meet the requirements of new users in different scenarios.
Funding: European Commission, Joint Research Center, Grant/Award Number: HUMAINT; Ministerio de Ciencia e Innovación, Grant/Award Number: PID2020-114924RB-I00; Comunidad de Madrid, Grant/Award Number: S2018/EMT-4362 SEGVAUTO 4.0-CM.
Abstract: Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
Funding: Supported by the National Natural Science Foundation of China (60873203); the Foundation of the Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education (AISTC2009_03); the Outstanding Youth Foundation of Hebei Province (F2010000317); the Natural Science Foundation of Hebei Province (F2010000319, F2011201039).
Abstract: In this paper, we merge software trustworthiness with software design and present an approach to trustworthy software design with an automatically adapting software update. First, software behavior and results can be expected and behavior states can be monitored while the software runs, by introducing a trustworthy behavior trace for the software and inserting a checkpoint sensor at each checkpoint of the trustworthy software. Second, an update approach for the trustworthy behavior trace of the software at the level of checkpoints is presented. The trustworthy behavior traces of two versions of the software can be merged adequately by constructing split points and merge points between the two traces. Finally, experiments and analyses show that: (1) software designed by our approach can detect and report anomalies in the software automatically and effectively, so it has a higher capability for trustworthiness evaluation than traditional software; and (2) our approach can realize an accurate update of the trustworthy behavior trace with a lower space overhead of checkpoints when the software updates.
Funding: Supported by the National Natural Science Foundation of China (Grant Numbers: 62372083, 62072074, 62076054, 62027827, 62002047); the Sichuan Provincial Science and Technology Innovation Platform and Talent Program (Grant Number: 2022JDJQ0039); the Sichuan Provincial Science and Technology Support Program (Grant Numbers: 2022YFQ0045, 2022YFS0220, 2021YFG0131, 2023YFS0020, 2023YFS0197, 2023YFG0148); the CCF-Baidu Open Fund (Grant Number: 202312).
Abstract: In the area of intelligent medical diagnosis, the trustworthiness, reliability, and interpretability of Artificial Intelligence (AI) are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist's careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also a key to improving diagnostic accuracy and reliability. In this paper, we introduce an innovative Multi-Scale Multi-Branch Feature Encoder (MSBE) and present the design of the CrossLinkNet Framework. The MSBE enhances the network's capability for feature extraction by allowing the adjustment of hyperparameters to configure the number of branches and modules. The CrossLinkNet Framework, serving as a versatile image segmentation network architecture, employs cross-layer encoder-decoder connections for multi-level feature fusion, thereby enhancing feature integration and segmentation accuracy. Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet, equipped with the MSBE encoder, not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by replacing different feature encoders. Crucially, CrossLinkNet emphasizes the interpretability of the AI model, a crucial aspect for medical professionals, providing an in-depth understanding of the model's decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
基金supported by the National Natural Science Foundation of China under Grants No.61272063,No.61300129,No.61273216,No.61202048,No.61100054the Excellent Youth Foundation of Hunan Scientific Committee under Grant No.11JJ1011+2 种基金the Hunan Provincial Natural Science Foundation of China under Grant No.12JJB009Scientific Research Fund of Hunan Provincial Education Department of China under Grants No.09K085,No.12K105the Zhejiang Provincial Natural Science Foundation of China under Grant No.LQ12F02011
Abstract: Trustworthy service composition is an extremely important task when service composition becomes infeasible or even fails in an environment which is open, autonomic, uncertain and deceptive. This paper presents a trustworthy service composition method based on an improved Cross-generation elitist selection, Heterogeneous recombination, Cataclysmic mutation (CHC) genetic algorithm, the CHC Trustworthy Service Composition Method (CHC-TSCM). CHC-TSCM first obtains the total trust degree of each individual service using a trust degree measurement and evaluation model proposed in previous research. Trust combination and computation are then performed according to the structural relation of the composite service. Finally, the optimal trustworthy service composition is acquired by the improved CHC genetic algorithm. Experimental results show that CHC-TSCM can effectively solve the trustworthy service composition problem. Compared with GODSS and TOCSS, this new method has several advantages: 1) a higher service composition success rate; 2) a smaller decline trend of the service composition success rate; and 3) enhanced stability.
Funding: Supported in part by the National Natural Science Foundation of China (52105116); the Science Center for gas turbine project (P2022-DC-I-003-001); the Royal Society award (IEC\NSFC\223294) to Professor Asoke K. Nandi.
Abstract: Recently, intelligent fault diagnosis based on deep learning has been extensively investigated, exhibiting state-of-the-art performance. However, deep learning models are often not truly trusted by users due to the lack of interpretability of the "black box", which limits their deployment in safety-critical applications. A trusted fault diagnosis system requires that faults can be accurately diagnosed in most cases, and that the human in the decision-making loop can be brought in to deal with abnormal situations when the models fail. In this paper, we explore a simplified method for quantifying both aleatoric and epistemic uncertainty in deterministic networks, called SAEU. In SAEU, a multivariate Gaussian distribution is employed in the deep architecture to compensate for the shortcomings in complexity and applicability of Bayesian neural networks. Based on SAEU, we propose a unified uncertainty-aware deep learning framework (UU-DLF) to realize the grand vision of trustworthy fault diagnosis. Moreover, our UU-DLF effectively embodies the idea of "humans in the loop": it not only allows for manual intervention in abnormal situations of diagnostic models, but also enables corresponding improvements to existing models based on traceability analysis. Finally, two experiments, conducted on a gearbox and on aero-engine bevel gears, are used to demonstrate the effectiveness of UU-DLF and explore the reasons behind its effectiveness.
Funding: Supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean Government (MSIT) (No. 2022-0-00089, Development of Clustering and Analysis Technology to Identify Cyber-Attack Groups Based on Life-Cycle), and by MISP (Ministry of Science, ICT & Future Planning), Korea, under the National Program for Excellence in SW (2019-0-01834), supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation).
Abstract: Artificial Intelligence (AI) technology has been extensively researched in various fields, including the field of malware detection. AI models must be trustworthy before AI systems can be introduced into critical decision-making and resource protection roles. The problem of robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model's robustness level cannot be evaluated by traditional evaluation indicators such as accuracy and recall; additional indicators are necessary to evaluate the robustness of AI models against adversarial attacks. In this paper, a Sophisticated Adversarial Robustness Score (SARS) is proposed for AI model robustness evaluation. SARS uses various factors, in addition to the ratio of perturbed features and the size of the perturbation, to evaluate robustness accurately in the evaluation process. This evaluation indicator reflects aspects that are difficult to evaluate using traditional indicators. Moreover, the level of robustness can be evaluated by considering the difficulty of generating adversarial samples through adversarial attacks. This paper proposes using SARS, calculated on the basis of adversarial attacks, to identify data groups with robustness vulnerabilities and to improve robustness through adversarial training. Through SARS, it is possible to evaluate the level of robustness, which can help developers identify areas for improvement. To validate the proposed method, experiments were conducted using a malware dataset. Through adversarial training, it was confirmed that SARS increased by 70.59% and the recall reduction rate improved by 64.96%. Through SARS, it is possible to evaluate whether an AI model is vulnerable to adversarial attacks and to identify vulnerable data types. In addition, it is expected that improved models can be achieved by improving resistance to adversarial attacks via methods such as adversarial training.
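As a rough illustration of how a perturbation-based robustness score of this kind can be computed, the sketch below combines the two factors the abstract names: the ratio of perturbed features and the perturbation size. The actual SARS formula and its additional factors are not given in the abstract, so the function name, normalisation, and equal weighting here are all hypothetical stand-ins.

```python
def robustness_score(x, x_adv, eps_max):
    """Toy per-sample robustness score (NOT the published SARS formula):
    averages the fraction of features the attack had to perturb with the
    normalised perturbation magnitude. Higher values mean the attack
    needed larger changes, i.e. the model was harder to fool here."""
    diffs = [abs(a - b) for a, b in zip(x, x_adv)]
    perturbed_ratio = sum(1 for d in diffs if d > 0) / len(x)
    # Normalised magnitude; lies in [0, 1] when every |d| <= eps_max.
    magnitude = sum(diffs) / (len(x) * eps_max)
    return (perturbed_ratio + magnitude) / 2
```

Averaging such per-sample scores over a set of adversarial examples would give one dataset-level robustness indicator, in the spirit of the evaluation the abstract describes.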
Funding: The National Natural Science Foundation of China (No. 60973149); the Open Funds of the State Key Laboratory of Computer Science of the Chinese Academy of Sciences (No. SYSKF1110); the Doctoral Fund of the Ministry of Education of China (No. 20100092110022); the College Industrialization Project of Jiangsu Province (No. JHB2011-3).
Abstract: A personalized trustworthy service selection method is proposed to fully express the features of trust, emphasize the importance of user preference and improve the trustworthiness of service selection. The trustworthiness of a web service is defined by customized multi-dimensional trust metrics, and user preference is embodied in the weight of each trust metric. A service selection method combining AHP (analytic hierarchy process) and PROMETHEE (preference ranking organization method for enrichment evaluations) is proposed. AHP is used to determine the weights of the trust metrics according to users' preferences: a hierarchy and pairwise comparison matrices are constructed, and the weights of the trust metrics are derived from the largest eigenvalue and corresponding eigenvector of each matrix. PROMETHEE is then used to obtain the final ranking of candidate services: preference functions are defined according to the inherent characteristics of the trust metrics, and net outranking flows are calculated. Experimental results show that the proposed method can effectively express users' personalized preferences for trust metrics, and that the trustworthiness of service ranking and selection is efficiently improved.
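The AHP step this abstract describes, deriving metric weights from the principal eigenvector of a pairwise comparison matrix, can be sketched with plain power iteration. This is a generic illustration of the eigenvector computation only, not the paper's implementation; the consistency-ratio check and the PROMETHEE ranking stage are omitted.

```python
def ahp_weights(matrix, iters=100):
    """Derive AHP weights as the normalised principal eigenvector of a
    pairwise-comparison matrix via power iteration (generic sketch;
    the paper's exact procedure is not reproduced here)."""
    n = len(matrix)
    w = [1.0 / n] * n                  # initial uniform weight vector
    for _ in range(iters):
        # One power-iteration step: w <- A w, then renormalise to sum to 1.
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w
```

For a perfectly consistent matrix, where entry (i, j) equals w_i / w_j, the iteration recovers the underlying weights exactly; for real judgment matrices it converges to the principal eigenvector.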
Abstract: On September 4th, 2007, AQSIQ and the Press Office of the State Council invited 13 media representatives from countries such as America, Britain, France, Japan, Canada, and Singapore to visit the Technical Center Toy Laboratory of Guangdong Entry-Exit Inspection and Quarantine Bureau, Zhentai (China) Industrial Limited Company and Guangdong Xinboxing Toys Limited Company.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62231007 and 62031010).
Abstract: With the increasing maturity of various computational electromagnetics algorithms, the field has evolved from traditional standalone electromagnetic simulations to a stage in which trustworthy electromagnetic computation methods serve as the core, aimed at meeting the advanced demands of computer-aided engineering. Trustworthy electromagnetic computation consists of three key aspects: the trustworthy model, the trustworthy mesh, and the trustworthy algorithm. This paper focuses on the trustworthy mesh, aiming to establish a systematic framework and methodology for achieving trustworthy computation under the assumption that the target geometry and associated parameters are determined. The framework starts with high-fidelity geometric meshing. An effective strategy is nonconformal domain decomposition, which facilitates accurate modeling of complex geometries and diverse materials. Subsequently, efficient preconditioning methods are utilized to ensure stable convergence when solving the resulting multiscale systems associated with high-fidelity meshes. After obtaining the numerical solution, verification procedures are applied to evaluate whether the desired accuracy has been achieved. If the solution fails to meet the specified precision, adaptive mesh refinement techniques are used to automatically redistribute mesh density. The objective is to attain greater accuracy with a minimal increase in degrees of freedom, thereby enhancing computational efficiency. The adaptive refinement process proceeds iteratively until the computed solution satisfies the established accuracy criteria. Within this framework, we propose a novel, fast, physics-based self-reference method, which leverages power conservation laws to assess the accuracy of the solution.
Abstract: The integration of machine learning (ML) into geohazard assessment has successfully instigated a paradigm shift, leading to the production of models that possess a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field. The analysis reveals that current research efforts are concentrated predominantly on landslide and flood assessment. Methodologically, tree-based ensembles and deep learning models dominate the literature, with SHapley Additive exPlanations (SHAP) frequently adopted as the principal post-hoc explanation technique. More importantly, the review further documents how the role of XAI has shifted: rather than being used solely as a tool for interpreting models after training, it is increasingly integrated into the modeling cycle itself. Recent applications include its use in feature selection, adaptive sampling strategies, and model evaluation. The evidence also shows that GeoXAI extends beyond producing feature rankings: it reveals nonlinear thresholds and interaction effects that generate deeper mechanistic insights into hazard processes and mechanisms. Nevertheless, several key challenges remain unresolved within the field. These persistent issues are especially pronounced when considering the crucial necessity for interpretation stability, the demanding scholarly task of reliably distinguishing correlation from causation, and the development of appropriate methods for the treatment of complex spatio-temporal dynamics.
Funding: National Science Foundation (NSF), USA (No. IIS-1909702); Army Research Office (ARO), USA (No. W911NF21-1-0198); Department of Homeland Security (DHS) CINA, USA (No. E205949D).
Abstract: Graph neural networks (GNNs) have made rapid developments in recent years. Due to their great ability in modeling graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential to benefit humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing works demonstrate that attackers can fool GNNs into giving the outcome they desire with unnoticeable perturbations of the training graph. GNNs trained on social networks may embed discrimination in their decision process, strengthening undesirable societal bias. Consequently, trustworthy GNNs in various aspects are emerging to prevent harm from GNN models and increase users' trust in GNNs. In this paper, we give a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability. For each aspect, we give a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss future research directions for each aspect and the connections between these aspects that help achieve trustworthiness.
Funding: Supported by the National Science Foundation under Grant No. 2019609 and the National Aeronautics and Space Administration under Grant No. 80NSSC21M0028.
Abstract: Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly discussed that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium to help accomplish explainability in AI-based systems. Moreover, we discuss the patterns of recent developments in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about the essential components of provenance, XAI, and TAI.
Funding: Project supported by the National Natural Science Foundation of China (Nos. 62172362 and 72192823).
Abstract: Artificial intelligence (AI) has accelerated the advancement of financial services by identifying hidden patterns from data to improve the quality of financial decisions. However, in addition to commonly desired attributes, such as model accuracy, financial services demand trustworthy AI with properties that have not been adequately realized. These properties of trustworthy AI are interpretability, fairness and inclusiveness, robustness and security, and privacy protection. Here, we review the recent progress and limitations of applying AI to various areas of financial services, including risk management, fraud detection, wealth management, personalized services, and regulatory technology. Based on this progress and these limitations, we introduce FinBrain 2.0, a research framework toward trustworthy AI. We argue that we are still a long way from having truly trustworthy AI in financial services and call for the AI and financial industry communities to join in this effort.
Abstract: As AI technology continues to evolve, it plays an increasingly significant role in everyday life and social governance. However, the frequent occurrence of issues such as algorithmic bias, privacy breaches, and data leaks has led to a crisis of trust in AI among the public, presenting numerous challenges to social governance. Establishing technical trust in AI, reducing uncertainties in AI development, and enhancing its effectiveness in social governance have become a consensus among policymakers and researchers. By comparing different types of AI, the paper proposes and conceptualizes the idea of trustworthy AI, then discusses its characteristics and its value and impact pathways in social governance. The analysis addresses how mismatches in technological trust can affect social stability and the advancement of AI strategies. The paper highlights the potential of trustworthy AI to improve the efficiency of social governance and solve complex social problems.