Large language models (LLMs) represent significant advancements in artificial intelligence. However, their increasing capabilities come with a serious challenge: misalignment, which refers to the deviation of model behavior from the designers' intentions and human values. This review aims to synthesize the current understanding of the LLM misalignment issue and provide researchers and practitioners with a comprehensive overview. We define the concept of misalignment and elaborate on its various manifestations, including generating harmful content, factual errors (hallucinations), propagating biases, failing to follow instructions, emerging deceptive behaviors, and emergent misalignment. We explore the multifaceted causes of misalignment, systematically analyzing factors from surface-level technical issues (e.g., training data, objective function design, model scaling) to deeper fundamental challenges (e.g., difficulties formalizing values, discrepancies between training signals and real intentions). This review covers existing and emerging techniques for detecting and evaluating the degree of misalignment, such as benchmark tests, red-teaming, and formal safety assessments. Subsequently, we examine strategies to mitigate misalignment, focusing on mainstream alignment techniques such as RLHF, Constitutional AI (CAI), and instruction fine-tuning, as well as novel approaches that address scalability and robustness. In particular, we analyze recent advances in misalignment attack research, including system prompt modifications, supervised fine-tuning, self-supervised representation attacks, and model editing, which challenge the robustness of model alignment. We categorize and analyze the surveyed literature, highlighting major findings, persistent limitations, and current points of contention. Finally, we identify key open questions and propose several promising future research directions, including constructing high-quality alignment datasets, exploring novel alignment methods, coordinating diverse values, and delving into the deep philosophical aspects of alignment. This work underscores the complexity and multidimensionality of LLM misalignment issues, calling for interdisciplinary approaches to reliably align LLMs with human values.
To achieve dynamic load balancing at the flow level based on data, in this paper we apply SDN technology to the cloud data center and propose a dynamic load balancing method for cloud data centers based on SDN. The approach uses the flexibility SDN brings to task scheduling, accomplishing real-time monitoring of service-node flow and load conditions through the OpenFlow protocol. When the system load is imbalanced, the controller can allocate network resources globally. Moreover, by using dynamic correction, the system load does not tilt noticeably in the long run. Simulation results show that this approach ensures that the load will not tilt over a long period of time and improves system throughput.
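The controller's two decisions described above, routing a new flow to the least-loaded node and periodically correcting an imbalanced system, can be sketched as follows. This is a minimal illustration only: the function names, the load representation, and the 0.2 imbalance threshold are assumptions, not taken from the paper.

```python
# Hypothetical sketch of the controller's balancing logic; loads are
# normalized utilizations per service node. Names and thresholds are
# illustrative, not from the paper.

def pick_node(loads):
    """Route a new flow to the node with the lowest current load."""
    return min(loads, key=loads.get)

def rebalance(loads, threshold=0.2):
    """Dynamic correction: when the max-min load spread exceeds the
    threshold, shift half the spread from the busiest to the idlest node."""
    busiest = max(loads, key=loads.get)
    idlest = min(loads, key=loads.get)
    spread = loads[busiest] - loads[idlest]
    if spread > threshold:
        shift = spread / 2
        loads[busiest] -= shift
        loads[idlest] += shift
    return loads

loads = {"n1": 0.9, "n2": 0.3, "n3": 0.5}
target = pick_node(loads)   # the idlest node receives the next flow
loads = rebalance(loads)    # busiest and idlest meet in the middle
```

In a real deployment the load figures would come from OpenFlow statistics polling rather than a static dictionary.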
The Crow Search Algorithm (CSA) is a swarm-based single-objective optimizer proposed in recent years, inspired by the behavior of crows that hide food in different locations and retrieve it when needed. The original version of the CSA has simple parameters and moderate performance. However, it often converges slowly or gets stuck in a locally optimal region because it lacks a strategy for harmonizing the exploitation and exploration phases. Therefore, mutation and crisscross strategies are combined into CSA (CCMSCSA) in this paper to improve its performance and provide an efficient optimizer for various optimization problems. To verify the superiority of CCMSCSA, a set of comparisons has been performed with several well-established and advanced metaheuristics on 15 benchmark functions. The experimental results show that the proposed CCMSCSA meaningfully improves convergence speed and the ability to escape local optima. In addition, the scalability of CCMSCSA is analyzed, and the algorithm is applied to several engineering problems in a constrained space as well as to feature selection problems. Experimental results show that CCMSCSA scales well and finds better solutions than its competitors when dealing with combinatorial optimization problems. The proposed CCMSCSA performs well in almost all experiments, so we hope researchers will see it as an effective method for solving constrained and unconstrained optimization problems.
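The two strategies grafted onto CSA can be sketched generically. The horizontal-crossover formula below follows the standard crisscross operator (blend two parents per dimension and perturb along their difference); the Gaussian mutation and all constants are illustrative assumptions, not the paper's exact parameterization.

```python
import random

random.seed(7)

def horizontal_crossover(x1, x2):
    """Crisscross (horizontal) crossover: each dimension of two parents is
    blended with a random ratio and perturbed along their difference."""
    off1, off2 = [], []
    for a, b in zip(x1, x2):
        r1, r2 = random.random(), random.random()
        e1, e2 = random.uniform(-1, 1), random.uniform(-1, 1)
        off1.append(r1 * a + (1 - r1) * b + e1 * (a - b))
        off2.append(r2 * b + (1 - r2) * a + e2 * (b - a))
    return off1, off2

def mutate(x, rate=0.1, scale=0.05):
    """Gaussian mutation applied gene-wise with a small probability."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in x]

off1, off2 = horizontal_crossover([0.0, 1.0, 2.0], [2.0, 1.0, 0.0])
mutant = mutate(off1)
```

Note that where both parents agree on a dimension (here the middle gene), crossover leaves it unchanged; diversity on that dimension can only come from mutation, which is one reason the two operators complement each other.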
To address the limited semantic description scope and verification capability of security policies, a semantic description method for the security policy based on ontology is presented. By defining the basic elements of the security policy, a relationship model between the ontology and the concepts of the security policy is established based on the Web Ontology Language (OWL), so as to construct a semantic description framework for the security policy. Through modeling and reasoning in Protégé, an ontology model of the authorization policy is proposed, and first-order predicate description logic is introduced for the analysis and verification of the model. Results show that the ontology-based semantic description of security policy offers better flexibility and practicality.
Artificial Intelligence (AI) has become a hotspot in medical image analysis and offers promising solutions. Although smart diagnosis has been explored for common diseases of the urinary system, some problems remain unsolved. A nine-layer Convolutional Neural Network (CNN) is proposed in this paper to classify renal Computed Tomography (CT) images. Four groups of comparative experiments show that the structure of this CNN is optimal and achieves good performance, with an average accuracy of about 92.07 ± 1.67%. Although our renal CT dataset is not very large, we augment the training data with affine transformations (translation, rotation, and scaling) in geometric space and gamma and noise transformations in color space. Experimental results validate that Data Augmentation (DA) on the training data improves the performance of the proposed CNN, raising average accuracy by about 0.85% compared to training without DA. The proposed algorithm offers a promising way to help clinical doctors recognize abnormal images automatically, faster than manual judgment and more accurately than previous methods.
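Two of the augmentations mentioned, gamma correction in color space and a geometric scaling transform, can be sketched on a toy grayscale "image" (a list of rows of 0-255 integers). This is a minimal self-contained sketch; real pipelines would use an image library, and the parameter values here are illustrative.

```python
def gamma_transform(img, gamma):
    """Pixel-wise gamma correction: out = 255 * (in / 255) ** gamma."""
    return [[round(255 * (p / 255) ** gamma) for p in row] for row in img]

def scale_nearest(img, factor):
    """Nearest-neighbour upscaling by an integer factor."""
    return [[img[i // factor][j // factor]
             for j in range(len(img[0]) * factor)]
            for i in range(len(img) * factor)]

img = [[0, 128], [255, 64]]
bright = gamma_transform(img, 0.5)   # gamma < 1 brightens mid-tones
big = scale_nearest(img, 2)          # 2x2 -> 4x4
```

Each such transform yields a label-preserving variant of a training image, which is how a small CT dataset is stretched into a larger effective training set.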
With the maturing of 5G, Mobile Edge CrowdSensing (MECS), as an intelligent data collection paradigm, offers broad prospects for various IoT applications. However, sensing users, as data uploaders, lack a balance between data benefits and privacy threats, leading either to conservative uploads with low revenue or to excessive uploads with privacy breaches. To solve this problem, a Dynamic Privacy Measurement and Protection (DPMP) framework is proposed based on differential privacy and reinforcement learning. First, a DPM model is designed to quantify the amount of data privacy, together with a method to calculate a personalized privacy threshold for each user. Furthermore, a Dynamic Private sensing data Selection (DPS) algorithm is proposed to help sensing users maximize data benefits within their privacy thresholds. Finally, theoretical analysis and ample experimental results show that the DPMP framework effectively and efficiently balances data benefits against sensing-user privacy; in particular, it achieves 63% higher training efficiency and 23% higher data benefits, respectively, compared to a Monte Carlo algorithm.
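The core idea of uploading within a personalized privacy threshold can be sketched with a toy differential-privacy accountant: each noisy upload spends part of an epsilon budget, and uploads stop once the user's threshold is exhausted. The class name, the sensitivity/scale parameters, and the simple additive accounting are assumptions for illustration, not the paper's DPM model.

```python
import math
import random

random.seed(3)

class PrivacyBudget:
    """Toy per-user privacy accountant: uploading a value with sensitivity
    `sens` under Laplace noise of scale b spends epsilon = sens / b."""
    def __init__(self, threshold):
        self.threshold = threshold   # personalized privacy threshold
        self.spent = 0.0

    def try_upload(self, value, sens=1.0, b=2.0):
        eps = sens / b
        if self.spent + eps > self.threshold:
            return None               # refuse: budget would be exceeded
        self.spent += eps
        u = random.random() - 0.5     # Laplace(0, b) via inverse CDF
        noise = -b * math.copysign(1, u) * math.log(1 - 2 * abs(u))
        return value + noise

acct = PrivacyBudget(threshold=1.0)
r1 = acct.try_upload(10.0)   # spends 0.5 of the budget
r2 = acct.try_upload(10.0)   # spends the remaining 0.5
r3 = acct.try_upload(10.0)   # refused
```

A reinforcement-learning agent, as in the paper's DPS algorithm, would choose which data to upload so that the benefit earned before the budget runs out is maximized.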
As a complementary technology to Binary Decision Diagram-based (BDD-based) symbolic model checking, verification techniques based on the Boolean satisfiability problem have gained an increasingly wide range of applications over the last few decades, bringing a dramatic improvement to automatic verification. In this paper, we first introduce the theory of Boolean satisfiability verification, including a description of the Boolean satisfiability verification problem, the Davis-Putnam-Logemann-Loveland (DPLL) complete verification algorithm, the solvers that have been developed, and the logic languages those solvers use. Moreover, we survey in detail a large number of optimizations based on Boolean SATisfiability (SAT) and Satisfiability Modulo Theories (SMT) solving, including incomplete methods such as bounded model checking, and methods for model checking of concurrent programs. Finally, we point out the major challenges pervasive in industrial practice and prospective directions for future research in the field of formal verification.
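The DPLL procedure the abstract names is compact enough to sketch in full: unit propagation to force implied assignments, simplification of satisfied clauses, and branching with backtracking on a chosen literal. Clauses use DIMACS-style signed-integer literals. This is a didactic sketch; production solvers add clause learning, watched literals, and restarts.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: clauses are lists of nonzero ints (positive = true
    literal, negative = negated). Returns a satisfying {var: bool}
    assignment, or None if the formula is unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses = [list(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        # Unit propagation: a one-literal clause forces an assignment.
        for c in clauses:
            if len(c) == 1:
                lit = c[0]
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] != val:
                        return None          # contradictory units
                else:
                    assignment[var] = val
                    changed = True
        # Simplify: drop satisfied clauses, prune falsified literals.
        new = []
        for c in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in c):
                continue                      # clause already satisfied
            c = [l for l in c if abs(l) not in assignment]
            if not c:
                return None                   # empty clause: conflict
            new.append(c)
        clauses = new
    if not clauses:
        return assignment                     # all clauses satisfied
    # Branch on a literal from the first remaining clause.
    lit = clauses[0][0]
    for guess in (lit, -lit):
        res = dpll(clauses + [[guess]], dict(assignment))
        if res is not None:
            return res
    return None
```

For example, the formula (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3) is satisfiable and forces x2 and x3 true, while (x1) ∧ (¬x1) is unsatisfiable.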
Dynamic publishing of social network graphs offers insights into user behavior but brings privacy risks, notably re-identification attacks on evolving data snapshots. Existing methods based on k-anonymity can mitigate these attacks but are cumbersome, neglect dynamic protection of community structure, and lack precise utility measures. To address these challenges, we present a dynamic social network graph anonymity scheme with community structure protection (DSNGA-CSP), which achieves the dynamic anonymization process by incorporating community detection. First, DSNGA-CSP categorizes the communities of the original graph into three types at each timestamp and partitions community subgraphs only for a specific category at each updated timestamp. Then, DSNGA-CSP performs intra-community and inter-community anonymization separately to retain more of the community structure of the original graph at each timestamp. It anonymizes community subgraphs by the proposed novel k-composition method and anonymizes inter-community edges by edge isomorphism. Finally, a novel information loss metric is introduced in DSNGA-CSP to precisely capture the utility of the anonymized graph through original information preservation and anonymous information changes. Extensive experiments conducted on five real-world datasets demonstrate that DSNGA-CSP consistently outperforms existing methods, providing a more effective balance between privacy and utility. Specifically, DSNGA-CSP shows an average utility improvement of approximately 30% compared to TAKG and CTKGA on three dynamic graph datasets, according to the proposed information loss metric IL.
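To make the privacy notion concrete, the check below tests degree k-anonymity on a single graph snapshot: every vertex degree must be shared by at least k vertices, which is the basic property degree-based re-identification attacks exploit when it fails. This is a generic illustration of the k-anonymity family, not the paper's DSNGA-CSP scheme, and the function name is an assumption.

```python
from collections import Counter

def is_degree_k_anonymous(edges, k):
    """True if every vertex degree in the undirected graph is shared by at
    least k vertices; otherwise a degree-based attacker can narrow down
    candidates for re-identification."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    freq = Counter(deg.values())
    return all(count >= k for count in freq.values())

square = [(1, 2), (2, 3), (3, 4), (4, 1)]   # every degree equals 2
star = [(1, 2), (1, 3), (1, 4)]             # hub's degree 3 is unique
```

In a dynamic-publishing setting this check would have to hold across snapshots jointly, which is why evolving graphs are harder to protect than a single release.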
The evidential reasoning (ER) rule framework has been widely applied in multi-attribute decision analysis and system assessment to manage uncertainty. However, traditional ER implementations suffer from two critical limitations: 1) the unrealistic assumption of complete evidence independence, and 2) a lack of mechanisms to differentiate causal relationships from spurious correlations. Existing similarity-based approaches often misinterpret interdependent evidence, leading to unreliable decision outcomes. To address these gaps, this study proposes a causality-enhanced ER rule (CER-e) framework with three key methodological innovations: 1) a multidimensional causal representation of evidence to capture dependency structures; 2) probabilistic quantification of causal strength using transfer entropy, a model-free information-theoretic measure; and 3) systematic integration of causal parameters into the ER inference process while maintaining evidential objectivity. The PC algorithm is employed during causal discovery to eliminate spurious correlations, ensuring robust causal inference. Case studies in two domains, telecommunications network security assessment and structural risk evaluation, validate CER-e's effectiveness in real-world scenarios. Under simulated incomplete-information conditions, the framework demonstrates superior algorithmic robustness compared to traditional ER. Comparative analyses show that CER-e significantly improves both the interpretability of causal relationships and the reliability of assessment results, establishing a novel paradigm for integrating causal inference with evidential reasoning in complex system evaluation.
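Transfer entropy, the causal-strength measure named above, can be estimated for short discrete series with plain histogram counts: TE(X → Y) = Σ p(y_{t+1}, y_t, x_t) log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ], i.e., how much knowing x_t reduces uncertainty about y_{t+1} beyond y_t alone. This is a generic first-order estimator for illustration, not the paper's implementation.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Histogram estimate of TE(X -> Y) for two equal-length discrete
    series, with history length 1 on each side."""
    triples = Counter()   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter()  # (y_t, x_t)
    pairs_yy = Counter()  # (y_{t+1}, y_t)
    singles = Counter()   # y_t
    n = len(y) - 1
    for t in range(n):
        triples[(y[t + 1], y[t], x[t])] += 1
        pairs_yx[(y[t], x[t])] += 1
        pairs_yy[(y[t + 1], y[t])] += 1
        singles[y[t]] += 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
y = [0] + x[:-1]   # y copies x with one step of lag: strong X -> Y flow
```

Because transfer entropy is directional, TE(X → Y) and TE(Y → X) generally differ, which is what lets the framework distinguish cause-like dependence from mere correlation.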
Due to its excellent performance in modeling complex systems under small samples and uncertainty, the Belief Rule Base (BRB) expert system has been widely applied in fault diagnosis. However, fault diagnosis for complex mechanical equipment normally involves multiple attributes, which can lead to a rule-number explosion in BRB and limit efficiency and accuracy. To solve this problem, a novel Combination Belief Rule Base (C-BRB) model based on a Directed Acyclic Graph (DAG) structure is proposed in this paper. By dispersing numerous attributes into a parallel structure composed of different sub-BRBs, C-BRB can effectively reduce the amount of computation with acceptable results. At the same time, a path selection strategy considering the accuracy of child nodes is designed in C-BRB to obtain the most suitable submodels. Finally, a fusion method based on the Evidential Reasoning (ER) rule is used to combine the belief rules of C-BRB and generate the final results. To illustrate the effectiveness and reliability of the proposed method, a case study on fault diagnosis of rolling bearings is conducted, and the results are compared with those of other methods.
Software defect prediction aims to find potential defects based on historical data and software features. Software features reflect the characteristics of software modules. However, some of these features may be more relevant to the class (defective or non-defective), while others may be redundant or irrelevant. To fully measure the correlation between different features and the class, we present a feature selection approach based on a similarity measure (SM) for software defect prediction. First, the feature weights are updated according to the similarity of samples in different classes. Second, a feature ranking list is generated by sorting the feature weights in descending order, and feature subsets are selected from the ranking list in sequence. Finally, all feature subsets are evaluated with a k-nearest neighbor (KNN) model, and classification performance is measured by the area under the curve (AUC) metric. The experiments are conducted on 11 National Aeronautics and Space Administration (NASA) datasets, and the results show that our approach performs better than, or comparably to, the compared feature selection approaches in terms of classification performance.
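The ranking-and-subset step of this pipeline is mechanical and can be sketched directly: sort feature indices by weight, then take the nested prefixes top-1, top-2, ..., each of which would be scored by the KNN/AUC evaluation. The weight-update and evaluation stages are omitted here; the helper names are assumptions.

```python
def rank_features(weights):
    """Return feature indices sorted by weight, descending."""
    return sorted(range(len(weights)), key=lambda i: -weights[i])

def nested_subsets(ranking):
    """Candidate subsets in sequence: top-1, top-2, ..., all features."""
    return [ranking[:k] for k in range(1, len(ranking) + 1)]

weights = [0.1, 0.9, 0.4]          # similarity-derived feature weights
ranking = rank_features(weights)   # best feature first
subsets = nested_subsets(ranking)  # each subset then goes to KNN + AUC
```

Evaluating only the nested prefixes keeps the search linear in the number of features instead of exponential over all subsets, at the cost of ignoring feature interactions the ranking misses.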
There are two key issues in distributed intrusion detection systems: maintaining the load balance of the system and protecting data integrity. To address these issues, this paper proposes a new distributed intrusion detection model for big data based on nondestructive partitioning and balanced allocation. A data allocation strategy based on capacity and workload is introduced to achieve local load balance, and a dynamic load adjustment strategy is adopted to maintain the global load balance of the cluster. Moreover, data integrity is protected by using session reassembly and session partitioning. Simulation results show that the new model offers favorable advantages such as good load balance and higher detection rate and efficiency.
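A capacity-and-workload allocation rule of the kind described can be sketched as: send each complete session to the node with the most spare capacity relative to its total capacity, so faster machines absorb proportionally more traffic. The data model and function name are illustrative assumptions; session reassembly itself is not modeled here.

```python
def assign_session(nodes):
    """Assign the next session to the node with the highest spare-capacity
    ratio (capacity - load) / capacity, then record the new load."""
    def spare_ratio(name):
        n = nodes[name]
        return (n["capacity"] - n["load"]) / n["capacity"]
    target = max(nodes, key=spare_ratio)
    nodes[target]["load"] += 1
    return target

nodes = {
    "a": {"capacity": 10, "load": 9},   # spare ratio 0.1
    "b": {"capacity": 20, "load": 10},  # spare ratio 0.5
}
first = assign_session(nodes)
```

Allocating whole sessions rather than individual packets is what keeps the partitioning "nondestructive": every detector sees complete sessions and integrity-sensitive analysis is not split across nodes.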
The advent of Big Data has made Machine Learning tasks more intricate, as they frequently involve higher-dimensional data. Feature Selection (FS) methods can reduce the complexity of the data and enhance the accuracy, generalizability, and interpretability of models. Meta-heuristic algorithms are often utilized for FS tasks due to their low requirements and efficient performance. This paper introduces an augmented Forensic-Based Investigation algorithm (DCFBI) that incorporates a Dynamic Individual Selection (DIS) and crisscross (CC) mechanism to improve the pursuit phase of the FBI. Moreover, a binary version of DCFBI (BDCFBI) is applied to FS. Experiments conducted on IEEE CEC 2017 against other metaheuristics demonstrate that DCFBI surpasses them in search capability. The influence of the different mechanisms on the original FBI is analyzed on benchmark functions, and scalability is verified by comparing DCFBI with the original FBI on benchmarks of varied dimensions. BDCFBI is then applied to 18 real datasets from the UCI machine learning database and the Wieslaw dataset to select near-optimal features, and is compared with six renowned binary metaheuristics. The results show that BDCFBI is more competitive than similar methods and acquires feature subsets with superior classification accuracy.
As the risk of malware sharply increases on the Android platform, Android malware detection has become an important research topic. Existing works have demonstrated that the required permissions of Android applications are valuable for malware analysis, but how to exploit those permission patterns for malware detection remains an open issue. In this paper, we introduce contrasting permission patterns to characterize the essential differences between malware and clean applications from the permission perspective. A framework based on contrasting permission patterns is then presented for Android malware detection. Following the proposed framework, an ensemble classifier, Enclamald, is developed to detect whether an application is potentially malicious. Every contrasting permission pattern acts as a weak classifier in Enclamald, and the weighted predictions of the involved weak classifiers are aggregated into the final result. Experiments on real-world applications validate that the proposed Enclamald classifier outperforms commonly used classifiers for Android malware detection.
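The aggregation step, each pattern acting as a weak classifier whose weighted votes are summed, can be sketched as below. The pattern weights here are hand-set for illustration; in the real Enclamald they are learned, and the actual voting scheme may differ.

```python
def weighted_pattern_vote(app_permissions, patterns):
    """Each (pattern, weight) pair is a weak classifier: a pattern that is
    a subset of the app's permissions votes 'malicious' with its weight,
    otherwise it votes 'clean'. The signed sum decides the verdict."""
    score = 0.0
    for pattern, weight in patterns:
        if pattern <= app_permissions:   # pattern permissions all present
            score += weight
        else:
            score -= weight
    return "malicious" if score > 0 else "clean"

patterns = [({"SEND_SMS", "READ_CONTACTS"}, 0.8),   # malware-leaning pattern
            ({"INTERNET"}, 0.1)]                    # weak, common pattern
verdict = weighted_pattern_vote(
    {"SEND_SMS", "READ_CONTACTS", "INTERNET"}, patterns)
```

The strength of this design is that each pattern is individually interpretable: an analyst can read off which permission combinations drove a "malicious" verdict.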
The Evidential Reasoning (ER) rule, which can combine multiple pieces of independent evidence conjunctively, is widely applied in multiple attribute decision analysis. However, the assumption of independence among evidence is often not satisfied, rendering the ER rule inapplicable. In this paper, an Evidential Reasoning rule for Dependent Evidence combination (ERr-DE) is developed. First, the aggregation sequence of multiple pieces of evidence is determined according to evidence reliability. On this basis, a method for calculating the Relative Total Dependence Coefficient (RTDC) of evidence is proposed using the distance correlation method. Second, the RTDC is introduced into the ER rule framework as a discounting factor, and the ERr-DE model is formulated. The aggregation of two pieces of dependent evidence by ERr-DE is investigated and then generalized to multiple pieces of non-independent evidence. Third, sensitivity analysis is carried out to investigate the relationship between the model output and the RTDC. The properties of the sensitivity coefficient are explored and mathematically proved. The conjunctive probabilistic reasoning process of ERr-DE and the properties of the sensitivity coefficient are verified by two numerical examples. Finally, the practical applicability of ERr-DE is validated by a case study on the performance assessment of a satellite turntable system.
Inferring unknown social trust relations has attracted increasing attention in recent years. However, social trust, as a social concept, is intrinsically dynamic, and exploiting temporal dynamics provides both challenges and opportunities for social trust prediction. In this paper, we investigate social trust prediction by exploiting temporal dynamics. In particular, we model the dynamics of user preferences in two principled ways: the first focuses on temporal weight; the second targets temporal smoothness. By incorporating these two types of temporal dynamics into a traditional matrix factorization based social trust prediction model, two extended social trust prediction models are proposed, and the corresponding algorithms to solve them are designed. We conduct experiments on a real-world dataset, and the results demonstrate the effectiveness of our proposed models. Further experiments are also conducted to understand the importance of temporal dynamics in social trust prediction.
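The "temporal weight" idea is commonly realized as exponential time decay: recent interactions contribute more to the factorization loss than stale ones. The sketch below shows one standard decay form, w = exp(-λ · age); the function name and the decay rate λ are illustrative assumptions, not the paper's exact weighting.

```python
from math import exp

def temporal_weights(timestamps, now, lam=0.5):
    """Exponential time-decay weights for observed interactions: an
    interaction aged (now - t) contributes with weight exp(-lam * age).
    lam controls how fast old evidence fades."""
    return [exp(-lam * (now - t)) for t in timestamps]

w = temporal_weights([1, 2, 3], now=3)   # oldest first, newest last
```

In a weighted matrix factorization, each observed trust entry's squared error would simply be multiplied by its weight, so the model fits recent trust behavior most closely while still using the full history.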
Group recommendations derive from the phenomenon that people tend to participate in activities together, whether online or in reality, which creates real scenarios and promotes the development of group recommendation systems. Different from traditional personalized recommendation methods, which are concerned only with the accuracy of recommendations for individuals, group recommendation is expected to balance the needs of multiple users. Building a proper model for a group of users to improve the quality of the recommended list has become a major challenge for group recommendation applications. Existing studies often focus on explicit user characteristics, such as gender, occupation, and social status, to analyze the importance of users when modeling group preferences. However, it is usually difficult to obtain such extra user information, especially for ad hoc groups. To this end, we design a novel entropy-based method that extracts users' implicit characteristics from their historical ratings to obtain the weights of group members. These weights represent user importance, so we can obtain group preferences according to user weights and then model the group decision process to make a recommendation. We evaluate our method on two metrics: recommendation relevance and overall ratings of recommended items. We compare our method to baselines, and experimental results show that it achieves a significant improvement in group recommendation performance.
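One plausible reading of the entropy-based weighting is sketched below: compute the Shannon entropy of each member's rating distribution (a user who always rates 5 carries little discriminative information; a user who uses the whole scale carries more) and normalize the entropies into member weights. The exact entropy definition and normalization used in the paper may differ; treat this as an assumption-laden illustration.

```python
from math import log2

def rating_entropy(ratings):
    """Shannon entropy (bits) of a user's rating-value distribution."""
    counts = {}
    for r in ratings:
        counts[r] = counts.get(r, 0) + 1
    n = len(ratings)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def member_weights(histories):
    """Normalize per-user entropies into group-member weights; fall back
    to uniform weights if every member has zero entropy."""
    ents = [rating_entropy(h) for h in histories]
    total = sum(ents)
    if total == 0:
        return [1 / len(ents)] * len(ents)
    return [e / total for e in ents]

w = member_weights([[5, 5, 5, 5],    # always rates 5: entropy 0 bits
                    [1, 2, 4, 5]])   # uses the scale evenly: 2 bits
```

The group preference for an item would then be the weight-averaged member preference, so the informative rater dominates the group model when other members are uninformative.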
Humans are experiencing the inclusion of artificial agents in their lives, such as unmanned vehicles, service robots, voice assistants, and intelligent medical care. If artificial agents cannot align with social values or make ethical decisions, they may not meet the expectations of humans. Traditionally, an ethical decision-making framework is constructed by rule-based or statistical approaches. In this paper, we propose an ethical decision-making framework based on incremental ILP (Inductive Logic Programming), which can overcome the brittleness of rule-based approaches and the limited interpretability of statistical approaches. As current incremental ILP systems have difficulty resolving conflicts, we propose a novel ethical decision-making framework that takes conflicts into account, built on our proposed incremental ILP system. The framework consists of two processes: a learning process and a deduction process. The first process records bottom clauses with their score functions and learns rules guided by entailment and the score function. The second process derives an ethical decision based on the rules. In an ethical scenario about chatbots for teenagers' mental health, we verify that our framework can learn ethical rules and make ethical decisions. Besides, we extract the incremental ILP component from the framework and compare it with state-of-the-art ILP systems based on ASP (Answer Set Programming), focusing on conflict resolution. The comparison results show that our proposed system generates better-quality rules than most other systems.
Knowledge graph embedding aims to embed the entities and relations of a knowledge graph into a continuous, dense, low-dimensional, real-valued vector space. Among the various embedding models that have appeared in recent years, translation-based models such as TransE, TransH, and TransR achieve state-of-the-art performance. However, in these models, the negative triples used in the training phase are generated by replacing each positive entity in positive triples with a negative entity drawn from the entity set with uniform probability; as a result, a large number of invalid negative triples are generated and used during training. In this paper, a method named adaptive negative sampling (ANS) is proposed to generate valid negative triples. The method first divides all entities into groups of similar entities using a clustering algorithm such as K-Means. Then, for each positive triple, the head entity is replaced by a negative entity from the cluster in which the head entity is located, and the tail entity is replaced in a similar manner. This generates a set of high-quality negative triples that improve the effectiveness of embedding models. The ANS method was combined with the TransE model, and the resulting model is named TransE-ANS. Experimental results show that TransE-ANS achieves a significant improvement in the link prediction task.
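The corruption step of ANS can be sketched with precomputed clusters: instead of drawing a replacement entity uniformly from the whole entity set, draw it from the corrupted entity's own cluster, so negatives stay semantically plausible and are rarely trivially invalid. The cluster contents and function name below are toy assumptions; in the paper the clusters come from K-Means over learned embeddings.

```python
import random

random.seed(0)

def adaptive_negative_sample(triple, clusters, entity_cluster):
    """ANS-style corruption: with probability 1/2 replace the head,
    otherwise the tail, using a replacement drawn from the same cluster
    as the entity being replaced (never the entity itself)."""
    h, r, t = triple
    if random.random() < 0.5:
        pool = [e for e in clusters[entity_cluster[h]] if e != h]
        return (random.choice(pool), r, t) if pool else triple
    pool = [e for e in clusters[entity_cluster[t]] if e != t]
    return (h, r, random.choice(pool)) if pool else triple

clusters = {0: ["paris", "berlin", "rome"], 1: ["france", "italy"]}
entity_cluster = {"paris": 0, "berlin": 0, "rome": 0,
                  "france": 1, "italy": 1}
neg = adaptive_negative_sample(("paris", "capital_of", "france"),
                               clusters, entity_cluster)
```

A uniform sampler might produce ("paris", "capital_of", "berlin"), which the relation's type signature already makes absurd; cluster-restricted sampling yields harder negatives like ("paris", "capital_of", "italy") that actually sharpen the margin.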
Funding (LLM misalignment review): supported by the National Natural Science Foundation of China (62462019, 62172350); the Guangdong Basic and Applied Basic Research Foundation (2023A1515012846); the Guangxi Science and Technology Major Program (AA24263010); the Key Research and Development Program of Guangxi (AB24010085); the Key Laboratory of Equipment Data Security and Guarantee Technology, Ministry of Education (GDZB2024060500); the 2024 Higher Education Scientific Research Planning Project (No. 24NL0419); the Nantong Science and Technology Project (No. JC2023070); and the Open Fund of the Advanced Cryptography and System Security Key Laboratory of Sichuan Province (Grant No. SKLACSS-202407); sponsored by the Cultivation of Young and Middle-aged Academic Leaders in the "Qing Lan Project" of Jiangsu Province and the 2025 Outstanding Teaching Team in the "Qing Lan Project" of Jiangsu Province.
Funding (SDN-based cloud load balancing): supported by the National Natural Science Foundation of China (No. 61163058, No. 61201250, and No. 61363006) and the Guangxi Key Laboratory of Trusted Software (No. KX201306).
Funding: Supported by the Natural Science Foundation of Zhejiang Province (LZ22F020005), the National Natural Science Foundation of China (42164002, 62076185, and U1809209), and the National Key R&D Program of China (2018YFC1503806).
Abstract: The Crow Search Algorithm (CSA) is a swarm-based single-objective optimizer proposed in recent years, inspired by the behavior of crows that hide food in different locations and retrieve it when needed. The original version of the CSA has simple parameters and moderate performance. However, it often converges slowly or gets stuck in a locally optimal region because it lacks a strategy for harmonizing the exploitation and exploration phases. Therefore, mutation and crisscross strategies are combined into CSA (CCMSCSA) in this paper to improve its performance and provide an efficient optimizer for various optimization problems. To verify the superiority of CCMSCSA, a set of comparisons has been performed against well-established and advanced metaheuristics on 15 benchmark functions. The experimental results verify that the proposed CCMSCSA meaningfully improves the convergence speed and the ability to jump out of local optima. In addition, the scalability of CCMSCSA is analyzed, and the algorithm is applied to several engineering problems in a constrained space as well as to feature selection problems. Experimental results show that the scalability of CCMSCSA is significantly improved and that it can find better solutions than its competitors on combinatorial optimization problems. The proposed CCMSCSA performs well in almost all experiments, and we therefore offer it to researchers as an effective method for solving constrained and unconstrained optimization problems.
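The basic crow-search update (each crow follows another crow's food cache unless that crow is "aware", in which case it jumps randomly) can be sketched as follows; the added Gaussian mutation is only a loose stand-in for the paper's mutation/crisscross strategies, and all parameter values are illustrative:

```python
import random

def csa_step(positions, memory, f, fl=2.0, ap=0.1, sigma=0.1,
             lo=-5.0, hi=5.0):
    """One iteration of a 1-D crow-search-style update with an extra
    Gaussian mutation to help escape local optima."""
    n = len(positions)
    for i in range(n):
        j = random.randrange(n)            # crow i picks crow j to follow
        if random.random() > ap:           # j unaware: move toward j's cache
            cand = positions[i] + random.random() * fl * (memory[j] - positions[i])
        else:                              # j aware: jump to a random spot
            cand = random.uniform(lo, hi)
        cand += random.gauss(0.0, sigma)   # mutation step
        cand = max(lo, min(hi, cand))      # keep within the search bounds
        positions[i] = cand
        if f(cand) < f(memory[i]):         # memory keeps each crow's best
            memory[i] = cand
    return positions, memory

random.seed(0)
pos = [random.uniform(-5, 5) for _ in range(20)]
mem = pos[:]
for _ in range(200):
    pos, mem = csa_step(pos, mem, lambda x: x * x)  # minimize f(x) = x^2
best = min(mem, key=lambda x: x * x)
```

On this toy objective the population memory converges toward the global optimum at 0.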
Funding: Supported by the National Natural Science Foundation of China (Nos. 61462020, 61363006, and 61163057), the Guangxi Experiment Center of Information Science Foundation (20130329), and the Guangxi Natural Science Foundation (2014GXNSFAA118375).
Abstract: To address the limited semantic description scope and verification capability of existing security policies, a semantic description method for security policy based on ontology is presented. By defining the basic elements of the security policy, a relationship model between the ontology and the concepts of the security policy is established based on the Web Ontology Language (OWL), so as to construct the semantic description framework of the security policy. Through modeling and reasoning in Protege, an ontology model of the authorization policy is proposed, and first-order predicate description logic is introduced for the analysis and verification of the model. Results show that the ontology-based semantic description of security policy has better flexibility and practicality.
Funding: This study was supported by the National Educational Science Plan Foundation of the 13th Five-Year Plan (DIA170375), China, the Guangxi Key Laboratory of Trusted Software (kx201901), and a British Heart Foundation Accelerator Award, UK.
Abstract: Artificial Intelligence (AI) has become a hotspot in medical image analysis and offers promising solutions. Although smart diagnosis of common diseases of the urinary system has been explored, some problems remain unsolved. A nine-layer Convolutional Neural Network (CNN) is proposed in this paper to classify renal Computed Tomography (CT) images. Four groups of comparative experiments show that the structure of this CNN is optimal and achieves good performance, with an average accuracy of about 92.07±1.67%. Although our renal CT dataset is not very large, we augment the training data with geometric transformations (affine, translation, rotation, and scaling) and with gamma and noise transformations in color space. Experimental results validate that Data Augmentation (DA) of the training data improves the performance of the proposed CNN by about 0.85% average accuracy compared with training without DA. The proposed algorithm offers a promising solution to help clinical doctors recognize abnormal images automatically, faster than manual judgment and more accurately than previous methods.
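The augmentation idea, enlarging a small training set with geometric and color-space transforms, can be illustrated on a tiny grayscale image; these pure-Python transforms are a minimal sketch, not the paper's actual pipeline (which would operate on full CT volumes with a proper image library):

```python
def rotate90(img):
    """Rotate a 2-D image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def flip_h(img):
    """Mirror the image left-right."""
    return [row[::-1] for row in img]

def gamma(img, g, max_val=255):
    """Gamma correction in color space: v -> max_val * (v/max_val)**g."""
    return [[round(max_val * (v / max_val) ** g) for v in row] for row in img]

# A 2x2 toy grayscale image; each transform yields a new training sample.
img = [[0, 64], [128, 255]]
augmented = [img, rotate90(img), flip_h(img), gamma(img, 0.5)]
```

Composing a handful of such transforms multiplies the effective dataset size, which is why DA helps when the raw CT dataset is small.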
基金supported in part by the National Natural Science Foundation of China under Grant U1905211,Grant 61872088,Grant 62072109,Grant 61872090,and Grant U1804263in part by the Guangxi Key Laboratory of Trusted Software under Grant KX202042+3 种基金in part by the Science and Technology Major Support Program of Guizhou Province under Grant 20183001in part by the Science and Technology Program of Guizhou Province under Grant 20191098in part by the Project of High-level Innovative Talents of Guizhou Province under Grant 20206008in part by the Open Research Fund of Key Laboratory of Cryptography of Zhejiang Province under Grant ZCL21015.
Abstract: With the maturity and development of 5G, Mobile Edge CrowdSensing (MECS), as an intelligent data collection paradigm, offers broad prospects for various IoT applications. However, sensing users, as data uploaders, lack a balance between data benefits and privacy threats, leading either to conservative uploads with low revenue or to excessive uploads with privacy breaches. To solve this problem, a Dynamic Privacy Measurement and Protection (DPMP) framework is proposed based on differential privacy and reinforcement learning. First, a DPM model is designed to quantify the amount of data privacy, together with a method for computing a personalized privacy threshold for each user. Furthermore, a Dynamic Private sensing data Selection (DPS) algorithm is proposed to help sensing users maximize data benefits within their privacy thresholds. Finally, theoretical analysis and ample experimental results show that the DPMP framework effectively and efficiently balances data benefits and sensing-user privacy protection; in particular, the proposed DPMP framework achieves 63% higher training efficiency and 23% higher data benefits than the Monte Carlo algorithm.
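Two differential-privacy building blocks plausibly underlying such a framework can be sketched: the Laplace mechanism for releasing a noisy value, and a per-user budget check against a personalized threshold. This is a generic sketch of standard DP machinery, not the paper's DPM model; all names are illustrative:

```python
import random

def dp_release(true_value, sensitivity, epsilon):
    """Release a value under epsilon-differential privacy via the
    Laplace mechanism: add Laplace noise with scale sensitivity/epsilon.
    (The difference of two iid exponentials is Laplace-distributed.)"""
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

def privacy_budget_ok(epsilons, threshold):
    """Personalized-threshold check: by sequential composition, the
    total privacy cost for one user is the sum of epsilons spent."""
    return sum(epsilons) <= threshold

random.seed(42)
noisy = dp_release(100.0, sensitivity=1.0, epsilon=0.5)
```

A selection algorithm like DPS would then decide which sensing data to upload so that the accumulated epsilons stay under the user's threshold.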
Funding: Supported by the National Natural Science Foundation of China (Nos. 61063002, 61100186, and 61262008), the Guangxi Natural Science Foundation of China (2011GXNSFA018164, 2011GXNSFA018166, and 2012GXNSFAA053220), and the Key Project of the Education Department of Guangxi.
Abstract: As a complementary technology to Binary Decision Diagram-based (BDD-based) symbolic model checking, verification techniques based on the Boolean satisfiability problem have gained increasingly wide application over the last few decades, bringing dramatic improvements to automatic verification. In this paper, we first introduce the theory of Boolean satisfiability verification, including a description of the Boolean satisfiability verification problem, the Davis-Putnam-Logemann-Loveland (DPLL) complete verification algorithm, the solvers that have been developed, and the logic languages those solvers use. Moreover, we describe in detail a large number of optimizations based on Boolean SATisfiability (SAT) and Satisfiability Modulo Theories (SMT) solving, including incomplete methods such as bounded model checking and other methods for model checking concurrent programs. Finally, we point out the major challenges pervasive in industrial practice and prospective directions for future research in the field of formal verification.
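The DPLL procedure named above (simplify the formula under the current partial assignment, propagate unit clauses, then branch) fits in a short recursive sketch over CNF clauses given as lists of signed integers:

```python
def dpll(clauses, assignment=None):
    """A minimal DPLL SAT solver over CNF given as a list of lists of
    nonzero ints (positive = variable, negative = its negation).
    Returns a satisfying assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Simplify clauses under the current assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                       # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                    # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                  # every clause satisfied
    # Unit propagation: a unit clause forces its literal.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(simplified, {**assignment, abs(l): l > 0})
    # Branch on the first unassigned variable.
    v = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(simplified, {**assignment, v: value})
        if result is not None:
            return result
    return None

sat = dpll([[1, 2], [-1, 2], [-2, 3]])     # satisfiable
unsat = dpll([[1], [-1]])                  # unsatisfiable
```

Modern CDCL solvers add clause learning, non-chronological backtracking, and heuristics on top of this skeleton, which is where the dramatic performance gains discussed in the survey come from.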
Funding: Supported by the Natural Science Foundation of China (No. U22A2099) and the Innovation Project of Guangxi Graduate Education (YCBZ2023130).
Abstract: Dynamic publishing of social network graphs offers insights into user behavior but brings privacy risks, notably re-identification attacks on evolving data snapshots. Existing methods based on k-anonymity can mitigate these attacks but are cumbersome, neglect dynamic protection of the community structure, and lack precise utility measures. To address these challenges, we present a dynamic social network graph anonymity scheme with community structure protection (DSNGA-CSP), which achieves dynamic anonymization by incorporating community detection. First, DSNGA-CSP categorizes the communities of the original graph into three types at each timestamp, and only partitions community subgraphs of a specific category at each updated timestamp. Then, DSNGA-CSP performs intra-community and inter-community anonymization separately, to retain more of the community structure of the original graph at each timestamp. It anonymizes community subgraphs by the proposed novel -composition method and anonymizes inter-community edges by edge isomorphism. Finally, a novel information loss metric is introduced in DSNGA-CSP to precisely capture the utility of the anonymized graph in terms of original information preservation and anonymized information changes. Extensive experiments conducted on five real-world datasets demonstrate that DSNGA-CSP consistently outperforms existing methods, providing a more effective balance between privacy and utility. Specifically, DSNGA-CSP shows an average utility improvement of approximately 30% compared with TAKG and CTKGA on three dynamic graph datasets, according to the proposed information loss metric IL.
Funding: Supported by the Natural Science Foundation of China (Nos. U22A2099, 62273113, 62203461, and 62203365), the Innovation Project of Guangxi Graduate Education under Grant YCBZ2023130, and the Guangxi Higher Education Undergraduate Teaching Reform Project (Key Project, Grant No. 2022JGZ130).
Abstract: The evidential reasoning (ER) rule framework has been widely applied in multi-attribute decision analysis and system assessment to manage uncertainty. However, traditional ER implementations suffer from two critical limitations: 1) the unrealistic assumption of complete evidence independence, and 2) the lack of mechanisms to differentiate causal relationships from spurious correlations. Existing similarity-based approaches often misinterpret interdependent evidence, leading to unreliable decision outcomes. To address these gaps, this study proposes a causality-enhanced ER rule (CER-e) framework with three key methodological innovations: 1) a multidimensional causal representation of evidence to capture dependency structures; 2) probabilistic quantification of causal strength using transfer entropy, a model-free information-theoretic measure; and 3) systematic integration of causal parameters into the ER inference process while maintaining evidential objectivity. The PC algorithm is employed during causal discovery to eliminate spurious correlations, ensuring robust causal inference. Case studies in two domains, telecommunications network security assessment and structural risk evaluation, validate the effectiveness of CER-e in real-world scenarios. Under simulated incomplete-information conditions, the framework demonstrates superior algorithmic robustness compared with traditional ER. Comparative analyses show that CER-e significantly improves both the interpretability of causal relationships and the reliability of assessment results, establishing a novel paradigm for integrating causal inference with evidential reasoning in complex system evaluation.
Funding: Supported by the Natural Science Foundation of China (Nos. 61773388, 61751304, 61833016, 61702142, U1811264, and 61966009), the Shaanxi Outstanding Youth Science Foundation, China (No. 2020JC-34), the Key Research and Development Plan of Hainan, China (No. ZDYF2019007), the China Postdoctoral Science Foundation (No. 2020M673668), and the Guangxi Key Laboratory of Trusted Software, China (No. KX202050).
Abstract: Owing to its excellent performance in modeling complex systems under small samples and uncertainty, the Belief Rule Base (BRB) expert system has been widely applied in fault diagnosis. However, fault diagnosis for complex mechanical equipment normally requires multiple attributes, which can cause the rule number in a BRB to explode and limit efficiency and accuracy. To solve this problem, a novel Combination Belief Rule Base (C-BRB) model based on a Directed Acyclic Graph (DAG) structure is proposed in this paper. By dispersing numerous attributes into a parallel structure composed of different sub-BRBs, C-BRB effectively reduces the amount of computation with acceptable results. At the same time, a path selection strategy that considers the accuracy of child nodes is designed in C-BRB to obtain the most suitable submodels. Finally, a fusion method based on the Evidential Reasoning (ER) rule is used to combine the belief rules of C-BRB and generate the final results. To illustrate the effectiveness and reliability of the proposed method, a case study of fault diagnosis of a rolling bearing is conducted, and the result is compared with other methods.
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61673384 and 61502497), the Guangxi Key Laboratory of Trusted Software (No. kx201530), the China Postdoctoral Science Foundation (No. 2015M581887), and the Scientific Research Innovation Project for Graduate Students of Jiangsu Province, China (No. KYLX15 1443).
Abstract: Software defect prediction aims to find potential defects based on historical data and software features. Software features can reflect the characteristics of software modules. However, some of these features may be more relevant to the class (defective or non-defective), while others may be redundant or irrelevant. To fully measure the correlation between different features and the class, we present a feature selection approach based on a similarity measure (SM) for software defect prediction. First, the feature weights are updated according to the similarity of samples in different classes. Second, a feature ranking list is generated by sorting the feature weights in descending order, and feature subsets are selected from the ranking list in sequence. Finally, each feature subset is evaluated with a k-nearest neighbor (KNN) model, using the area under curve (AUC) metric to measure classification performance. The experiments are conducted on 11 National Aeronautics and Space Administration (NASA) datasets, and the results show that our approach performs better than, or is comparable to, the compared feature selection approaches in terms of classification performance.
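The first step, updating feature weights from the similarity of samples in different classes, can be sketched in a Relief-like style: a feature's weight grows with its differences across classes and shrinks with its differences within a class. This is a generic sketch of that family of weighting schemes, not the paper's exact SM formula:

```python
def rank_features(samples, labels):
    """Weight each feature by (between-class differences) minus
    (within-class differences) over all sample pairs, then return
    feature indices sorted best-first."""
    n_feat = len(samples[0])
    weights = [0.0] * n_feat
    for i, (x, yi) in enumerate(zip(samples, labels)):
        for j, (z, yj) in enumerate(zip(samples, labels)):
            if i == j:
                continue
            for f in range(n_feat):
                diff = abs(x[f] - z[f])
                weights[f] += diff if yi != yj else -diff
    return sorted(range(n_feat), key=lambda f: weights[f], reverse=True)

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 0.3], [0.1, 0.9], [1.0, 0.5], [0.9, 0.1]]
y = [0, 0, 1, 1]
ranking = rank_features(X, y)
```

Prefixes of `ranking` then form the candidate feature subsets that the paper evaluates with a KNN model and the AUC metric.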
Abstract: There are two key issues in a distributed intrusion detection system: maintaining the load balance of the system and protecting data integrity. To address these issues, this paper proposes a new distributed intrusion detection model for big data based on nondestructive partitioning and balanced allocation. A data allocation strategy based on capacity and workload is introduced to achieve local load balance, and a dynamic load adjustment strategy is adopted to maintain the global load balance of the cluster. Moreover, data integrity is protected by using session reassembly and session partitioning. The simulation results show that the new model enjoys favorable advantages such as good load balance, a higher detection rate, and better detection efficiency.
基金supported by Special Fund of Fundamental Scientific Research Business Expense for Higher School of Central Government(ZY20180119)the Natural Science Foundation of Zhejiang Province(LZ22F020005)+1 种基金the Natural Science Foundation of Hebei Province(D2022512001)National Natural Science Foundation of China(42164002,62076185).
Abstract: The advent of Big Data has rendered Machine Learning tasks more intricate, as they frequently involve higher-dimensional data. Feature Selection (FS) methods can abate the complexity of the data and enhance the accuracy, generalizability, and interpretability of models. Meta-heuristic algorithms are often utilized for FS tasks due to their low requirements and efficient performance. This paper introduces an augmented Forensic-Based Investigation algorithm (DCFBI) that incorporates a Dynamic Individual Selection (DIS) and crisscross (CC) mechanism to improve the pursuit phase of the FBI. Moreover, a binary version of DCFBI (BDCFBI) is applied to FS. Experiments conducted on IEEE CEC 2017 against other metaheuristics demonstrate that DCFBI surpasses them in search capability. The influence of the different mechanisms on the original FBI is analyzed on benchmark functions, and scalability is verified by comparing DCFBI with the original FBI on benchmarks of varied dimensions. BDCFBI is then applied to 18 real datasets from the UCI machine learning database and the Wieslaw dataset to select near-optimal features, and is compared with six renowned binary metaheuristics. The results show that BDCFBI is more competitive than similar methods and acquires a subset of features with superior classification accuracy.
Funding: This work was supported by the Deakin Cyber Security Research Cluster, the National Natural Science Foundation of China under Grant Nos. 61304067 and 61202211, the Guangxi Key Laboratory of Trusted Software (No. kx201325), and the Fundamental Research Funds for the Central Universities under Grant No. 31541311314.
Abstract: As the risk of malware sharply increases on the Android platform, Android malware detection has become an important research topic. Existing works have demonstrated that the required permissions of Android applications are valuable for malware analysis, but how to exploit those permission patterns for malware detection remains an open issue. In this paper, we introduce contrasting permission patterns to characterize the essential differences between malware and clean applications from the permission perspective. A framework based on contrasting permission patterns is then presented for Android malware detection. Within the proposed framework, an ensemble classifier, Enclamald, is further developed to detect whether an application is potentially malicious. Every contrasting permission pattern acts as a weak classifier in Enclamald, and the weighted predictions of the involved weak classifiers are aggregated into the final result. Experiments on real-world applications validate that the proposed Enclamald classifier outperforms commonly used classifiers for Android malware detection.
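The aggregation idea, each matching permission pattern casts a weighted vote and the signed sum decides the label, can be sketched as follows. The patterns, weights, and decision rule here are purely illustrative, not Enclamald's learned model:

```python
def ensemble_vote(permissions, patterns):
    """Aggregate weighted votes from permission patterns: a pattern
    fires when all of its permissions are present, and contributes its
    weight toward (or against) the 'malware' label."""
    score = 0.0
    for perms, weight, votes_malware in patterns:
        if perms <= permissions:           # pattern matches the app
            score += weight if votes_malware else -weight
    return score > 0

# Hypothetical contrasting patterns: the first two vote malware,
# the INTERNET-only pattern votes clean.
PATTERNS = [
    ({"SEND_SMS", "READ_CONTACTS"}, 0.9, True),
    ({"RECEIVE_BOOT_COMPLETED", "SEND_SMS"}, 0.7, True),
    ({"INTERNET"}, 0.2, False),
]

flag = ensemble_vote({"SEND_SMS", "READ_CONTACTS", "INTERNET"}, PATTERNS)
```

In the real system, the patterns and their weights would be mined from labeled corpora of malicious and clean applications rather than written by hand.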
基金co-supported by the National Natural Science Foundation of China (No. 61833016)the Shaanxi Outstanding Youth Science Foundation,China (No. 2020JC-34)the Shaanxi Science and Technology Innovation Team,China(No. 2022TD-24)
Abstract: The Evidential Reasoning (ER) rule, which can combine multiple pieces of independent evidence conjunctively, is widely applied in multiple attribute decision analysis. However, the assumption of independence among evidence is often not satisfied, making the ER rule inapplicable. In this paper, an Evidential Reasoning rule for Dependent Evidence combination (ERr-DE) is developed. First, the aggregation sequence of multiple pieces of evidence is determined according to evidence reliability. On this basis, a method for calculating the Relative Total Dependence Coefficient (RTDC) of evidence is proposed using the distance correlation method. Second, the RTDC is introduced into the ER rule framework as a discounting factor, and the ERr-DE model is formulated. The aggregation of two pieces of dependent evidence by ERr-DE is investigated and then generalized to aggregate multiple pieces of non-independent evidence. Third, a sensitivity analysis is carried out to investigate the relationship between the model output and the RTDC. The properties of the sensitivity coefficient are explored and mathematically proved. The conjunctive probabilistic reasoning process of ERr-DE and the properties of the sensitivity coefficient are verified by two numerical examples. Finally, the practical application of ERr-DE is validated by a case study on the performance assessment of a satellite turntable system.
Funding: Supported by the National Natural Science Foundation of China (61063039) and the Project of the Guangxi Key Laboratory of Trusted Software (kx201202).
Abstract: Inferring unknown social trust relations has attracted increasing attention in recent years. However, social trust, as a social concept, is intrinsically dynamic, and exploiting temporal dynamics provides both challenges and opportunities for social trust prediction. In this paper, we investigate social trust prediction by exploiting temporal dynamics. In particular, we model the dynamics of user preferences in two principled ways: the first focuses on temporal weight; the second targets temporal smoothness. By incorporating these two types of temporal dynamics into a traditional matrix factorization based social trust prediction model, two extended social trust prediction models are proposed, and the corresponding algorithms to solve the models are designed. We conduct experiments on a real-world dataset, and the results demonstrate the effectiveness of our proposed models. Further experiments are also conducted to understand the importance of temporal dynamics in social trust prediction.
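The "temporal weight" idea, recent trust evidence should count for more than old evidence, is commonly realized with an exponential decay; a minimal sketch (the half-life parameterization and the aggregation rule are illustrative, not the paper's matrix-factorization formulation):

```python
def temporal_weight(t_obs, t_now, half_life):
    """Exponential time decay: an observation aged (t_now - t_obs)
    time units contributes with weight 0.5 ** (age / half_life)."""
    return 0.5 ** ((t_now - t_obs) / half_life)

def weighted_trust(observations, t_now, half_life=30.0):
    """Aggregate trust evidence given as (value, timestamp) pairs,
    weighting each piece by its recency."""
    num = sum(v * temporal_weight(t, t_now, half_life) for v, t in observations)
    den = sum(temporal_weight(t, t_now, half_life) for _, t in observations)
    return num / den

# Old distrust (0.0 at t=0) vs. recent trust (1.0 at t=90):
# with t_now=100, recency dominates and the score leans toward trust.
score = weighted_trust([(0.0, 0), (1.0, 90)], t_now=100)
```

In the matrix-factorization setting, such weights would scale each observed entry's contribution to the training loss instead of being averaged directly.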
Funding: This study is funded by the National Natural Science Foundation of China (Nos. 61862013, 61662015, U1811264, and U1711263), the Guangxi Natural Science Foundation of China (Nos. 2018GXNSFAA281199 and 2017GXNSFAA198035), the Guangxi Key Laboratory of Automatic Measurement Technology and Instrument (No. YQ19109), and the Guangxi Key Laboratory of Trusted Software (No. kx201915).
Abstract: Group recommendations derive from a phenomenon in which people tend to participate in activities together regardless of whether they are online or in reality, which creates real scenarios and promotes the development of group recommendation systems. Different from traditional personalized recommendation methods, which are concerned only with the accuracy of recommendations for individuals, group recommendation is expected to balance the needs of multiple users. Building a proper model for a group of users to improve the quality of a recommended list and to achieve a better recommendation has become a large challenge for group recommendation applications. Existing studies often focus on explicit user characteristics, such as gender, occupation, and social status, to analyze the importance of users for modeling group preferences. However, it is usually difficult to obtain extra user information, especially for ad hoc groups. To this end, we design a novel entropy-based method that extracts users' implicit characteristics from users' historical ratings to obtain the weights of group members. These weights represent user importance, so that we can obtain group preferences according to user weights and then model the group decision process to make a recommendation. We evaluate our method on the two metrics of recommendation relevance and overall ratings of recommended items. We compare our method to baselines, and experimental results show that our method achieves a significant improvement in group recommendation performance.
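One plausible reading of "entropy-based weights from historical ratings" is to weight each member by the Shannon entropy of their rating distribution, so that indiscriminate raters carry less weight; this is an illustrative sketch of that idea, not the paper's exact formulation:

```python
import math
from collections import Counter

def rating_entropy(ratings):
    """Shannon entropy (in bits) of a user's rating distribution."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def member_weights(histories):
    """Weight each group member by the entropy of their rating history,
    then normalize so the weights sum to one."""
    ent = [rating_entropy(h) for h in histories]
    total = sum(ent) or 1.0
    return [e / total for e in ent]

# A user who always rates 5 carries less information (entropy 0) than
# a discriminating user whose ratings vary.
weights = member_weights([[5, 5, 5, 5], [1, 3, 5, 4]])
```

Group preference for an item would then be a weight-averaged combination of the members' individual predicted ratings.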
Funding: This work was funded by the National Natural Science Foundation of China (Nos. U22A2099, 61966009, and 62006057) and the Graduate Innovation Program (No. YCSW2022286).
Abstract: Humans are experiencing the inclusion of artificial agents in their lives, such as unmanned vehicles, service robots, voice assistants, and intelligent medical care. If artificial agents cannot align with social values or make ethical decisions, they may not meet the expectations of humans. Traditionally, an ethical decision-making framework is constructed by rule-based or statistical approaches. In this paper, we propose an ethical decision-making framework based on incremental ILP (Inductive Logic Programming), which can overcome the brittleness of rule-based approaches and the limited interpretability of statistical approaches. Because current incremental ILP systems have difficulty resolving conflicts, the proposed framework explicitly considers conflicts and adopts our own incremental ILP system. The framework consists of two processes: the learning process and the deduction process. The first process records bottom clauses with their score functions and learns rules guided by entailment and the score function. The second process derives an ethical decision based on the learned rules. In an ethical scenario concerning chatbots for teenagers' mental health, we verify that our framework can learn ethical rules and make ethical decisions. In addition, we extract the incremental ILP component from the framework and compare it with state-of-the-art ILP systems based on ASP (Answer Set Programming), focusing on conflict resolution. The comparison results show that our proposed system generates better-quality rules than most other systems.
基金the National Natural Science Foundation of China (Nos. U1501252, 61572146 and U1711263)the Natural Science Foundation of Guangxi Province (No. 2016GXNSFDA380006)+1 种基金the Guangxi Innovation-Driven Development Project (No. AA17202024)the Guangxi Universities Young and Middle-aged Teacher Basic Ability Enhancement Project (No. 2018KY0203).
Abstract: Knowledge graph embedding aims to embed the entities and relations of a knowledge graph into a continuous, dense, low-dimensional, and real-valued vector space. Among the various embedding models that have appeared in recent years, translation-based models such as TransE, TransH, and TransR achieve state-of-the-art performance. However, in these models, the negative triples used in the training phase are generated by replacing each positive entity in a positive triple with a negative entity drawn from the entity set with uniform probability; as a result, a large number of invalid negative triples are generated and used in the training process. In this paper, a method named adaptive negative sampling (ANS) is proposed to generate valid negative triples. In this method, all entities are first divided into a number of groups of similar entities by a clustering algorithm such as K-Means. Then, for each positive triple, the head entity is replaced by a negative entity from the cluster in which the head entity is located, and the tail entity is replaced in a similar way. As a result, the method generates a set of high-quality negative triples, which benefits the effectiveness of embedding models. The ANS method was combined with the TransE model, and the resulting model was named TransE-ANS. Experimental results show that TransE-ANS achieves a significant improvement in the link prediction task.
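The corruption step can be sketched as follows: cluster the entities, then replace a triple's head (or tail) with a different entity from the same cluster. Here a crude key function stands in for K-Means over learned embeddings, and the entities and clustering key are hypothetical:

```python
import random

def cluster_entities(entities, key):
    """Group entities by a similarity key (a stand-in for K-Means over
    entity embeddings)."""
    clusters = {}
    for e in entities:
        clusters.setdefault(key(e), []).append(e)
    return clusters

def ans_negative(triple, clusters, key, corrupt_head=True):
    """Corrupt a positive triple by swapping the head (or tail) for a
    different entity drawn from the same cluster, yielding a harder,
    higher-quality negative than uniform sampling."""
    h, r, t = triple
    target = h if corrupt_head else t
    pool = [e for e in clusters[key(target)] if e != target]
    if not pool:
        return None                        # no similar entity available
    repl = random.choice(pool)
    return (repl, r, t) if corrupt_head else (h, r, repl)

# Hypothetical entities, keyed by type as the similarity signal.
TYPE = {"paris": "city", "lyon": "city", "france": "country", "spain": "country"}
clusters = cluster_entities(list(TYPE), TYPE.get)
neg = ans_negative(("paris", "capital_of", "france"), clusters, TYPE.get)
```

Swapping `paris` for another city (rather than, say, a country) produces a plausible-looking but false triple, which is exactly the kind of informative negative that uniform sampling rarely generates.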