Funding: Supported in part by the National Natural Science Foundation of China (62441605).
Abstract: Large language models (LLMs) have significantly advanced artificial intelligence (AI) by excelling at tasks such as understanding, generation, and reasoning across multiple modalities. Despite these achievements, LLMs have inherent limitations, including outdated information, hallucinations, inefficiency, lack of interpretability, and challenges in domain-specific accuracy. To address these issues, this survey explores three promising directions in the post-LLM era: knowledge empowerment, model collaboration, and model co-evolution. First, we examine methods of integrating external knowledge into LLMs to enhance factual accuracy, reasoning capability, and interpretability, including incorporating knowledge into training objectives, instruction tuning, retrieval-augmented inference, and knowledge prompting. Second, we discuss model collaboration strategies that leverage the complementary strengths of LLMs and smaller models to improve efficiency and domain-specific performance through techniques such as model merging, functional model collaboration, and knowledge injection. Third, we delve into model co-evolution, in which multiple models collaboratively evolve by sharing knowledge, parameters, and learning strategies to adapt to dynamic environments and tasks, thereby enhancing their adaptability and continual learning. We illustrate how the integration of these techniques advances AI capabilities in science, engineering, and society, particularly in hypothesis development, problem formulation, problem solving, and interpretability across various domains. We conclude by outlining future pathways for further advancement and applications.
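To make the retrieval-augmented inference technique mentioned above concrete, here is a minimal sketch, not taken from the survey itself: embed() is a toy character-frequency encoder standing in for a real sentence-embedding model, the corpus is illustrative, and the assembled prompt would be passed to an LLM that is not called here.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical encoder: a real system would use a sentence-embedding model;
    # a character-frequency vector is enough to make retrieval run end to end.
    vec = np.zeros(64)
    for ch in text.lower():
        vec[ord(ch) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    return sorted(corpus, key=lambda d: -float(embed(d) @ q))[:k]

def rag_prompt(query: str, corpus: list) -> str:
    # Prepend the retrieved passages so the model can ground its answer in them.
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Differential evolution mutates candidates via scaled difference vectors.",
    "Dirichlet distributions define densities over the probability simplex.",
    "Retrieval-augmented generation grounds LLM answers in external documents.",
]
# The returned prompt would be sent to an LLM; no model call is made here.
print(rag_prompt("How does differential evolution create new candidates?", corpus))
```

The key design point is that the model answers from retrieved context rather than from parametric memory alone, which is what mitigates the outdated-information and hallucination limitations the survey lists.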
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62076197) and the Key Research and Development Project of Shaanxi Province (Grant No. 2022GXLH-01-15).
Abstract: Extensive studies on selecting recombination operators adaptively, namely adaptive operator selection (AOS), during the search process of an evolutionary algorithm (EA) have shown that AOS is promising for improving an EA's performance. A variety of heuristic mechanisms for AOS have been proposed in recent decades, usually containing two main components: feature extraction and policy setting. Feature extraction refers to extracting relevant features from the information collected during the search process. Policy setting means setting a strategy (or policy) for selecting an operator from a pool of operators based on the extracted features. Both components are designed by hand in existing studies, which may not be efficient for adapting to different optimization problems. In this paper, a generalized framework is proposed for learning the components of AOS for one of the mainstream EAs, namely differential evolution (DE). In the framework, the feature extraction is parameterized as a deep neural network (DNN), while a Dirichlet distribution serves as the policy. A reinforcement learning method, namely policy gradient, is used to train the DNN. As case studies, the proposed framework is applied to two DEs, the classic DE and a recently proposed DE, resulting in two new algorithms named PG-DE and PG-MPEDE, respectively. Experiments on the Congress on Evolutionary Computation (CEC) 2018 test suite show that the proposed algorithms perform significantly better than their counterparts. Finally, we prove theoretically that the considered classic methods are special cases of the proposed framework.
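The following is a toy, self-contained sketch of the policy-gradient AOS idea described in the abstract, not the authors' PG-DE implementation: a linear map stands in for the DNN, population mean/std fitness stand in for the extracted features, the operator pool is assumed to be DE/rand/1 and DE/best/1, crossover is omitted, the objective is the sphere function, and the reward is simply the summed fitness improvement per generation.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

def sphere(x):
    # Toy objective to minimize.
    return float(np.sum(x ** 2))

def rand1(pop, F):
    # DE/rand/1: random base vector plus one scaled difference vector.
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def best1(pop, F):
    # DE/best/1: best-so-far vector plus one scaled difference vector.
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    return min(pop, key=sphere) + F * (pop[r1] - pop[r2])

OPS = [rand1, best1]                           # assumed two-operator pool
W = rng.normal(scale=0.1, size=(2, len(OPS)))  # linear stand-in for the paper's DNN
F, lr = 0.5, 1e-3
pop = rng.normal(size=(20, 5))

for gen in range(200):
    fits = np.array([sphere(p) for p in pop])
    feat = np.log1p(np.array([fits.mean(), fits.std()]))  # crude search-state features
    z = np.clip(feat @ W, -30.0, 30.0)
    alpha = np.log1p(np.exp(z)) + 1e-3    # softplus keeps Dirichlet concentrations positive
    probs = rng.dirichlet(alpha)          # sample an operator-selection distribution
    reward = 0.0
    for i in range(len(pop)):
        op = OPS[rng.choice(len(OPS), p=probs)]
        child = op(pop, F)                # crossover omitted for brevity
        if sphere(child) < fits[i]:       # greedy one-to-one survivor selection
            reward += fits[i] - sphere(child)
            pop[i] = child
    # REINFORCE: gradient of the log Dirichlet density at the sampled probs,
    # pushed through softplus (derivative = sigmoid) and the linear map,
    # then scaled by the observed improvement.
    dlog_dalpha = digamma(alpha.sum()) - digamma(alpha) + np.log(probs + 1e-12)
    W += lr * reward * np.outer(feat, dlog_dalpha / (1.0 + np.exp(-z)))
    if gen % 50 == 0:
        print(f"gen {gen:3d}  best fitness {fits.min():.4f}  probs {np.round(probs, 2)}")
```

The Dirichlet policy is a natural fit here because a sample from it is itself a probability vector over the operator pool, so one draw per generation yields a full operator-selection distribution rather than a single hard choice.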