Journal Articles
3 articles found
1. Structure of the Quarks and a New Model of Protons and Neutrons: Answer to Some Open Questions
Author: Ágnes Cziráki. Natural Science (CAS), 2023, No. 1, pp. 11-18 (8 pages)
The described structural model tries to answer some open questions such as: Why do quarks not exist in the open state? Where are the antiparticles from the Big Bang?
Keywords: structure of quarks; new model; proton; neutron; open questions
2. Inequalities for pedal simplices
Authors: Ma Tongyi, Zhang Haijuan. Journal of Shanghai University (English Edition) (CAS), 2010, No. 3, pp. 157-162 (6 pages)
In this paper, we present inequalities for the volumes of subsimplices of a simplex and its pedal simplex, and generalize them to m + 1 simplices and their pedal simplices.
Keywords: Euclidean space; simplex; pedal simplex; volume; facet; inequality; open question
3. Prompting Is Not Enough: Exploring Knowledge Integration and Controllable Generation on Large Language Models
Authors: Tingjia Shen, Hao Wang, Chuan Qin, Ruijun Sun, Yang Song, Defu Lian, Hengshu Zhu, Enhong Chen. Big Data Mining and Analytics, 2026, No. 2, pp. 563-579 (17 pages)
Recently, with the rapid advancements in Large Language Models (LLMs), LLM-based Open-domain Question Answering (OpenQA) methods have reaped the benefits of emergent understanding and answering capabilities enabled by massive parameters compared to traditional methods. However, most of these methods encounter two critical challenges: how to integrate knowledge into LLMs effectively, and how to adaptively generate results with specific answer formats. To address these challenges, we propose a novel framework, named GenKI, which aims to improve OpenQA performance by exploring knowledge integration and controllable generation on LLMs simultaneously. Specifically, we first train a dense passage retrieval model to retrieve associated knowledge from a given knowledge base. Subsequently, we introduce a novel knowledge integration model that incorporates the retrieved knowledge into instructions during fine-tuning to strengthen the model. Furthermore, to enable controllable generation in LLMs, we leverage a fine-tuned LLM together with an ensemble framework based on text consistency that accounts for coherence, fluency, and answer-format assurance. Finally, extensive experiments conducted on three datasets with diverse answer formats demonstrate the effectiveness of GenKI in comparison with state-of-the-art baselines. Moreover, ablation studies disclose a linear relationship between the frequency of retrieved knowledge and the model's ability to accurately recall knowledge matching the ground truth. Tests on out-of-domain and knowledge-base-independence scenarios further affirm the robustness and controllability of GenKI. Our code is available at https://github.com/USTC-StarTeam/GenKI.
Keywords: Open-domain Question Answering (OpenQA); question answering; Large Language Model (LLM)
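The retrieve-then-instruct pipeline described in the GenKI abstract (retrieve passages from a knowledge base, then fold them into the instruction given to the LLM) can be sketched minimally. This is a toy illustration, not the authors' code: the bag-of-words "embedding" stands in for a trained dense passage retriever, and all names are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a stand-in for a trained dense retriever."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, passages, k=2):
    """Rank knowledge-base passages by similarity to the question, keep top k."""
    q = embed(question)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

def build_instruction(question, passages):
    """Fold retrieved knowledge into the instruction handed to the LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer using the facts below.\n"
            f"Facts:\n{context}\n"
            f"Question: {question}\nAnswer:")

kb = [
    "The proton is composed of two up quarks and one down quark.",
    "Paris is the capital of France.",
    "The neutron has no net electric charge.",
]
question = "What quarks make up a proton?"
print(build_instruction(question, retrieve(question, kb)))
```

In GenKI the retrieved facts are additionally baked into the model during fine-tuning and the final answer is chosen by a consistency-based ensemble; this sketch only shows the retrieval-plus-instruction step.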