Journal Articles
1 article found
Prompting Large Language Models with Knowledge-Injection for Knowledge-Based Visual Question Answering (Cited by: 2)
Authors: Zhongjian Hu, Peng Yang, Fengyuan Liu, Yuan Meng, Xingyu Liu. Big Data Mining and Analytics (EI, CSCD), 2024, No. 3, pp. 843-857 (15 pages).
Previous works employ Large Language Models (LLMs) such as GPT-3 for knowledge-based Visual Question Answering (VQA). We argue that the inferential capacity of an LLM can be enhanced through knowledge injection. Although methods that use knowledge graphs to enhance LLMs have been explored in various tasks, they have limitations, such as failing to retrieve the required knowledge. In this paper, we introduce a novel framework for knowledge-based VQA titled "Prompting Large Language Models with Knowledge-Injection" (PLLMKI). We use a vanilla VQA model to inspire the LLM and further enhance the LLM with knowledge injection. Unlike earlier approaches, we adopt the LLM itself for knowledge enhancement instead of relying on knowledge graphs. Furthermore, we leverage open LLMs, incurring no additional cost. Compared with existing baselines, our approach improves accuracy by over 1.3 and 1.7 points on two knowledge-based VQA datasets, OK-VQA and A-OKVQA, respectively.
Keywords: visual question answering; knowledge-based visual question answering; large language model; knowledge injection
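The abstract describes the framework only at a high level. As a rough illustration of the idea, the sketch below combines candidate answers from a vanilla VQA model with knowledge generated by the LLM itself into a single prompt. The specific models, the `answer` helper, the use of a caption as context, and the prompt layout are all assumptions for illustration, not the paper's actual implementation.

```python
from transformers import pipeline

# Hypothetical stand-ins: a vanilla VQA model for candidate answers and an
# open instruction-tuned LLM for knowledge generation and final answering.
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
llm = pipeline("text-generation", model="tiiuae/falcon-7b-instruct")

def answer(image_path: str, question: str, caption: str) -> str:
    # Step 1: let the vanilla VQA model propose candidate answers.
    candidates = [c["answer"] for c in vqa(image=image_path, question=question, top_k=3)]

    # Step 2: knowledge injection -- ask the LLM itself for background facts
    # instead of querying a knowledge graph.
    knowledge = llm(
        f"List two short facts that help answer the question: {question}",
        max_new_tokens=64,
        return_full_text=False,
    )[0]["generated_text"]

    # Step 3: build the final prompt that combines the caption, the injected
    # knowledge, and the VQA candidates, then let the LLM produce the answer.
    prompt = (
        f"Context: {caption}\n"
        f"Knowledge: {knowledge}\n"
        f"Candidate answers: {', '.join(candidates)}\n"
        f"Question: {question}\n"
        f"Answer:"
    )
    return llm(prompt, max_new_tokens=16, return_full_text=False)[0]["generated_text"].strip()
```

This is only a minimal sketch of the prompting pattern; the paper's reported gains on OK-VQA and A-OKVQA come from its own model choices and prompt design.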