
AI “Hallucination”: Cognitive Dilemma, Terminological Reflection, and Paradigm Shift
Cited by: 2
Abstract (translated from the Chinese): Since large language models set off the latest wave of generative artificial intelligence, AI “hallucination” (a model generating content that looks plausible but is factually inaccurate or unverifiable) has become a prominent problem. The challenge is more than a technical error: the phenomenon undermines human cognitive shortcuts and trust mechanisms, constituting a “cognitive dilemma.” Highly deceptive, it permeates everyday work and life, and its habit of “talking nonsense in all seriousness” poses a grave threat to the information ecosystem. Together with the black-box nature of the underlying algorithms, this fuels growing public anxiety and a human-machine cognitive contest. The term AI “hallucination” is an anthropomorphic metaphor borrowed from psychopathology, and its wide currency is the cognitive dilemma projected onto language: it is how people understand and situate AI as a new kind of “cognitive agent” once the “human-tool” framework breaks down. The term exposes an inherent limitation of large language models, namely their lack of a deep, causal understanding of the world, and it prompts us to reflect on the deep simulacra woven by algorithms in the digital-intelligent era and to re-examine the forms that knowledge takes. The anomaly of AI “hallucination” is driving a profound cognitive “paradigm shift”: from the traditional human-centered paradigm of knowledge toward a new paradigm of “human-machine hybrid cognition.” Facing this dual dilemma of technology and cognition, hallucination cannot be eliminated outright in the short term. Beyond sustained technical work, therefore, actively constructing a cognitive ethics and verification mechanism for human-machine collaboration is the key to a healthy digital-intelligent civilization and to safeguarding human cognitive autonomy.

Abstract (English): With the rapid development of generative artificial intelligence led by large language models, “AI hallucination” has attracted growing scrutiny. The so-called “AI hallucination” can be defined as content generated by large language models that appears coherent and plausible but is factually inaccurate or unverifiable. The causes of this phenomenon are multifaceted: noise, biases, or incompleteness in training data; inherent limitations in model architecture; and contradictions between parameterized knowledge storage and factual retrieval. “AI hallucination” has become one of the most pressing technical challenges in current AI applications, with an impact far exceeding that of traditional computer errors, permeating various aspects of social life on an unprecedented scale.

The uniqueness of “AI hallucination” lies in its combination of concealment and deception. Unlike traditional computer errors, which overtly reveal their flaws, it misleads users through apparently highly logical yet false information, a characteristic that poses a serious threat to the information ecosystem. More concerning, the large language models (LLMs) underpinning these AI systems exhibit a “black box” nature: even their developers cannot fully explain their operational mechanisms, which deepens public distrust. Notably, while most researchers view “AI hallucination” as a technical defect, some scholars explore its potential to stimulate creativity, underscoring the complexity of the phenomenon.

From a cognitive perspective, the term “AI hallucination” is itself a metaphor worth examining. Borrowed from psychopathology, it projects human disease characteristics onto machines, sparking debate about its appropriateness. AI experts such as Lance Eliot criticize this anthropomorphic framing, arguing that it may mislead public understanding of AI capabilities and allow developers to evade accountability. Nevertheless, the term remains prevalent in academia, appearing in formal contexts such as papers in Nature and research reports from Tsinghua University. This reflects a common naming strategy in technology: using human experience to conceptualize technical phenomena. The popularity of “hallucination” stems partly from its vivid capture of AI's uncanny ability to “fabricate nonsense with conviction,” and partly from its pathological connotations and implied quasi-subjectivity; as such, the term aptly expresses unease toward AI's “aberrant” behavior. Behind this terminology lies the inevitable cognitive projection and emotional response humans exhibit when engaging with intelligent machines.

The occurrence of “AI hallucination” compels us to reevaluate cognitive paradigms in the digital intelligence era. As researchers have noted, LLMs function essentially as “stochastic parrots,” generating linguistic symbols through statistical pattern matching rather than genuine understanding. This renders the digital world increasingly akin to “hyperreality,” in which the complex interplay of true and false information poses new challenges to human cognition. Addressing this requires not only technical improvements but also the development of cognitive literacy suited to the algorithmic age. Viewed through a Foucauldian lens, “AI hallucination” can be seen as an inevitable “cognitive bias” arising from a specific technical paradigm: not merely a technical issue, but one that reflects deeper contradictions in human-machine relations. Thus, while technological innovation must mitigate hallucination risks, ongoing reflection at the cognitive and ethical levels is equally imperative to preserve human cognitive autonomy in a future of human-AI coexistence.
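The “stochastic parrot” mechanism invoked above (generation by statistical pattern matching, with no internal model of truth) can be made concrete with a toy sketch. The following Python bigram sampler is purely illustrative and is not from the paper; the miniature corpus and everything in it are invented. It continues text from corpus frequencies alone, so locally fluent but ungrounded sequences, a crude analogue of “hallucination,” fall out of the sampling process itself.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram model that continues text purely by
# statistical pattern matching over a tiny invented corpus. Nothing in the
# sampling step checks generated claims against the world.
corpus = (
    "the model generates fluent text . "
    "the model generates plausible claims . "
    "the model cites a source . "
    "the source does not exist . "
).split()

# Count bigram transitions: each word maps to its observed successors.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Sample a continuation token by token from bigram frequencies."""
    word, out = start, [start]
    for _ in range(length):
        options = successors.get(word)
        if not options:
            break
        word = random.choice(options)  # drawn in proportion to corpus frequency
        out.append(word)
    return " ".join(out)

random.seed(7)
print(generate("the"))
# One possible run: "the model cites a source . the source does not exist ."
# Each transition is statistically well-attested, yet the whole is unverified.
```

Real LLMs condition on much longer contexts through learned representations, but the structural point the abstract makes is the same: the generation objective rewards plausible continuation, not factual grounding.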
Author: 梁昭 (Liang Zhao), School of Literature and Journalism, Sichuan University, Chengdu 610064, Sichuan, China
Affiliation: Sichuan University
Source: Journal of Ethnology (《民族学刊》, PKU Core Journal), 2025, No. 8, pp. 82-87 and 161 (7 pages)
Funding: A phased result of the National Social Science Fund of China general project “Research on the Transformation of Chinese Ethnic Minority Literature from the Perspective of Media Convergence” (19BZW170).
Keywords: AI “hallucination”; large language models (LLMs); terminological metaphor; human-computer interaction; digital-intelligent civilization
