Abstract
With generative artificial intelligence (GAI) empowering artificial intelligence generated content (AIGC), the concealment and harmfulness of AI hallucinations have increased significantly. In light of this, this paper proposes targeted recommendations to enhance users' willingness to verify AIGC, aiming to provide theoretical and practical support for blocking the spread of AI hallucinations and safeguarding a healthy GAI information ecosystem. Drawing on privacy calculus theory, trust theory, and the theory of planned behavior, the study empirically examines how latent variables influence GAI users' willingness to verify AIGC and the mechanisms at work, testing the data with partial least squares structural equation modeling (PLS-SEM). The PLS-SEM results confirm that behavioral attitude, subjective norms, and perceived benefits have significant positive effects on AIGC verification willingness.
Authors
MAO Taitian; XU Xin (School of Public Administration, Xiangtan University, Xiangtan 411100, China)
Source
Jiangsu Science and Technology Information (《江苏科技信息》), 2026, No. 2, pp. 115-120 (6 pages)
Funding
National Social Science Fund of China project "Research on the Generation Mechanism and Guidance Strategies of User Privacy Disclosure Behavior in Human-AI Interaction Contexts" (Grant No. 24BTQ052).
Keywords
AI hallucinations
AIGC
willingness to verify
influencing factors