
Interpretation Approach of the AIGC Identification Obligation from the Perspective of Functional Limitations
Abstract: How the functional effectiveness of the identification system for AI-generated content is understood directly shapes how the scope of identification obligations is delimited. At present, the identification system has not yet achieved substantively transparent development, and the willingness of users and platforms to apply identification remains limited. Identification of generated content also has functional limitations at the level of information disclosure: it cannot substitute for judgments about information quality or copyright, and identifiers are easily tampered with, which undermines their traceability and supervision function. Interpretation should follow the principle of maximizing functional effectiveness, promoting the development of the identification system toward substantive transparency, reflecting on the limits of applying the theory of product information disclosure, and emphasizing multi-subject collaboration in fulfilling identification obligations. At the same time, interpretation should adhere to the differentiated-interpretation principle of controlling identification costs, distinguishing according to differences among identifying subjects and among risk scenarios. The identification obligations of platform service providers are mainly supervised by administrative authorities and can be interpreted in conjunction with Article 22(1) and Article 60 of the Cybersecurity Law of the People's Republic of China; users' fulfillment of identification obligations is supervised mainly by platforms, and platform service providers should provide users with convenient identification tools and prominent prompts. The interpretation of the explicit-identification exception in Article 9 of the Measures for the Identification of AI-Generated Synthetic Content is of great significance and should gradually shift from the current "supervision-oriented" approach to a "development-oriented" approach based on risk-scenario classification.

With generative artificial intelligence becoming deeply integrated into human life, people are increasingly unable to distinguish AI-generated synthetic content (AIGC) from human-created material. China's Measures for the Identification of AI-Generated Synthetic Content (hereinafter the "Measures") officially came into effect on September 1, 2025. In addition to specifying the identification obligations of providers of AI generation and synthesis services, the Measures also clarify the responsibilities of users of such content, online information dissemination service providers, and application distribution platforms. Through the "exception to explicit identification" clause in Article 9, the Measures aim to strike a better balance between facilitating the use of generated content and maintaining oversight of the information ecosystem.

This study conducts an interpretive analysis of the regulatory cluster on identification obligations, including the Measures, to address the urgent compliance needs arising from their implementation. It fills a gap in current research regarding law-and-economics analysis, clarifies the functional limitations of the identification system in terms of efficiency, and argues that the interpretation of identification obligations should follow the principle of "cost reduction and efficiency improvement". It further reflects on theoretical viewpoints that currently support the legitimacy of identification obligations and examines the interpretive limits of the theory of product information disclosure obligations. Lastly, it analyzes in detail two interpretive approaches to the exception to explicit identification in Article 9 of the Measures, providing solutions for the implementation and optimization of the Measures.

At present, perceptions of the functional effectiveness of the identification system are overly optimistic, leading to a tendency to expand the scope of supervision. However, the progress of substantive transparency in China's identification system remains limited. Identification of generated content is prone to problems such as devaluation of information value and downplaying of the contributions of human co-creators, which undermines users' and platforms' willingness to comply with identification requirements. Meanwhile, identification has functional limitations: it cannot replace judgments about information quality, authenticity, or copyright, and it is easily tampered with, which weakens its traceability and supervision function. From a cost-benefit perspective, if the social welfare brought by identification is limited by merely formal transparency and by watermark tampering, and falls below social costs such as technology construction, value devaluation, and dispute resolution, the legitimacy of the system is difficult to justify. In an environment of prevalent non-compliance, severe rigid penalties are not only ineffective but also incur high enforcement costs.

To promote the long-term development of the identification system, it is necessary to adopt a law-and-economics analytical mindset, adhere to the principle of maximizing functional effectiveness, promote the transition to substantive transparency, and label the extent and manner of AI participation substantively. It is essential to reflect on the limitations of the product information disclosure theory and to emphasize the collaborative fulfillment of identification obligations by multiple subjects. The principle of differentiated interpretation for cost control should be upheld: requirements should be adjusted according to differences among subjects and among risk scenarios, and explicit identification can be reduced or waived in low-risk scenarios. The identification obligations of platforms should be supervised by administrative authorities, relying on the four-dimensional framework of "legal provisions, departmental regulations, national standards, and platform rules"; the obligations of users should be supervised mainly by platforms, which need to provide convenient tools and prompts and establish a proportionate sanction and appeal mechanism. The interpretation of Article 9 of the Measures needs to shift from a supervision-oriented perspective to a development-oriented one: as a general exception to the relevant provisions, explicit identification should be mandatory only in high-risk scenarios, while in other scenarios it may be left to user agreements.

Further efforts should be made to improve empirical research and quantitative analysis of the implementation effects of the AIGC identification system, to refine the standards for AIGC risk classification, and to improve specific penalty provisions, such as the administrative penalties applicable to platforms and users for breaches of identification obligations. In addition, the relationships among AIGC identification, copyright, and the fair use of data need further clarification.
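The abstract's claim that identifiers are easily tampered with, undermining traceability supervision, can be illustrated with a minimal, hypothetical Python sketch. The function names and metadata fields below (`aigc`, `provider`, `content_hash`) are illustrative assumptions, not the schema prescribed by the Measures or its supporting national standard: an implicit identification carried as detachable metadata is defeated by simply dropping that metadata.

```python
import hashlib

def add_implicit_label(content: bytes, provider: str) -> dict:
    """Attach a hypothetical implicit identification record as metadata.

    The field names are illustrative only; the real metadata schema is
    defined by the national standard accompanying the Measures.
    """
    record = {
        "aigc": True,                                         # flags AI-generated content
        "provider": provider,                                 # generating service provider
        "content_hash": hashlib.sha256(content).hexdigest(),  # supports traceability
    }
    return {"payload": content.decode("utf-8"), "meta": record}

def strip_metadata(labeled: dict) -> dict:
    """Simulate tampering: dropping the metadata removes the label,
    leaving the payload indistinguishable from human-created text."""
    return {"payload": labeled["payload"], "meta": None}

labeled = add_implicit_label("an AI-generated sample sentence".encode(), "ExampleAI")
stripped = strip_metadata(labeled)
assert labeled["meta"]["aigc"] is True and stripped["meta"] is None
```

This fragility is one reason the abstract argues that identification alone cannot carry the full weight of traceability supervision and that fulfillment of identification obligations requires collaboration among multiple subjects.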
Author: Zhuang Yuzi (School of Law, Renmin University of China, Beijing 100872, China)
Source: Science & Technology Progress and Policy (《科技进步与对策》, PKU Core journal), 2025, No. 24, pp. 116–126 (11 pages)
Funding: Youth Project of Ministerial-Level Legal Research of the China Law Society (CLS(2025)Y03)
Keywords: Artificial Intelligence; Generative Synthesis; Identification Obligation; Data Identification; Transparency; Deepfake