Abstract
Algorithmic interpretation is of paramount importance in algorithmic governance. It serves three purposes: protecting rights, enabling social interaction, and managing risk. The technical obstacles to interpreting algorithms are gradually being overcome, and interpretation can now be achieved through a variety of technical mechanisms. Within algorithmic governance, the choice of interpretation approach and technical scheme should be tailored respectively to routine, critical, and contested scenarios. Interpretations can be fixed in place through freezing, sampling, and mirroring mechanisms, and then subjected to external validation and review to ensure their authenticity and effectiveness. These mechanisms should be further organized into a systematic regime of algorithmic interpretation, within which the rational configuration of elements such as interpretation routes and precision, time limits, and liability for flawed interpretations can achieve a fine-grained balance between social benefits and regulatory burdens.
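The abstract notes that algorithmic interpretation "can be achieved through a variety of technical mechanisms" without naming them. One widely used family is perturbation-based post-hoc attribution. The sketch below is a minimal, hypothetical illustration (not the article's own method): `black_box_model` stands in for any opaque learned function, and `local_attribution` estimates each input feature's local influence by finite differences.

```python
def black_box_model(features):
    # Hypothetical opaque model: the interpreter only sees inputs and outputs.
    x1, x2, x3 = features
    return 3.0 * x1 + 0.5 * x2 - 1.0 * x3

def local_attribution(model, point, eps=1e-4):
    """Estimate each feature's local influence by perturbing it slightly
    and measuring the change in the model's output (finite differences)."""
    base = model(point)
    scores = []
    for i in range(len(point)):
        perturbed = list(point)
        perturbed[i] += eps
        scores.append((model(perturbed) - base) / eps)
    return scores

if __name__ == "__main__":
    # For the toy model above, the attributions recover its sensitivities.
    print(local_attribution(black_box_model, [1.0, 2.0, 3.0]))
```

Such model-agnostic probes are one reason the technical barriers to interpretation discussed in the article are receding: they require no access to a model's internals, only the ability to query it.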
Source
Oriental Law (《东方法学》)
CSSCI
Peking University Core Journal (北大核心)
2024, No. 1, pp. 81-95 (15 pages)
Funding
A phased research result of the 2022 National Social Science Fund of China general project "Research on the Systematic Construction of the Algorithmic Interpretation Regime" (Project Approval No. 22BFX016).
Keywords
interpretations of algorithms
verification of algorithms
algorithmic black box
algorithmic transparency
machine learning
algorithmic governance