Abstract
Using student evaluation big data as a case study, this research leverages a supervised deep learning model based on the Transformer architecture to investigate its predictive and explanatory capabilities in social research. Experimental results demonstrate that the model significantly outperforms traditional machine learning models in predicting evaluation outcomes. Through instance analysis of the self-attention mechanism, we propose the concept of "computational interpretation," elucidating how the model captures textual semantics and contextual dependencies, paralleling interpretive approaches in social research that seek subjective meaning within specific contexts. By integrating explainability methods such as SHAP values, we quantify the key features influencing evaluations and uncover the social patterns underlying the data. Building on this, we argue that the Transformer-based "computational interpretation" paradigm holds transformative potential for integrating qualitative and quantitative data and advancing methodological innovation in the era of big data and AI. Specifically, it extends traditional interpretive approaches by enabling large-scale extraction of meaning through computational means. We also critically reflect on the methodological challenges involved, offering theoretical and practical references for applying this paradigm in the social sciences.
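The "computational interpretation" argument rests on the standard scaled dot-product self-attention mechanism, in which each token's representation is rebuilt as a weighted mixture of its context. A minimal NumPy sketch (with hypothetical dimensions and random weights, not the authors' trained model) illustrates how the attention matrix exposes which context tokens a given token draws meaning from:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax: each row is a distribution
    return weights @ V, weights                          # output mix + interpretable weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                              # 4 tokens, embedding dim 8 (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
# Row i of `attn` sums to 1 and shows how much token i attends to each context token.
```

In a trained evaluation-text model, inspecting rows of the attention matrix for a sampled comment is what makes the "instance analysis" interpretable: high weights indicate the contextual dependencies the model used to resolve a token's meaning.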
Source
JOURNAL OF INTELLIGENT SOCIETY (《智能社会研究》), 2025, No. 3, pp. 154-179, 265-266 (28 pages)
Keywords
Transformer architecture
student evaluations of teaching (SET)
natural language processing
computational interpretation