Funding: funded by the U.S. Department of Education under Grant Number ED#P116S210005 and the National Science Foundation under Grant Numbers 2226936 and 2420405.
Abstract: The integration of human factors into artificial intelligence (AI) systems has emerged as a critical research frontier, particularly in reinforcement learning (RL), where human-AI interaction (HAII) presents both opportunities and challenges. As RL continues to demonstrate remarkable success in model-free and partially observable environments, its real-world deployment increasingly requires effective collaboration with human operators and stakeholders. This article systematically examines HAII techniques in RL through both theoretical analysis and practical case studies. We establish a conceptual framework built upon three fundamental pillars of effective human-AI collaboration: computational trust modeling, system usability, and decision understandability. Our comprehensive review organizes HAII methods into five key categories: (1) learning from human feedback, including various shaping approaches; (2) learning from human demonstration through inverse RL and imitation learning; (3) shared autonomy architectures for dynamic control allocation; (4) human-in-the-loop querying strategies for active learning; and (5) explainable RL techniques for interpretable policy generation. Recent state-of-the-art works are critically reviewed, with particular emphasis on advances incorporating large language models in human-AI interaction research. To illustrate some of these concepts, we present three detailed case studies: an empirical trust model for farmers adopting AI-driven agricultural management systems, the implementation of ethical constraints in robotic motion planning through human-guided RL, and an experimental investigation of human trust dynamics using a multi-armed bandit paradigm. These applications demonstrate how HAII principles can enhance the practical utility of RL systems while bridging the gap between theoretical RL and real-world human-centered applications, ultimately contributing to more deployable and socially beneficial intelligent systems.
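As a rough illustration of the multi-armed bandit paradigm mentioned in the abstract, the sketch below simulates an epsilon-greedy learner on a Bernoulli bandit and tracks a toy "trust" proxy: the running success rate observed when the greedy (AI-recommended) arm is followed. The function name, the trust proxy, and all parameters are illustrative assumptions, not the authors' actual experimental protocol.

```python
import random

def run_bandit_trust_sim(probs, steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy play on a K-armed Bernoulli bandit.

    A toy 'trust' signal is tracked as the running success rate
    observed when the greedy (AI-recommended) arm is followed.
    """
    rng = random.Random(seed)
    k = len(probs)
    counts = [0] * k
    means = [0.0] * k          # empirical reward estimate per arm
    hits, follows = 0, 0       # successes / times recommendation was followed
    for _ in range(steps):
        greedy = max(range(k), key=lambda a: means[a])
        explore = rng.random() < epsilon
        arm = rng.randrange(k) if explore else greedy
        reward = 1 if rng.random() < probs[arm] else 0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean
        if not explore:
            follows += 1
            hits += reward
    trust = hits / follows if follows else 0.0
    return means, trust
```

In such a paradigm, how the empirical trust signal rises and falls after good or bad recommendations is the quantity of experimental interest.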
Funding: supported by the Jinan University Teaching Research Project "Investigation and Path Optimization of Teachers' Lesson Planning Model Based on the 'Human-AI Collaborative Workflow'"; the 2025 Special Project for Quality Improvement and Upgrading Reform of Experimental Teaching at Jinan University (Project No. 82625039); and the Higher Education Special Program of the Guangdong Provincial Education Science Planning Project (Project No. 2023GXJK233).
Abstract: This study explores the impact of human-AI collaborative teaching strategies on English teachers in secondary schools. Based on semi-structured interviews with five English teachers in Jiangxi Province, thematic analysis was conducted using the SAMR, UTAUT, and GHEX-IPACK theoretical frameworks. The findings indicate that AI technology is primarily applied in scenarios such as resource generation, assignment distribution, and learning analytics. By substituting for traditional tools, enhancing teaching interactions, and reconstructing instructional processes, AI facilitates a shift in teaching strategies from "teacher-led" to "human-AI collaboration". Teachers generally recognized the potential of this model for improving efficiency and supporting personalized learning, but also pointed out challenges, including data bias, hardware limitations, and a lack of emotional interaction. The study suggests that achieving deep human-AI collaboration requires balancing technological efficacy with humanistic care, relying on blended instructional design and teacher training to optimize teachers' knowledge structures. This research preliminarily constructs a practical model of human-AI collaboration in secondary school English education, providing insights for teacher professional development.
Abstract: The rapid integration of artificial intelligence (AI) into software development, driven by large language models (LLMs), is reshaping the role of programmers from traditional coders into strategic collaborators within Industry 4.0 ecosystems. This qualitative study employs a hermeneutic phenomenological approach to explore the lived experiences of Information Technology (IT) professionals as they navigate a dynamic technological landscape marked by intelligent automation, shifting professional identities, and emerging ethical concerns. Findings indicate that developers are actively adapting to AI-augmented environments through continuous upskilling, prompt engineering, interdisciplinary collaboration, and heightened ethical awareness. However, participants also voiced growing concerns about the reliability and security of AI-generated code, noting that these tools can introduce hidden vulnerabilities and reduce critical engagement due to automation bias. Many described instances of flawed logic, insecure patterns, or syntactically correct but contextually inappropriate suggestions, underscoring the need for rigorous human oversight. Additionally, the study reveals anxieties around job displacement and the gradual erosion of fundamental coding skills, particularly in environments where AI tools dominate routine development tasks. These findings highlight an urgent need for educational reforms, industry standards, and organizational policies that prioritize both technical robustness and the preservation of human expertise. As AI becomes increasingly embedded in software engineering workflows, this research offers timely insights into how developers and organizations can responsibly integrate intelligent systems to promote accountability, resilience, and innovation across the software development lifecycle.
Funding: This research was supported by the Zhejiang Soft Science Research Program (Grant No. 2021C35016).
Abstract: By exploiting the differences and complementarities between human intelligence and artificial intelligence (AI), a hybrid-augmented intelligence that is stronger than either human intelligence or AI alone can be created through Human-AI Cooperation (HAC) for teaching and learning. Human-AI Cooperation is permeating every facet of education, and recent research has focused heavily on its impact on teaching, learning, management, and evaluation. However, AI still has limits to its intelligence and cannot cooperate as humans do. It is therefore critical to study the obstacles to Human-AI Cooperation in education when AI plays the role of a partner rather than a tool. This study is the first to discuss how teachers and AI cooperate on the basis of the Multiple Intelligences of AI proposed by Andrzej Cichocki, and it puts forward a new Human-AI Cooperation teaching mode: human in the loop and teaching as leadership. It is proposed that this mode can address both AI's inability to cope with complex, dynamic teaching tasks in open situations and the limits of AI's intelligence.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62506159, 62495093, U24A20324); the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20241199, BK20243039); and the AI&AI for Science Project of Nanjing University.
Abstract: Developing intelligent agents that can effectively coordinate with diverse human partners is a fundamental goal of artificial general intelligence. Previous approaches typically generate a variety of partners to cover human policies and then either train a single universal agent or maintain multiple best-response (BR) policies for different partners. However, the first direction struggles with the stochastic and multimodal nature of human behavior, and the second relies on costly few-shot adaptation during policy deployment, which is impractical in real-world applications such as healthcare and autonomous driving. Recognizing that human partners can easily articulate their preferences or behavioral styles in natural language (NL) and make conventions beforehand, we propose a framework for Human-AI Coordination via Policy Generation from Language-guided Diffusion (Haland). Haland first trains BR policies for various partners using reinforcement learning and then compresses the policy parameters into a single latent diffusion model conditioned on task-relevant language derived from the partners' behaviors. Finally, alignment between task-relevant language and NL instructions is achieved to facilitate efficient human-AI coordination. Empirical evaluations across diverse cooperative environments demonstrate that Haland generates agents with significantly enhanced zero-shot coordination performance using only NL instructions from various partners, outperforming existing methods by approximately 89.64%.
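The outer interface of the pipeline the abstract describes (a partner states a convention in natural language, and a matching best-response policy is produced) can be sketched schematically. Note that Haland itself generates policy *parameters* with a latent diffusion model; in this hedged stand-in, a toy bag-of-words match over a tiny hand-written policy library replaces both the learned language encoder and the generator, and every name below is a hypothetical illustration.

```python
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for a learned language encoder)."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Multiset overlap between two bag-of-words vectors."""
    return sum((a & b).values())

# Hypothetical library of pre-trained best-response policies, keyed by the
# language description of the partner behavior each one was trained against.
POLICY_LIBRARY = {
    "i always pass items on the left": "br_policy_left",
    "i cook the onions before the tomatoes": "br_policy_onions",
}

def generate_policy(instruction):
    """Return the best-response policy whose behavior description best
    matches the partner's natural-language convention."""
    best = max(POLICY_LIBRARY,
               key=lambda k: similarity(embed(k), embed(instruction)))
    return POLICY_LIBRARY[best]
```

The diffusion model in the real framework plays the role of this lookup but can interpolate between partners instead of returning one of a fixed set.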
Funding: supported by the National Key Research and Development Program of China (2020AAA0107200); the National Natural Science Foundation of China (Grant Nos. 61921006, 61876119, 62276126); and the Natural Science Foundation of Jiangsu (BK20221442).
Abstract: Human-AI coordination aims to develop AI agents capable of effectively coordinating with human partners, making it a crucial aspect of cooperative multi-agent reinforcement learning (MARL). Achieving satisfactory performance from AI agents remains a long-standing challenge. Recently, ad-hoc teamwork and zero-shot coordination have shown promising advances in open-world settings, requiring agents to coordinate efficiently with a range of unseen human partners. However, these methods usually assume an overly idealistic scenario of homogeneity between the agent and the partner, which deviates from real-world conditions. To facilitate the practical deployment of human-AI coordination in open, real-world environments, we propose the first benchmark for open and real-world human-AI coordination (ORC), called ORCBench. ORCBench includes widely used human-AI coordination environments. Notably, to reflect real-world scenarios, ORCBench considers heterogeneity between AI agents and partners, encompassing variations in capabilities and observations, which aligns more closely with real-world applications. Furthermore, we introduce a framework known as Heterogeneous training with Communication (HeteC) for ORC. HeteC builds upon a heterogeneous training framework and enhances partner-population diversity through mixed partner training and frozen historical partners. Additionally, HeteC incorporates a communication module that enables human partners to communicate with AI agents, mitigating the adverse effects of partially observable environments. Through a series of experiments, we demonstrate the effectiveness of HeteC in improving coordination performance. Our contribution serves as an initial but important step towards addressing the challenges of ORC.
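The "mixed partner training with frozen historical partners" idea in the abstract can be sketched as a small partner pool: the learner periodically freezes a snapshot of the current partner and thereafter samples training partners from a mix of the live partner and the frozen history. The class below is a minimal illustrative sketch, not HeteC's actual implementation; the names and the mixing probability are assumptions.

```python
import copy
import random

class PartnerPool:
    """Sketch of mixed partner training: sample either the current partner
    or a frozen historical snapshot, to diversify the training population."""

    def __init__(self, p_frozen=0.5, seed=0):
        self.frozen = []            # frozen historical partner snapshots
        self.p_frozen = p_frozen    # probability of picking a frozen partner
        self.rng = random.Random(seed)

    def snapshot(self, partner):
        """Freeze a copy of the current partner into the historical pool."""
        self.frozen.append(copy.deepcopy(partner))

    def sample(self, current):
        """Pick a training partner: frozen snapshot with prob. p_frozen
        (if any exist), otherwise the live, still-learning partner."""
        if self.frozen and self.rng.random() < self.p_frozen:
            return self.rng.choice(self.frozen)
        return current
```

Training against frozen snapshots keeps earlier behavioral styles in the population even as the live partner drifts, which is what gives the learner robustness to a range of partners.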
Abstract: This paper explores effective human-AI collaboration in academic writing using Large Language Models (LLMs). Focusing on the two critical stages of ideation and revision, the article argues that higher education institutions must develop specific pedagogical strategies to guide students in leveraging the benefits of LLMs while mitigating risks such as academic-integrity issues, over-reliance, and bias. The core of these strategies is to emphasize the primacy of human agency, critical thinking, and ethical responsibility. The ultimate goal is to transform AI from a potential pitfall into a powerful tool that enhances scholarly skills and depth of thought, rather than a simple text generator.