In the era of intelligent media, the interaction between teachers and students in higher education is undergoing a profound transformation. The model has shifted from one-way transmission to multi-agent, two-way collaboration involving “teacher-student-AI (artificial intelligence)”. Interaction depth moves from surface Q&A to deep thought engagement, supported by instant, precise feedback and a blended virtual-physical space. New forms such as data-driven personalized interaction and immersive collaborative learning have emerged. However, this evolution brings significant challenges: over-reliance on technology may weaken cognitive autonomy; virtual interaction risks emotional detachment and trust erosion; ethical concerns such as algorithmic bias and data privacy arise; teachers’ roles become blurred; and evaluation systems lag behind technological advances. Future pathways should position AI as a supportive tool while upholding human centrality. Strengthening emotional connection through online-offline blending, reforming assessment to value process and growth, and empowering teachers as digitally literate “learning guides” and “emotional connectors” are key to building a healthy, sustainable interactive ecosystem.
Background Augmented reality classrooms have become an interesting research topic in the field of education, but there are some limitations. First, most researchers use cards to operate experiments, and a large number of cards cause difficulty and inconvenience for users. Second, most users conduct experiments only in the visual modality, and such single-modality interaction greatly reduces the users' real sense of interaction. To solve these problems, we propose the Multimodal Interaction Algorithm based on Augmented Reality (ARGEV), which is based on visual and tactile feedback in augmented reality. In addition, we design a Virtual and Real Fusion Interactive Tool Suite (VRFITS) with gesture recognition and intelligent equipment. Methods The ARGEV method fuses gestures, intelligent equipment, and virtual models. We use a gesture recognition model trained by a convolutional neural network to recognize gestures in AR and trigger vibration feedback after recognizing a five-finger grasp gesture. We establish a coordinate mapping relationship between real hands and the virtual model to achieve the fusion of gestures and the virtual model. Results The average accuracy rate of gesture recognition was 99.04%. We verify and apply VRFITS in the Augmented Reality Chemistry Lab (ARCL), and the overall operation load of ARCL is thus reduced by 29.42% in comparison to traditional simulated virtual experiments. Conclusions We achieve real-time fusion of gestures, virtual models, and intelligent equipment in ARCL. Compared with the NOBOOK virtual simulation experiment, ARCL improves the users' real sense of operation and interaction efficiency.
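The recognize-then-vibrate loop described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: the heuristic below replaces the trained convolutional network, and the label name "five_finger_grasp" and the haptics hook are assumptions.

```python
# Minimal sketch of the ARGEV-style loop: classify the hand gesture,
# and trigger tactile feedback only when a five-finger grasp is seen.

def classify_gesture(landmarks):
    """Toy stand-in for the CNN recognizer: a 'grasp' when all five
    fingertip values are close to the palm value (landmarks[0])."""
    palm, tips = landmarks[0], landmarks[1:]
    curled = sum(1 for t in tips if abs(t - palm) < 0.2)
    return "five_finger_grasp" if curled == 5 else "open_hand"

def handle_frame(landmarks, haptics):
    """Process one tracked frame; append a vibration event on a grasp."""
    label = classify_gesture(landmarks)
    if label == "five_finger_grasp":
        haptics.append("vibrate")  # would drive the device motor in AR
    return label
```

In a real system the landmark vector would come from the AR hand tracker and the haptics sink would be the intelligent equipment's vibration motor.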
Background The growing number of robots has created new requirements for human-robot interaction. One of the problems in human-swarm robot interaction is how to naturally achieve efficient and accurate interaction between humans and swarm robot systems. To address this, this paper proposes a new type of natural human-swarm interaction system. Methods Through the cooperation of a three-dimensional (3D) gesture interaction channel and a natural language instruction channel, natural and efficient interaction between a human and swarm robots is achieved. Results First, a 3D lasso technique realizes batch-picking interaction with swarm robots through oriented bounding boxes. Second, control instruction labels for swarm-oriented robots are defined. The instruction label is integrated with the 3D gesture and natural language through instruction label filling. Finally, the understanding of natural language instructions is realized through a text classifier based on the maximum entropy model. A head-mounted augmented reality display device is used as a visual feedback channel. Conclusions Experiments on selecting robots verify the feasibility and availability of the system.
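A maximum-entropy text classifier, as used above for instruction understanding, is equivalent to multinomial logistic (softmax) regression over sparse text features. The toy trainer below shows the idea; the instruction labels ("MOVE", "STOP") and example commands are illustrative assumptions, not the paper's label set.

```python
import math
from collections import defaultdict

def featurize(text):
    # Bag-of-words binary features.
    return set(text.lower().split())

def train(samples, labels, epochs=200, lr=0.5):
    """Per-sample gradient ascent on the maximum-entropy log-likelihood."""
    classes = sorted(set(labels))
    w = {c: defaultdict(float) for c in classes}
    for _ in range(epochs):
        for text, y in zip(samples, labels):
            feats = featurize(text)
            scores = {c: sum(w[c][f] for f in feats) for c in classes}
            m = max(scores.values())
            exps = {c: math.exp(s - m) for c, s in scores.items()}
            z = sum(exps.values())
            for c in classes:
                # Observed minus expected feature count for class c.
                grad = (1.0 if c == y else 0.0) - exps[c] / z
                for f in feats:
                    w[c][f] += lr * grad
    return w, classes

def predict(w, classes, text):
    feats = featurize(text)
    return max(classes, key=lambda c: sum(w[c][f] for f in feats))
```

Trained on a few labeled commands, the classifier maps a spoken instruction to a control label that can then be filled with the robots selected by the 3D lasso gesture.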
Background With an increasing number of vehicles becoming autonomous, intelligent, and connected, attention to the future use of the car human-machine interface (HMI) in these vehicles becomes more relevant. Several studies have addressed car HMI but have paid less attention to designing and implementing interactive glazing for everyday (autonomous) driving contexts. Methods Reflecting on the literature, we describe an engineering psychology practice and the design of six novel future user scenarios, which envision the application of a specific set of augmented reality (AR)-supported user interactions. Additionally, we conduct evaluations on specific scenarios and experiential prototypes, which reveal that these AR scenarios aid the target user groups in experiencing a new type of interaction. The overall evaluation is positive, with valuable assessment results and suggestions. Conclusions This study may interest applied psychology educators who aspire to teach how AR can be operationalized in a human-centered design process to students with minimal pre-existing expertise or scientific knowledge in engineering psychology.
Deep learning-based methods have achieved remarkable success in object detection, but this success requires the availability of a large number of training images. Collecting sufficient training images is difficult when detecting damages of airplane engines. Directly augmenting images by rotation, flipping, and random cropping cannot further improve the generalization ability of existing deep models. We propose an interactive augmentation method for airplane engine damage images using a prior-guided GAN to augment training images. Our method can generate many types of damages on arbitrary image regions according to the strokes of users. The proposed model consists of a prior network and a GAN. The prior network generates a shape prior vector, which is used to encode the information of user strokes. The GAN takes the shape prior vector and random noise vectors to generate candidate damages. Final damages are pasted on the given positions of background images with an improved Poisson fusion. We compare the proposed method with traditional data augmentation methods by training airplane engine damage detectors with state-of-the-art object detectors, namely Mask R-CNN, SSD, and YOLOv5. Experimental results show that training with images generated by our proposed data augmentation method achieves better detection performance than training with traditional data augmentation methods.
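The pipeline above has two mechanical steps that can be sketched independently of the GAN itself: building the generator input by concatenating the shape prior vector with noise, and pasting a generated patch at a user-given position. The sketch below is a simplified stand-in under stated assumptions: the generator is omitted, and a plain masked paste replaces the paper's improved Poisson fusion (which additionally blends gradients at the seam).

```python
import numpy as np

def make_generator_input(shape_prior, noise_dim=8, rng=None):
    """Concatenate the stroke-derived shape prior with random noise,
    forming the conditioning vector fed to the generator."""
    rng = rng or np.random.default_rng(0)
    return np.concatenate([shape_prior, rng.standard_normal(noise_dim)])

def paste_patch(background, patch, mask, top, left):
    """Paste `patch` onto `background` where `mask` is 1.
    A hard masked copy, standing in for Poisson fusion."""
    out = background.copy()
    h, w = patch.shape
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.where(mask == 1, patch, region)
    return out
```

Swapping the hard copy for a gradient-domain blend (e.g., Poisson image editing) would remove the visible seam that this simple version leaves around the damage region.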
Background Gesture is a basic interaction channel that humans frequently use to communicate in daily life. In this paper, we explore the use of gesture-based approaches for target acquisition in virtual and augmented reality. A typical process of gesture-based target acquisition is as follows: when a user intends to acquire a target, she performs a gesture with her hands, head, or other parts of the body; the computer senses and recognizes the gesture and infers the most probable target. Methods We build a mental model and a behavior model of the user to study two key parts of the interaction process. The mental model describes how the user thinks up a gesture for acquiring a target and can be the intuitive mapping between gestures and targets. The behavior model describes how the user moves body parts to perform the gesture, and the relationship between the gesture the user intends to perform and the signals the computer senses. Results We present and discuss three pieces of research that focus on the mental model and behavior model of gesture-based target acquisition in VR and AR. Conclusions We show that, by leveraging these two models, interaction experience and performance can be improved in VR and AR environments.
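Inferring "the most probable target" from a sensed gesture has a natural Bayesian reading: score each candidate by prior times likelihood and take the argmax. The sketch below is one minimal way to do this, assuming a Gaussian likelihood on the angular offset of a pointing ray; it is an illustration of the idea, not the models from the cited studies.

```python
import math

def infer_target(ray_angle, targets, sigma=0.1):
    """targets: list of (name, angle, prior) tuples.
    Returns the target maximizing prior * Gaussian(angular offset)."""
    def score(t):
        name, angle, prior = t
        return prior * math.exp(-((ray_angle - angle) ** 2) / (2 * sigma ** 2))
    return max(targets, key=score)[0]
```

The prior term is where a mental model can enter: targets that users intuitively associate with the performed gesture receive higher prior probability, so a slightly noisy pointing ray still resolves to the intended object.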
Because of the evolution of markets and technologies, prototyping concerns should be kept up to date almost day by day. Moreover, user-centered design moves the focus toward interaction issues. Prototyping activities matching such characteristics are already available, but they are not widely adopted in the industrial domain. There are many reasons for this; an important one is that a rigorous classification of these activities is missing, as is an effective tool to help select the best activities for a given design context. The research described in this paper aims at defining a new classification of prototyping activities and at developing a selection algorithm to choose the best ones automatically. These goals are pursued by defining a set of characteristics that allows the prototyping activities to be described accurately. The resulting classification comprises five classes, based on eighteen characteristics. This classification is exploited by the first release of an algorithm for selecting the best activities, chosen to satisfy design situations described by a separate set of eleven indices. Five field experiences have been used so far as a starting point for validating the research outcomes.
Six degrees of freedom (6DoF) input interfaces are essential for manipulating virtual objects through translation or rotation in three-dimensional (3D) space. A traditional outside-in tracking controller requires the installation of expensive hardware in advance. While inside-out tracking controllers have been proposed, they often suffer from limitations such as interaction limited to the tracking range of the sensor (e.g., a sensor on the head-mounted display (HMD)) or the need for pose value modification to function as an input interface (e.g., a sensor on the controller). This study investigates 6DoF pose estimation methods without restricting the tracking range, using a smartphone as a controller in augmented reality (AR) environments. Our approach involves proposing methods for estimating the initial pose of the controller and correcting the pose using an inside-out tracking approach. In addition, seven pose estimation algorithms were presented as candidates depending on the tracking range of the device sensor, the tracking method (e.g., marker recognition, visual-inertial odometry (VIO)), and whether modification of the initial pose is necessary. The performance of the algorithms was evaluated through two experiments (with discrete and continuous data). The results demonstrate enhanced final pose accuracy achieved by correcting the initial pose. Furthermore, the importance of selecting the tracking algorithm based on the tracking range of the devices and the actual input value of the 3D interaction was emphasized.
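The estimate-then-correct structure described above reduces, in matrix form, to composing homogeneous transforms: an initial 6DoF pose is chained with incremental deltas reported by the inside-out (e.g., VIO) tracker, pose_t = pose_0 · delta_1 · … · delta_t. The helper below is a generic sketch of that composition, not the paper's seven algorithms.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_deltas(initial_pose, deltas):
    """Correct an initial pose by chaining incremental tracker transforms."""
    pose = initial_pose.copy()
    for d in deltas:
        pose = pose @ d
    return pose
```

Because errors in the initial pose propagate through every subsequent delta, improving the initial estimate (e.g., via marker recognition) improves the final pose accuracy, which matches the experimental result reported above.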
This paper investigates the application of Natural Language Processing (NLP) in AI interaction design for virtual experiences. It analyzes the impact of various interaction methods on user experience, integrating Virtual Reality (VR) and Augmented Reality (AR) technologies to achieve more natural and intuitive interaction models through NLP techniques. Through experiments and data analysis across multiple technical models, this study proposes an innovative design solution based on natural language interaction and summarizes its advantages and limitations in immersive experiences.
Due to the narrowness of space and the complexity of structure, the assembly of the aircraft cabin has become one of the major bottlenecks in the whole manufacturing process. To solve this problem, the different stages of the aircraft lifecycle must be considered at the beginning of aircraft design, including trial manufacture, assembly, maintenance, recycling, and destruction of the product. Recently, thanks to the development of virtual reality and augmented reality, some low-cost and fast solutions have been found for product assembly. This paper presents a mixed reality-based interactive technology for aircraft cabin assembly, which can enhance the efficiency of assembly in a virtual environment in terms of vision, information, and operation. In the mixed reality-based assembly environment, the physical scene can be captured by a camera and then generated by a computer. The virtual parts, the features of visual assembly, the navigation information, the physical parts, and the physical assembly environment are mixed and presented in the same assembly scene. The mixed or augmented information provides assembling information as a detailed assembly instruction in the mixed reality-based assembly environment. A constraint proxy and its match rules help to reconstruct and visualize the restriction relationships among different parts and to avoid the complex calculation of constraint matching. Finally, a desktop prototype system of virtual assembly has been built to assist assembly verification and training with a virtual hand.
This paper presents a data fusion algorithm for dynamic systems with multiple sensors and uncertain system models. The algorithm is mainly based on the Kalman filter and the interacting multiple model (IMM). It processes cross-correlated sensor noises by using augmented fusion before model interaction, and eigenvalue decomposition is utilized to reduce calculation complexity and implement parallel computing. In the simulation part, the feasibility of the algorithm was tested and verified, and the relationship between the number of sensors and the estimation precision was studied. Results show that simply increasing the number of sensors cannot always improve the performance of the estimation. The type and number of sensors should be optimized in practical applications.
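The Kalman measurement update at the core of such a fusion scheme can be shown in scalar form. With independent sensors, measurements can be fused by applying the update sequentially; the paper additionally handles cross-correlated noise via augmentation and model interaction, which this minimal sketch omits.

```python
def kalman_fuse(x, P, measurements):
    """Scalar Kalman measurement updates.
    x, P: prior state estimate and variance.
    measurements: list of (z, R) pairs, one per sensor,
    where z is the reading and R its noise variance."""
    for z, R in measurements:
        K = P / (P + R)       # Kalman gain
        x = x + K * (z - x)   # state update toward the measurement
        P = (1 - K) * P       # posterior variance shrinks
    return x, P
```

The diminishing-returns behavior is visible even here: each additional equal-quality sensor shrinks the posterior variance by a smaller amount, consistent with the finding above that simply adding sensors does not always improve estimation performance.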
Cultural relics visualization brings digital archives of relics to broader audiences in many applications, such as education, historical research, and virtual museums. However, previous research has mainly focused on modeling and rendering the relics. While enhancing accessibility, these techniques still provide limited ability to improve user engagement. In this paper, we introduce RelicCARD, a semantics-based augmented reality (AR) tangible interaction design for exploring cultural relics. Our design uses an easily available tangible interface to encourage users to interact with a large collection of relics. The tangible interface allows users to explore, select, and arrange relics to form customized scenes. To guide the design of the interface, we formalize a design space by connecting the semantics of relics, the tangible interaction patterns, and the exploration tasks. We realize the design space as a tangible interactive prototype and examine its feasibility and effectiveness through multiple case studies and an expert evaluation. Finally, we discuss the findings of the evaluation and future directions for improving the design and implementation of the interactive design space.
Augmented reality is a technique that allows users to overlay digital information on their physical world. Augmented Reality (AR) displays have exceptional characteristics from the Human-Computer Interaction (HCI) perspective. Due to its increasing popularity and application in diverse domains, increasing user-friendliness and AR usage are critical. Context awareness is one approach, since an AR application can adapt to the user, the environment, and their needs, enhancing ergonomic principles and functionality. This paper proposes the Intelligent Context-aware Augmented Reality Model (ICAARM) for Human-Computer Interaction systems. This study explores and reduces interaction uncertainty by semantically modeling user-specific interaction with context, allowing personalized interaction. Sensory information is captured from an AR device to understand user interactions and context. These depictions carry semantics to augmented reality applications about the user's intention to interact with a specific device affordance. Thus, this study describes personalized gesture interaction in VR/AR applications for immersive/intelligent environments.
Funding (ARGEV/VRFITS study): the National Key R&D Program of China (2018YFB1004901) and the Independent Innovation Team Project of Jinan City (2019GXRC013).
Funding (human-swarm interaction study): Key-Area Research and Development Program of Guangdong Province (2019B090915002).
Funding (car HMI study): supported by the ‘Automotive Glazing Application in Intelligent Cockpit Human-Machine Interface’ project (SKHX2021049), a collaboration between Saint-Gobain Research and Beijing Normal University.
Funding (airplane engine damage augmentation study): Natural Science Foundation of Tianjin, China (No. 20JCQNJC00720).
Funding (aircraft cabin assembly study): supported by the National Defence Basic Research Foundation of China (Grant No. B1420060173) and the National Hi-tech Research and Development Program of China (863 Program, Grant No. 2006AA04Z138).
Funding (multi-sensor data fusion study): the National Natural Science Foundation of China (No. 61374160) and the Shanghai Aerospace Science and Technology Innovation Fund (No. SAST201237).
Funding (RelicCARD study): supported by the National Natural Science Foundation of China through grant 62172456 and the Research and Development Plan in Key Areas of Guangdong Province through grant 2022B0101020002.