Human Interaction Recognition (HIR) is one of the challenging problems in computer vision research due to the involvement of multiple individuals and their mutual interactions within video frames generated from their movements. HIR requires more sophisticated analysis than Human Action Recognition (HAR), since HAR focuses solely on individual activities such as walking or running, while HIR involves the interactions between people. This research aims to develop a robust system for recognizing five common human interactions (hugging, kicking, pushing, pointing, and no interaction) from video sequences using multiple cameras. In this study, a hybrid Deep Learning (DL) and Machine Learning (ML) model was employed to improve classification accuracy and generalizability. The dataset was collected in an indoor environment, with four-channel cameras capturing the five types of interactions among 13 participants. The data was processed using a DL model with a fine-tuned ResNet (Residual Network) architecture based on 2D Convolutional Neural Network (CNN) layers for feature extraction. Subsequently, machine learning models were trained for interaction classification using six commonly used ML algorithms: SVM, KNN, RF, DT, NB, and XGBoost. The results demonstrate a high accuracy of 95.45% in classifying human interactions. The hybrid approach enabled effective learning, resulting in highly accurate performance across the different interaction types. Future work will explore more complex scenarios involving multiple individuals based on this architecture.
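The two-stage design described above (deep features, then a classical classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fine-tuned ResNet is replaced by a hypothetical `extract_features` stand-in, and a simple nearest-centroid classifier stands in for the SVM/KNN/RF/DT/NB/XGBoost models.

```python
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the CNN feature extractor: flattens the frame.
    In the paper this would be the fine-tuned ResNet backbone."""
    return frame.reshape(-1).astype(float)

class NearestCentroid:
    """Minimal classical classifier standing in for SVM/KNN/etc."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                           for c in self.classes_}
        return self

    def predict(self, X):
        return [min(self.classes_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]

# Toy "frames" for two interaction classes (made-up data).
rng = np.random.default_rng(0)
frames_a = [rng.normal(0.0, 0.1, (4, 4)) for _ in range(10)]
frames_b = [rng.normal(1.0, 0.1, (4, 4)) for _ in range(10)]
X = [extract_features(f) for f in frames_a + frames_b]
y = ["hug"] * 10 + ["kick"] * 10

clf = NearestCentroid().fit(X, y)
pred = clf.predict([extract_features(rng.normal(1.0, 0.1, (4, 4)))])
print(pred[0])  # prints "kick"
```

Swapping the stand-in classifier for any scikit-learn estimator with the same `fit`/`predict` interface reproduces the paper's "extract once, classify many ways" comparison setup.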
Human-human interaction recognition is crucial in computer vision fields like surveillance, human-computer interaction, and social robotics. It enhances systems' ability to interpret and respond to human behavior precisely. This research focuses on recognizing human interaction behaviors from static images, which is challenging due to the complexity of diverse actions. The overall purpose of this study is to develop a robust and accurate system for human interaction recognition. This research presents a novel image-based human interaction recognition method using a Hidden Markov Model (HMM). The technique employs hue, saturation, and intensity (HSI) color transformation to enhance colors in video frames, making them more vibrant and visually appealing, especially in low-contrast or washed-out scenes. Gaussian filters reduce noise and smooth imperfections, followed by silhouette extraction using a statistical method. Feature extraction uses the Features from Accelerated Segment Test (FAST) and Oriented FAST and Rotated BRIEF (ORB) techniques. The application of Quadratic Discriminant Analysis (QDA) for feature fusion and discrimination enables high-dimensional data to be analyzed effectively, further enhancing the classification process. It ensures that the final features loaded into the HMM classifier accurately represent the relevant human activities. The impressive accuracy rates of 93% and 94.6% achieved on the BIT-Interaction and UT-Interaction datasets, respectively, highlight the success and reliability of the proposed technique. The proposed approach addresses challenges in various domains by focusing on frame improvement, silhouette and feature extraction, feature fusion, and HMM classification. This enhances data quality, accuracy, adaptability, and reliability, and reduces errors.
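The final HMM classification stage described above can be sketched as follows: one HMM per interaction class, with a feature sequence assigned to the class whose model yields the highest likelihood under the forward algorithm. The two-state models and discrete observation symbols below are hypothetical placeholders, not the trained models from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | HMM) via the scaled forward algorithm.
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()            # scale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
# Class-specific emission matrices (rows: states, cols: 3 symbols).
B_hug  = np.array([[0.8, 0.1, 0.1], [0.6, 0.2, 0.2]])
B_kick = np.array([[0.1, 0.1, 0.8], [0.2, 0.2, 0.6]])

seq = [2, 2, 1, 2, 2]               # a sequence dominated by symbol 2
models = {"hug": B_hug, "kick": B_kick}
label = max(models, key=lambda c: forward_log_likelihood(seq, pi, A, models[c]))
print(label)  # prints "kick"
```

In practice the discrete symbols would come from quantizing the fused ORB/QDA feature vectors, and the per-class parameters would be learned with Baum-Welch rather than set by hand.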
Impressive advancements and novel techniques have been witnessed in AI-based Human Intelligent-Things Interaction (HITI) systems. Several technological breakthroughs have contributed to HITI, such as the Internet of Things (IoT), deep and edge learning for deducing intelligence, and 6G for ultra-fast and ultra-low-latency communication between cyber-physical HITI systems. However, human-AI teaming presents several challenges that are yet to be addressed, despite the many advancements that have been made towards it. Allowing human stakeholders to understand AI's decision-making process is a novel challenge. Artificial Intelligence (AI) needs to adopt diversified human-understandable features, such as ethics, non-bias, trustworthiness, explainability, safety guarantees, data privacy, system security, and auditability. While adopting these features, high system performance should be maintained, and the transparent processing involved in human intelligent-things teaming should be conveyed. To this end, we introduce the fusion of four key technologies, namely an ensemble of deep learning, 6G, IoT, and corresponding security/privacy techniques, to support HITI, and present a framework that integrates them. In this paper, we present a comprehensive review of the existing techniques for fusing security and privacy within future HITI applications. Moreover, we showcase two security applications as proof of concept that use the fusion of the four key technologies to offer next-generation HITI services, namely intelligent smart city surveillance and emergency service handling. This proposed research outcome is envisioned to democratize the use of AI within smart city surveillance applications.
Humans regularly interact with their surrounding objects. Such interactions often result in strongly correlated motions between humans and the interacting objects. We thus ask: "Is it possible to infer object properties from skeletal motion alone, even without seeing the interacting object itself?" In this paper, we present a fine-grained action recognition method that learns to infer such latent object properties from human interaction motion alone. This inference allows us to disentangle the motion from the object property and transfer object properties to a given motion. We collected a large number of videos and 3D skeletal motions of performing actors using an inertial motion capture device. We analyzed similar actions and learned the subtle differences between them to reveal latent properties of the interacting objects. In particular, we learned to identify the interacting object by estimating its weight or its spillability. Our results clearly demonstrate that motions and interacting objects are highly correlated and that the related latent object properties can be inferred from 3D skeleton sequences alone, leading to new synthesis possibilities for motions involving human interaction. Our dataset is available at http://vcc.szu.edu.cn/research/2020/IT.html.
Identifying human actions and interactions finds its use in many areas, such as security, surveillance, assisted living, patient monitoring, rehabilitation, sports, and e-learning. This wide range of applications has attracted many researchers to this field. Inspired by the existing recognition systems, this paper proposes a new and efficient human-object interaction recognition (HOIR) model which is based on modeling human pose and scene feature information. There are different aspects involved in an interaction, including the humans, the objects, the various body parts of the human, and the background scene. The main objectives of this research include critically examining the importance of all these elements in determining the interaction, estimating human pose through the image foresting transform (IFT), and detecting the performed interactions based on an optimized multi-feature vector. The proposed methodology has six main phases. The first phase involves preprocessing the images: the videos are converted into image frames, their contrast is adjusted, and noise is removed. In the second phase, the human-object pair is detected and extracted from each image frame. The third phase involves the identification of key body parts of the detected humans using IFT. The fourth phase applies three different kinds of feature extraction techniques. These features are then combined and optimized during the fifth phase. The optimized vector is used to classify the interactions in the last phase. The MSR Daily Activity 3D dataset has been used to test this model and to prove its efficiency. The proposed system obtains an average accuracy of 91.7% on this dataset.
In the new era of technology, daily human activities are becoming more challenging in terms of monitoring complex scenes and backgrounds. To understand scenes and activities from human life logs, human-object interaction (HOI) is important for visual relationship detection and human pose estimation. We address activity understanding and interaction recognition between humans and objects, along with pose estimation and interaction modeling. Some existing algorithms and feature extraction procedures are complicated, including accurate detection of rare human postures and occluded regions, and suffer from unsatisfactory detection of objects, especially small-sized objects. The existing HOI detection techniques are instance-centric (object-based), where interaction is predicted between all pairs. Such estimation depends on appearance features and spatial information. Therefore, we propose a novel approach to demonstrate that appearance features alone are not sufficient to predict HOI. Furthermore, we detect human body parts using the Gaussian Matric Model (GMM), followed by object detection using YOLO. We predict the interaction points, which directly classify the interaction, and pair them with densely predicted HOI vectors using the interaction algorithm. The interactions are linked with the human and object to predict the actions. Experiments on two benchmark HOI datasets demonstrate the proposed approach.
In this paper, we present an RFID-based human and Unmanned Aerial Vehicle (UAV) interaction system, termed RFHUI, to provide an intuitive and easy-to-operate method for navigating a UAV in an indoor environment. It relies on passive Radio-Frequency IDentification (RFID) technology to precisely track the pose of a handheld controller and then transfers the pose information to navigate the UAV. A prototype of the handheld controller is created by attaching three or more Ultra High Frequency (UHF) RFID tags to a board. A Commercial Off-The-Shelf (COTS) RFID reader with multiple antennas is deployed to collect the observations of the tags. First, the precise positions of all the tags are obtained by our proposed method, which leverages a Bayesian filter and Channel State Information (CSI) phase measurements collected from the RFID reader. Second, we introduce a Singular Value Decomposition (SVD) based approach to obtain the 6-DoF (Degrees of Freedom) pose of the controller from the estimated positions of the tags. Furthermore, the pose of the controller can be precisely tracked in real time while the user moves the controller. Finally, control commands are generated from the controller's pose and sent to the UAV for navigation. The performance of RFHUI is evaluated through several experiments. The results show that it provides precise poses, with a 0.045 m mean error in position and a 2.5° mean error in orientation for the controller, and enables the controller to precisely and intuitively navigate the UAV in an indoor environment.
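The SVD-based pose step described above is, in essence, a rigid-registration problem: given the tags' known layout on the controller board (body frame) and their estimated world positions, recover the rotation and translation. A standard way to do this is the Kabsch algorithm, sketched below; the tag layout and the ground-truth pose are invented for illustration and are not the paper's data.

```python
import numpy as np

def rigid_pose_from_points(P_body, P_world):
    """Return (R, t) such that P_world ≈ P_body @ R.T + t, via SVD
    (Kabsch algorithm on centered point sets)."""
    cb, cw = P_body.mean(axis=0), P_world.mean(axis=0)
    H = (P_body - cb).T @ (P_world - cw)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cw - R @ cb

# Three tags on the board (body frame), rotated 90 degrees about z and shifted.
P_body = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 0.5])
P_world = P_body @ R_true.T + t_true

R, t = rigid_pose_from_points(P_body, P_world)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noisy, Bayesian-filtered tag positions (as in the paper) the same decomposition returns the least-squares pose rather than an exact one, which is why more than three tags improve robustness.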
Recent advancements in the Internet of Things (IoT) and cloud computing have paved the way for mobile healthcare (mHealthcare) services. A patient within the hospital is monitored by several devices. Moreover, upon leaving the hospital, the patient can be remotely monitored, whether directly using body-wearable sensors or using a smartphone equipped with sensors, to track different user-health parameters. This raises potential challenges for the intelligent monitoring of a patient's health. In this paper, an improved architecture for smart mHealthcare is proposed that is supported by HCI design principles. HCI also provides support for the User-Centric Design (UCD) of smart mHealthcare models. Furthermore, HCI, along with the IoT's 5-layered architecture, has the potential to improve the User Experience (UX) in mHealthcare design and help save lives. The intelligent mHealthcare system is supported by the IoT sensing and communication layers, and healthcare providers are supported by the application layer for medical, behavioral, and health-related information. Healthcare providers and users are further supported by an intelligent layer that performs critical-situation assessment and multi-modal communication using an intelligent assistant. The HCI design focuses on ease of use, including user experience and safety, alarms, and error-resistant displays for the end user, and improves the user's experience and satisfaction.
G-quadruplex ligands have been accepted as potential therapeutic agents for anticancer treatment. Thioflavin T (ThT), a highly selective G-quadruplex ligand, can bind G-quadruplexes with a fluorescent light-up signal change and high specificity against DNA duplexes. However, there are still differing opinions on whether ThT induces or stabilizes G-quadruplex foldings/topologies in the human telomere sequence. Here, a sensitive single-molecule nanopore technology was utilized to analyze the interactions between human telomeric DNA (Tel DNA) and ThT. Both translocation time and current blockade were measured to investigate the translocation behaviors. Furthermore, the effects of metal ions (K+ and Na+) and pH on the translocation behaviors were studied. Based on the single-molecule-level analysis, several conclusions can be drawn: (1) in an electrolyte solution containing 50 mmol/L KCl and 450 mmol/L NaCl, ThT binds strongly with Tel DNA but hardly changes the G-quadruplex form; (2) in the presence of a high concentration of K+, ThT binding induces the structural change of the hybrid G-quadruplex into an antiparallel topology with enhanced structural stability; (3) in either alkaline or acidic buffer, the G-quadruplex form changes to a certain degree. These conclusions help deepen the understanding of the interaction behaviors between Tel DNA and ThT. This nanopore platform, which investigates G-quadruplex ligands at the single-molecule level, has great potential for the design of new drugs and sensors.
Using COM, VB, and VC, we developed a visual human-computer interaction system for mining association rules. It can mine association rules from databases created with Access and SQL Server, as well as from text-mode data. Through the interactive interface, the user participates in the data mining process, guiding the system to mine satisfactory rules.
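The core computation such a system automates is the evaluation of candidate association rules by support and confidence. The sketch below shows that step in miniature; the transactions and the 50%/60% thresholds are illustrative, not values from the paper.

```python
from itertools import permutations

# Toy transaction database (each transaction is a set of items).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# Enumerate single-item rules a -> c passing min_support and min_confidence.
items = sorted(set().union(*transactions))
rules = []
for a, c in permutations(items, 2):
    ant, con = {a}, {c}
    if support(ant | con) >= 0.5 and confidence(ant, con) >= 0.6:
        rules.append((a, c))
print(rules)
```

A full Apriori-style miner additionally grows frequent itemsets level by level, pruning any candidate whose subset is infrequent, which is what keeps the search tractable on real databases.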
This paper discusses some issues in human reliability modeling of time-dependent human behavior. Some results of the crew reliability experiment on the Tsinghua training simulator in China are given. Meanwhile, a case calculation of the human error probability during an anticipated transient without scram (ATWS), based on data drawn from the recent experiment, is offered.
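Time-dependent human reliability models of the kind discussed above typically express the probability of crew non-response as a decreasing function of available time. A common shape (used in HCR-style models) is a Weibull-like curve; the sketch below uses illustrative parameters, not values from the Tsinghua experiment.

```python
import math

def non_response_probability(t, t_half=60.0, beta=1.2):
    """P(no correct crew action by time t), t and t_half in seconds.
    Normalized so that P = 0.5 exactly at t = t_half; beta controls
    how sharply reliability improves with more available time."""
    return math.exp(-math.log(2) * (t / t_half) ** beta)

# Non-response probability shrinks as the available time window grows.
for t in (30, 60, 120):
    print(t, round(non_response_probability(t), 3))
```

Fitting `t_half` and `beta` to simulator data (crew response times under a given transient) is how such curves are calibrated before being used in a probabilistic safety assessment.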
Despite the availability of advanced security software and hardware mechanisms, there are still breaches in the defence systems of organizations and individuals. Social engineering mostly targets the weakest link in the security system, i.e., humans, to gain access to sensitive information by manipulating human psychology. Social engineering attacks are arduous to defend against, as they are not easily detected by available security software or hardware. This article surveys recent studies on social engineering attacks, discusses the phases of social engineering, and categorizes the various attacks into two groups. The main aim of this survey is to examine the various social engineering attacks on individuals; countermeasures against social engineering attacks are also discussed.
Many human-machine collaborative scheduling support systems aid human decision-making by providing several optimal scheduling algorithms that do not take the operator's attention into consideration. However, such systems should take advantage of the operator's attention to obtain better solutions. In this paper, we propose a human-machine collaborative support scheduling system for intelligence information from multi-UAVs based on an eye tracker. First, a target recognition algorithm is applied to the images from the multiple unmanned aerial vehicles (multi-UAVs) to recognize the targets in the images. Then, the support system uses the eye tracker to obtain the eye-gaze points, which identify the focused targets in the images. Finally, heuristic scheduling algorithms take both the attributes of the targets and the operator's attention into consideration to determine the sequence of the images. The processing times of the images collected by the multi-UAVs are uncertain, but their upper and lower bounds are known in advance, so the processing times are modeled as interval processing times. The objective of the scheduling problem is to minimize the mean weighted completion time. This paper proposes new polynomial-time heuristic scheduling algorithms that first schedule the images containing the focused targets. We conduct scheduling experiments under six different distributions. The results indicate that the proposed algorithm is not sensitive to the different distributions of the processing time and has negligible computational time. The absolute error of the best-performing heuristic solution is only about 1%. We then incorporate the best-performing heuristic algorithm into the human-machine collaborative support system to verify the performance of the system.
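For the mean weighted completion time objective above, the classic baseline heuristic is the weighted-shortest-processing-time (WSPT) rule: sequence jobs by decreasing weight-to-processing-time ratio. The sketch below reduces the uncertain interval processing times to their midpoints as a stand-in for the paper's interval model; the jobs, weights, and intervals are invented, and this is not the paper's algorithm.

```python
def wspt_order(jobs):
    """jobs: list of (name, weight, (lower, upper)).
    Order by weight / midpoint processing time, descending (WSPT rule)."""
    mid = lambda interval: (interval[0] + interval[1]) / 2
    return sorted(jobs, key=lambda j: -j[1] / mid(j[2]))

def mean_weighted_completion(jobs_in_order, p_of):
    """Mean weighted completion time for a given sequence; p_of maps an
    interval to the processing time actually charged for it."""
    t, total, wsum = 0.0, 0.0, 0.0
    for name, w, interval in jobs_in_order:
        t += p_of(interval)          # job completes at time t
        total += w * t
        wsum += w
    return total / wsum

# Toy image-processing jobs: (name, priority weight, processing interval).
jobs = [("img1", 3, (2, 4)), ("img2", 1, (1, 3)), ("img3", 5, (2, 2))]
mid = lambda interval: (interval[0] + interval[1]) / 2

order = wspt_order(jobs)
print([j[0] for j in order])  # prints ['img3', 'img1', 'img2']
```

The paper's heuristics differ in that they additionally promote images containing gaze-focused targets to the front of the sequence before applying a completion-time criterion like this one.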
In the Anthropocene, health is necessary to achieve global sustainable development. This is a challenge because health issues are complex and span from humans to ecosystems and the environment through dynamic interactions across scales. We find that health issues have been mainly addressed by disciplinary endeavors, which unfortunately will not result in panoramic theories or effective solutions. We recommend focusing on the intricate interactions between humans, ecosystems, and the environment to develop common theoretical understandings and practical solutions for safeguarding planetary health, with human health as the key indicator and endpoint. To facilitate this paradigm shift, a holistic framework is formulated that incorporates disturbances from the inner Earth and our solar system and accommodates interactions between humans, ecosystems, and the environment in a nested hierarchy. An integrative and transdisciplinary health science is advocated, along with holistic thinking, to resolve our current health challenges and to achieve the health-related sustainable development goals.
With the mindset of constant improvement in efficiency and safety in the workspace and training in Singapore, there is a need to explore various technologies and their capabilities to fulfil this need. The ability of Virtual Reality (VR) and Augmented Reality (AR) to create an immersive experience tying together the virtual and physical environments, coupled with information-filtering capabilities, brings the possibility of introducing this technology into the training process and workspace. This paper surveys current research trends, findings, and limitations of VR and AR regarding their effects on human performance, specifically in Singapore, and our experience at the National University of Singapore (NUS).
Background: Large-screen visualization systems have been widely utilized in many industries. Such systems can help illustrate the working states of different production systems. However, efficient interaction with such systems is still a focus of related research. Methods: In this paper, we propose a touchless interaction system based on an RGB-D camera, using a novel bone-length constraining method. The proposed method optimizes the joint data collected from RGB-D cameras, producing more accurate and more stable results on very noisy data. The user can customize the system by modifying its finite-state machine and reuse gestures in multiple scenarios, reducing the number of gestures that need to be designed and memorized. Results/Conclusions: We tested the system in two cases. In the first case, we illustrate the process by which we improved the gesture designs of our system and evaluated it through a user study. In the second case, we applied the system in the mining industry and conducted a user study, in which users reported that the system is easy to use.
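A minimal version of a bone-length constraining step like the one described is to pull each noisy child-joint measurement back onto a sphere of known bone length around its parent joint, preserving direction. The skeleton fragment and lengths below are illustrative; the paper's full optimization over the whole skeleton is more involved.

```python
import numpy as np

def constrain_bone(parent, child, bone_length):
    """Rescale the parent->child vector to the known bone length,
    keeping the measured direction (a simple per-bone projection)."""
    v = child - parent
    return parent + v * (bone_length / np.linalg.norm(v))

shoulder = np.array([0.0, 1.4, 0.0])
noisy_elbow = np.array([0.35, 1.1, 0.05])   # RGB-D measurement, slightly off
upper_arm = 0.30                            # known bone length (meters)

elbow = constrain_bone(shoulder, noisy_elbow, upper_arm)
print(round(float(np.linalg.norm(elbow - shoulder)), 6))  # prints 0.3
```

Applying this projection down the kinematic chain (shoulder to elbow, elbow to wrist, and so on) keeps every bone at its calibrated length, which is what stabilizes gesture recognition on noisy depth-camera joints.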
Human-Computer Interaction (HCI) is a sub-area within computer science focused on the study of communication between people (users) and computers, and the evaluation, implementation, and design of user interfaces for computer systems. HCI has accomplished effective incorporation of human factors and software engineering of computing systems through the methods and concepts of cognitive science. Usability is an aspect of HCI dedicated to guaranteeing that human-computer communication is, amongst other things, efficient, effective, and sustaining for the user. Meanwhile, the aim of human activity recognition (HAR) is to identify actions from a sequence of observations of the activities of subjects and the environmental conditions. Vision-based HAR research is the basis of several applications involving health care, HCI, and video surveillance. This article develops a Fire Hawk Optimizer with Deep Learning Enabled Activity Recognition (FHODL-AR) for HCI-driven usability. In the presented FHODL-AR technique, the input images are investigated for the identification of different human activities. For feature extraction, a modified SqueezeNet model is introduced by the inclusion of a few bypass connections among the Fire modules of SqueezeNet. Besides, the FHO algorithm is utilized as a hyperparameter optimization algorithm, which in turn boosts the classification performance. To detect and categorize different kinds of activities, a probabilistic neural network (PNN) classifier is applied. The experimental validation of the FHODL-AR technique is tested using benchmark datasets, and the outcomes reported improvements of the FHODL-AR technique over other recent approaches.
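The PNN classification stage named above can be sketched compactly: each class's density is estimated with Gaussian Parzen windows over its training features, and a test vector is assigned to the class with the highest estimated density. The 2-D feature vectors and the smoothing parameter below are illustrative stand-ins for the SqueezeNet features and tuned hyperparameters used in the paper.

```python
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: score each class by the mean of
    Gaussian kernels centered on its training samples, pick the max."""
    scores = {}
    for c in set(train_y):
        pts = train_X[train_y == c]
        d2 = np.sum((pts - x) ** 2, axis=1)          # squared distances
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy 2-D "activity features" for two classes (made-up data).
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array(["walk", "walk", "wave", "wave"])

print(pnn_predict(np.array([0.05, 0.1]), train_X, train_y))  # prints walk
```

The single smoothing parameter `sigma` is exactly the kind of hyperparameter a metaheuristic such as the Fire Hawk Optimizer could tune by maximizing validation accuracy.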
Mobile applications are being used in a great range of fields and application areas. As a result, many research fields have focused on the study and improvement of such devices. Current smartphones are the best example of the research and evolution of these technologies. Moreover, software design and development is progressively more focused on the user, finding and developing new mobile interaction models. In order to do so, knowing what kinds of problems users could have is vital to improving a bad interaction design. Unfortunately, a good software quality evaluation takes more time than companies can invest. The contribution of this work is a new approach to quality-testing methodology focused on mobile interactions and their context of use, in which external capturing tools, such as cameras, are suppressed and the evaluation environments are the same ones in which the user will use the application. With this approach, the interactions can be captured without changing the context, and consequently the data will be more accurate, enabling the evaluation of quality-in-use in real environments.
Purpose: Patient-specific quality assurance (PSQA) requires manual operation of different workstations, which is time-consuming and error-prone. Therefore, developing automated solutions to improve efficiency and accuracy is a priority. The purpose of this study was to develop a general software interface with scripting on a human interactive device (HID) for improving the efficiency and accuracy of manual quality assurance (QA) procedures. Methods: As an initial application, we aimed to automate our PSQA workflow, which involves the Varian Eclipse treatment planning system, the Elekta MOSAIQ oncology information system, and the PTW Verisoft application. A general platform, the AutoFrame interface, with two embedded subsystems (AutoFlow and PyFlow), was developed with a scripting language for automating human operations of the aforementioned systems. The interface included three functional modules: a GUI module, a UDF script interpreter, and a TCP/IP communication module. All workstations in the PSQA process were connected, and most manual operations were automated by AutoFrame sequentially or in parallel. Results: More than 20 PSQA tasks were performed both manually and using the developed AutoFrame interface. On average, 175 (±12) manual operations of the PSQA procedure were eliminated and performed by the automated process. The time to complete a PSQA task was 8.23 (±0.78) minutes for the automated workflow, compared to 13.91 (±3.01) minutes for manual operation. Conclusion: We have developed the AutoFrame interface framework, which successfully automated our PSQA procedure and significantly reduced the time, human errors (in control, clicking, and typing), and operators' stress. Future work will focus on improving the system's flexibility and stability and extending its operation to other QA procedures.
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00218176) and the Soonchunhyang University Research Fund.
Funding: This work was funded under the Research Group Funding Program Grant Code (NU/RG/SERC/12/6); supported via funding from Prince Satam bin Abdulaziz University Project Number (PSAU/2023/R/1444) and Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R348), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and also supported by the Ministry of Science and ICT (MSIT), South Korea, through the ICT Creative Consilience Program supervised by the Institute for Information and Communications Technology Planning and Evaluation (IITP) under Grant IITP-2023-2020-0-01821.
Abstract: Human-human interaction recognition is crucial in computer vision fields such as surveillance, human-computer interaction, and social robotics, as it enhances systems' ability to interpret and respond to human behavior precisely. This research focuses on recognizing human interaction behaviors from static images, which is challenging due to the complexity of diverse actions. The overall purpose of this study is to develop a robust and accurate system for human interaction recognition. It presents a novel image-based human interaction recognition method using a Hidden Markov Model (HMM). The technique employs hue, saturation, and intensity (HSI) color transformation to enhance colors in video frames, making them more vibrant and visually distinct, especially in low-contrast or washed-out scenes. Gaussian filters reduce noise and smooth imperfections, followed by silhouette extraction using a statistical method. Feature extraction uses the Features from Accelerated Segment Test (FAST) and Oriented FAST and Rotated BRIEF (ORB) techniques. The application of Quadratic Discriminant Analysis (QDA) for feature fusion and discrimination enables high-dimensional data to be analyzed effectively, further enhancing the classification process; it ensures that the final features passed to the HMM classifier accurately represent the relevant human activities. The accuracy rates of 93% and 94.6% achieved on the BIT-Interaction and UT-Interaction datasets, respectively, highlight the success and reliability of the proposed technique. The proposed approach addresses challenges in various domains by focusing on frame improvement, silhouette and feature extraction, feature fusion, and HMM classification, enhancing data quality, accuracy, adaptability, and reliability while reducing errors.
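The QDA fusion step in the pipeline above can be sketched with scikit-learn. This is a minimal sketch under stated assumptions, not the paper's code: the synthetic 32-dimensional vectors stand in for ORB/FAST descriptors already pooled per frame, and the two classes are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(1)
# Stand-in for pooled ORB descriptors (32-dim) from two interaction classes,
# deliberately differing in both mean and spread.
X = np.vstack([rng.normal(0.0, 1.0, (60, 32)),
               rng.normal(1.5, 0.5, (60, 32))])
y = np.repeat([0, 1], 60)

# QDA fits one Gaussian (mean + full covariance) per class, so it can
# discriminate classes that differ in spread as well as location.
qda = QuadraticDiscriminantAnalysis().fit(X, y)

# Class posteriors act as fused, low-dimensional discriminative features
# that a downstream sequence classifier such as an HMM could consume.
fused = qda.predict_proba(X)
```

The design choice is that QDA compresses high-dimensional descriptor vectors into class-posterior features, which keeps the HMM's observation space small and well-conditioned.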
Abstract: Impressive advancements and novel techniques have been witnessed in AI-based Human Intelligent-Things Interaction (HITI) systems. Several technological breakthroughs have contributed to HITI, such as the Internet of Things (IoT), deep and edge learning for deducing intelligence, and 6G for ultra-fast, ultra-low-latency communication between cyber-physical HITI systems. However, despite the many advancements made towards human-AI teaming, several challenges remain unaddressed. Allowing human stakeholders to understand AI's decision-making process is one such challenge. Artificial Intelligence (AI) needs to adopt diversified human-understandable features, such as ethics, non-bias, trustworthiness, explainability, safety guarantees, data privacy, system security, and auditability. While adopting these features, high system performance should be maintained, and the processing involved in human intelligent-things teaming should be conveyed transparently. To this end, this paper presents a framework that fuses four key technologies, namely an ensemble of deep learning, 6G, IoT, and corresponding security/privacy techniques, to support AI-based Human Intelligent-Things Interaction. It also provides a comprehensive review of existing techniques for fusing security and privacy within future HITI applications, and demonstrates two security applications as proof of concept that use the fusion of the four key technologies to offer next-generation HITI services: intelligent smart city surveillance and emergency service handling. The proposed research outcome is envisioned to democratize the use of AI within smart city surveillance applications.
Funding: Supported in part by the Shenzhen Innovation Program (JCYJ20180305125709986); the National Natural Science Foundation of China (61861130365, 61761146002); the GD Science and Technology Program (2020A0505100064, 2015A030312015); and the DEGP Key Project (2018KZDXM058).
Abstract: Humans regularly interact with their surrounding objects. Such interactions often result in strongly correlated motions between humans and the interacting objects. We thus ask: "Is it possible to infer object properties from skeletal motion alone, even without seeing the interacting object itself?" In this paper, we present a fine-grained action recognition method that learns to infer such latent object properties from human interaction motion alone. This inference allows us to disentangle the motion from the object property and transfer object properties to a given motion. We collected a large number of videos and 3D skeletal motions of performing actors using an inertial motion capture device. We analyzed similar actions and learned subtle differences between them to reveal latent properties of the interacting objects. In particular, we learned to identify the interacting object by estimating its weight or its spillability. Our results clearly demonstrate that motions and interacting objects are highly correlated and that related latent object properties can be inferred from 3D skeleton sequences alone, leading to new synthesis possibilities for motions involving human interaction. Our dataset is available at http://vcc.szu.edu.cn/research/2020/IT.html.
Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2023-2018-0-01426) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). This work was also supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project (Number PNURSP2022R239), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and was partially supported by the Taif University Researchers Supporting Project (Number TURSP-2020/115), Taif University, Taif, Saudi Arabia.
Abstract: Identifying human actions and interactions finds its use in many areas, such as security, surveillance, assisted living, patient monitoring, rehabilitation, sports, and e-learning. This wide range of applications has attracted many researchers to this field. Inspired by existing recognition systems, this paper proposes a new and efficient human-object interaction recognition (HOIR) model based on modeling human pose and scene feature information. There are different aspects involved in an interaction, including the humans, the objects, the various body parts of the human, and the background scene. The main objectives of this research include critically examining the importance of all these elements in determining the interaction, estimating human pose through the image foresting transform (IFT), and detecting the performed interactions based on an optimized multi-feature vector. The proposed methodology has six main phases. The first phase involves preprocessing the images: the videos are converted into image frames, their contrast is adjusted, and noise is removed. In the second phase, the human-object pair is detected and extracted from each image frame. The third phase involves the identification of key body parts of the detected humans using IFT. The fourth phase applies three different kinds of feature extraction techniques. These features are then combined and optimized during the fifth phase. The optimized vector is used to classify the interactions in the last phase. The MSR Daily Activity 3D dataset has been used to test this model and to prove its efficiency. The proposed system obtains an average accuracy of 91.7% on this dataset.
Funding: Supported by the Priority Research Centers Program through the NRF funded by MEST (2018R1A6A1A03024003), and by the Grand Information Technology Research Center support program (IITP-2020-2020-0-01612) supervised by the IITP and funded by MSIT, Korea.
Abstract: In the new era of technology, daily human activities are becoming more challenging to monitor in terms of complex scenes and backgrounds. To understand scenes and activities from human life logs, human-object interaction (HOI) is important for visual relationship detection and human pose estimation. This work addresses activity understanding and interaction recognition between human and object, along with pose estimation and interaction modeling. Some existing algorithms and feature extraction procedures are complicated, struggling with accurate detection of rare human postures and occluded regions, and suffering from unsatisfactory detection of objects, especially small-sized ones. Existing HOI detection techniques are instance-centric (object-based), where interaction is predicted between all pairs; such estimation depends on appearance features and spatial information. Therefore, we propose a novel approach to demonstrate that appearance features alone are not sufficient to predict HOI. Furthermore, we detect human body parts using a Gaussian Mixture Model (GMM), followed by object detection using YOLO. We predict the interaction points, which directly classify the interaction, and pair them with densely predicted HOI vectors using the interaction algorithm. The interactions are linked with the human and object to predict the actions. Experiments on two benchmark HOI datasets demonstrate the proposed approach.
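The GMM-based body-part step can be illustrated with scikit-learn. This is a hedged sketch, not the paper's method: the synthetic (x, y) points stand in for silhouette pixels, and "head/torso/legs" is an assumed three-part decomposition chosen for the toy example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Stand-in for (x, y) coordinates of a person's silhouette pixels:
# three blobs roughly corresponding to head, torso, and legs.
head = rng.normal([50.0, 20.0], 3.0, (200, 2))
torso = rng.normal([50.0, 60.0], 6.0, (400, 2))
legs = rng.normal([50.0, 110.0], 8.0, (400, 2))
pixels = np.vstack([head, torso, legs])

# Fit one Gaussian component per hypothesized body part; the component
# means then serve as part locations for later interaction reasoning.
gmm = GaussianMixture(n_components=3, random_state=0).fit(pixels)
part_centers = gmm.means_[np.argsort(gmm.means_[:, 1])]  # top-to-bottom
```

On a real silhouette, the number of components and any spatial priors would be chosen to match the body-part model the detector expects.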
Abstract: In this paper, we present an RFID-based human and Unmanned Aerial Vehicle (UAV) interaction system, termed RFHUI, to provide an intuitive and easy-to-operate method to navigate a UAV in an indoor environment. It relies on passive Radio-Frequency IDentification (RFID) technology to precisely track the pose of a handheld controller, and then transfers the pose information to navigate the UAV. A prototype of the handheld controller is created by attaching three or more Ultra High Frequency (UHF) RFID tags to a board. A Commercial Off-The-Shelf (COTS) RFID reader with multiple antennas is deployed to collect the observations of the tags. First, the precise positions of all the tags can be obtained by our proposed method, which leverages a Bayesian filter and Channel State Information (CSI) phase measurements collected from the RFID reader. Second, we introduce a Singular Value Decomposition (SVD) based approach to obtain a 6-DoF (Degrees of Freedom) pose of the controller from the estimated positions of the tags. Furthermore, the pose of the controller can be precisely tracked in real time while the user moves the controller. Finally, control commands are generated from the controller's pose and sent to the UAV for navigation. The performance of RFHUI is evaluated by several experiments. The results show that it provides precise poses with 0.045 m mean error in position and 2.5° mean error in orientation for the controller, and enables the controller to precisely and intuitively navigate the UAV in an indoor environment.
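An SVD-based 6-DoF pose recovery from tag positions, as described above, is typically an orthogonal Procrustes (Kabsch) fit. The sketch below is an illustrative implementation under assumed inputs (a hypothetical three-tag board layout and a simulated observation), not the RFHUI code itself.

```python
import numpy as np

def estimate_pose(tags_ref, tags_obs):
    """Rigid 6-DoF pose (R, t) mapping the reference tag layout onto the
    observed tag positions, via SVD (the Kabsch procedure)."""
    c_ref, c_obs = tags_ref.mean(axis=0), tags_obs.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (tags_ref - c_ref).T @ (tags_obs - c_obs)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_obs - R @ c_ref
    return R, t

# Hypothetical tag layout on the controller board (meters) and a
# simulated observation: a 30-degree yaw plus a translation.
tags_ref = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.15, 0.0]])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.5, 0.2])
tags_obs = tags_ref @ R_true.T + t_true

R_est, t_est = estimate_pose(tags_ref, tags_obs)
```

With noisy tag positions from the Bayesian filter, the same fit gives the least-squares rigid pose; the reflection guard matters because the three tags are coplanar.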
Abstract: Recent advancements in the Internet of Things (IoT) and cloud computing have paved the way for mobile healthcare (mHealthcare) services. A patient within the hospital is monitored by several devices. Moreover, upon leaving the hospital, the patient can be remotely monitored, whether directly using body-wearable sensors or using a smartphone equipped with sensors to monitor different user-health parameters. This raises potential challenges for intelligent monitoring of the patient's health. In this paper, an improved architecture for smart mHealthcare is proposed that is supported by HCI design principles. The HCI also provides support for User-Centric Design (UCD) of smart mHealthcare models. Furthermore, HCI together with the IoT's 5-layered architecture has the potential to improve User Experience (UX) in mHealthcare design and help save lives. The intelligent mHealthcare system is supported by the IoT sensing and communication layers, and healthcare providers are supported by the application layer for medical, behavioral, and health-related information. Healthcare providers and users are further supported by an intelligent layer performing critical situation assessment and multi-modal communication using an intelligent assistant. The HCI design focuses on ease of use, including the user experience and safety, alarms, and error-resistant displays for the end user, and improves user experience and satisfaction.
Funding: Financially supported by the National Natural Science Foundation of China (No. 21475091) and the Science and Technology Department of Sichuan Province (No. 2015GZ0301).
Abstract: G-quadruplex ligands have been accepted as potential therapeutic agents for anticancer treatment. Thioflavin T (ThT), a highly selective G-quadruplex ligand, can bind G-quadruplex with a fluorescent light-up signal change and high specificity against DNA duplex. However, opinions still differ on whether ThT induces/stabilizes G-quadruplex foldings/topologies in the human telomere sequence. Here, a sensitive single-molecule nanopore technology was utilized to analyze the interactions between human telomeric DNA (Tel DNA) and ThT. Both translocation time and current blockade were measured to investigate the translocation behaviors. Furthermore, the effects of metal ions (K+ and Na+) and pH on the translocation behaviors were studied. Based on this single-molecule-level analysis, the conclusions are: (1) in an electrolyte solution containing 50 mmol/L KCl and 450 mmol/L NaCl, ThT binds strongly with Tel DNA but barely changes the G-quadruplex form; (2) in the presence of a high concentration of K+, ThT binding induces the structural change of the hybrid G-quadruplex into an antiparallel topology with enhanced structural stability; (3) in either alkaline or acidic buffer, the G-quadruplex form changes to a certain degree. All of the above conclusions are helpful for a deeper understanding of the interaction behaviors between Tel DNA and ThT. This nanopore platform, investigating G-quadruplex ligands at the single-molecule level, has great potential for the design of new drugs and sensors.
Abstract: Using COM, VB, and VC, we developed a visual human-computer interaction system for mining association rules. It can mine association rules from databases created with Access and SQL Server, as well as from text-mode data. Through the interactive interface, the user participates in the data mining process, guiding the system toward mining satisfactory rules.
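Association rule mining of the kind described above is commonly done with the Apriori algorithm. Since the system's own code is not shown, here is a minimal, self-contained Python sketch (the transactions are invented illustrative data) that finds frequent itemsets and derives confidence-filtered rules:

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Minimal Apriori: enumerate frequent itemsets level by level,
    then emit rules LHS -> RHS whose confidence clears the threshold."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {}
    for size in range(1, len(items) + 1):
        found = False
        for cand in combinations(items, size):
            s = sum(set(cand) <= t for t in transactions) / n
            if s >= min_support:
                support[frozenset(cand)] = s
                found = True
        if not found:  # downward closure: no larger frequent set exists
            break
    rules = []
    for itemset, s in support.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in combinations(itemset, r):
                conf = s / support[frozenset(lhs)]
                if conf >= min_confidence:
                    rules.append((set(lhs), itemset - frozenset(lhs), conf))
    return rules

# Illustrative market-basket transactions.
transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"},
                {"bread", "eggs"}, {"milk", "bread"}]
rules = apriori_rules(transactions)
```

In an interactive system like the one described, the user would adjust `min_support` and `min_confidence` through the interface and re-run the miner until the rules are satisfactory.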
Abstract: This paper discusses some issues in human reliability modeling of time-dependent human behavior. Some results of the crew reliability experiment on the Tsinghua training simulator in China are given. Meanwhile, a calculation of human error probability during an anticipated transient without scram (ATWS), based on data drawn from the recent experiment, is offered.
Abstract: Despite the advanced security software and hardware mechanisms available, there have still been breaches in the defence systems of organizations and individuals. Social engineering mostly targets the weakest link in the security system, i.e. "Humans", to gain access to sensitive information by manipulating human psychology. Social engineering attacks are arduous to defend against, as such attacks are not easily detected by available security software or hardware. This article surveys recent studies on social engineering attacks, with discussion of the social engineering phases and a categorization of the various attacks into two groups. The main aim of this survey is to examine the various social engineering attacks on individuals; countermeasures against social engineering attacks are also discussed.
Funding: Supported by the National Natural Science Foundation of China (No. 61403410).
Abstract: Many human-machine collaborative support scheduling systems aid human decision making by providing several optimal scheduling algorithms that do not take the operator's attention into consideration. However, such systems should take advantage of the operator's attention to obtain better solutions. In this paper, we propose a human-machine collaborative support scheduling system for intelligence information from multi-UAVs based on an eye tracker. Firstly, a target recognition algorithm is applied to the images from the multiple unmanned aerial vehicles (multi-UAVs) to recognize the targets in the images. Then, the support system utilizes the eye tracker to obtain the eye-gaze points, which are used to identify the focused targets in the images. Finally, heuristic scheduling algorithms take both the attributes of the targets and the operator's attention into consideration to obtain the sequence of the images. The processing time of the images collected by the multi-UAVs is uncertain, but its upper and lower bounds are known in advance, so the processing time is modeled as an interval. The objective of the scheduling problem is to minimize the mean weighted completion time. This paper proposes new polynomial-time heuristic scheduling algorithms that first schedule the images containing the focused targets. We conducted scheduling experiments under six different distributions. The results indicate that the proposed algorithm is not sensitive to the different distributions of the processing time and has negligible computational time; the absolute error of the best-performing heuristic solution is only about 1%. We then incorporated the best-performing heuristic algorithm into the human-machine collaborative support system to verify the performance of the system.
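A simple heuristic in the spirit described above (focused images first, then a weighted-shortest-processing-time order) can be sketched as follows. This is an assumed sketch, not the paper's algorithm: the field names, the midpoint estimate for interval times, and the example queue are all illustrative.

```python
def schedule_images(images):
    """Order images: those containing operator-focused targets first, then
    by the WSPT rule (weight / expected time), using the midpoint of each
    interval processing time [lo, hi] as the expected time."""
    def key(img):
        expected = (img["lo"] + img["hi"]) / 2.0
        return (not img["focused"], -img["weight"] / expected)
    return sorted(images, key=key)

def mean_weighted_completion(order):
    """Mean weighted completion time under midpoint processing times."""
    t = total = wsum = 0.0
    for img in order:
        t += (img["lo"] + img["hi"]) / 2.0
        total += img["weight"] * t
        wsum += img["weight"]
    return total / wsum

# Hypothetical image queue: interval processing times, operator-focus
# flags from the eye tracker, and priority weights (illustrative values).
images = [
    {"id": 1, "focused": False, "weight": 1.0, "lo": 2.0, "hi": 4.0},
    {"id": 2, "focused": True,  "weight": 2.0, "lo": 3.0, "hi": 5.0},
    {"id": 3, "focused": True,  "weight": 1.0, "lo": 1.0, "hi": 1.0},
    {"id": 4, "focused": False, "weight": 3.0, "lo": 1.0, "hi": 3.0},
]
order = schedule_images(images)
```

On this toy queue, the heuristic schedules the focused images (3, then 2) ahead of the unfocused ones and lowers the mean weighted completion time relative to the arrival order.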
Funding: This work was financially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA23070201) and the Science-based Advisory Program of the Alliance of International Science Organizations.
Abstract: In the Anthropocene, health is necessary to achieve global sustainable development. This is a challenge because health issues are complex and span from humans to ecosystems and the environment through dynamic interactions across scales. We find that health issues have mainly been addressed by disciplinary endeavors, which unfortunately will not result in panoramic theories or effective solutions. We recommend focusing on the intricate interactions between humans, ecosystems, and the environment to develop common theoretical understandings and practical solutions for safeguarding planetary health, with human health as the key indicator and endpoint. To facilitate this paradigm shift, a holistic framework is formulated that incorporates disturbances from the inner Earth and our solar system, and accommodates interactions between humans, ecosystems, and the environment in a nested hierarchy. An integrative and transdisciplinary health science is advocated, along with holistic thinking, to resolve our current health challenges and to achieve the health-related sustainable development goals.
Abstract: With the mindset of constant improvement in efficiency and safety in the workspace and in training in Singapore, there is a need to explore various technologies and their capabilities to fulfil this need. The ability of Virtual Reality (VR) and Augmented Reality (AR) to create an immersive experience tying the virtual and physical environments together, coupled with information filtering capabilities, brings the possibility of introducing this technology into the training process and workspace. This paper surveys current research trends, findings, and limitations of VR and AR in their effect on human performance, specifically in Singapore, and our experience at the National University of Singapore (NUS).
Funding: Supported by the National Key Research and Development Project of China (2017YFC0804401) and the National Natural Science Foundation of China (U1909204).
Abstract: Background: Large-screen visualization systems have been widely utilized in many industries. Such systems can help illustrate the working states of different production systems. However, efficient interaction with such systems is still a focus of related research. Methods: In this paper, we propose a touchless interaction system based on an RGB-D camera, using a novel bone-length constraining method. The proposed method optimizes the joint data collected from RGB-D cameras, producing more accurate and more stable results on very noisy data. The user can customize the system by modifying its finite-state machine and reuse gestures in multiple scenarios, reducing the number of gestures that need to be designed and memorized. Results/Conclusions: We tested the system in two cases. In the first case, we illustrate the process by which we improved the gesture designs of our system and evaluated them through a user study. In the second case, we applied the system in the mining industry and conducted a user study in which users reported that the system is easy to use.
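One simple reading of a bone-length constraint is to keep each noisy bone's direction but rescale it to a calibrated length while walking the skeleton from the root. The sketch below is a guess at that idea, not the paper's method; the chain, parent array, and lengths are invented for illustration, and joints are assumed to be listed parent-before-child.

```python
import numpy as np

def constrain_bone_lengths(joints, parents, bone_lengths):
    """Re-project noisy joint positions so every bone has its calibrated
    length: keep each bone's observed direction, rescale its magnitude.
    Assumes each joint's index is greater than its parent's index."""
    fixed = joints.copy()
    for j, p in enumerate(parents):
        if p < 0:
            continue  # root joint stays where the camera put it
        d = joints[j] - fixed[p]
        norm = np.linalg.norm(d)
        if norm > 1e-9:
            fixed[j] = fixed[p] + d / norm * bone_lengths[j]
    return fixed

# Toy 3-joint chain (root -> elbow -> wrist) with calibrated 0.3 m bones
# and slightly noisy camera measurements.
parents = [-1, 0, 1]
bone_lengths = [0.0, 0.3, 0.3]
noisy = np.array([[0.0, 0.0, 0.0],
                  [0.35, 0.0, 0.0],
                  [0.35, 0.28, 0.0]])
fixed = constrain_bone_lengths(noisy, parents, bone_lengths)
```

After the pass, every bone in the corrected skeleton has exactly its calibrated length, which is what stabilizes jittery RGB-D joint estimates.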
Abstract: Human-Computer Interaction (HCI) is a sub-area within computer science focused on the study of communication between people (users) and computers, and on the evaluation, implementation, and design of user interfaces for computer systems. HCI has accomplished effective incorporation of human factors into the software engineering of computing systems through the methods and concepts of cognitive science. Usability is an aspect of HCI dedicated to guaranteeing that human-computer communication is, among other things, efficient, effective, and sustainable for the user. Meanwhile, the aim of human activity recognition (HAR) is to identify actions from a sequence of observations of the subjects' activities and the environmental conditions. Vision-based HAR underlies several applications involving healthcare, HCI, and video surveillance. This article develops a Fire Hawk Optimizer with Deep Learning Enabled Activity Recognition (FHODL-AR) for HCI-driven usability. In the presented FHODL-AR technique, the input images are investigated for the identification of different human activities. For feature extraction, a modified SqueezeNet model is introduced through the inclusion of a few bypass connections among the Fire modules of SqueezeNet. Besides, the FHO algorithm is utilized for hyperparameter optimization, which in turn boosts the classification performance. To detect and categorize different kinds of activities, a probabilistic neural network (PNN) classifier is applied. The experimental validation of the FHODL-AR technique is performed on benchmark datasets, and the outcomes show improvements of the FHODL-AR technique over other recent approaches.
Abstract: Mobile applications are being used in a great range of fields and application areas. As a result, many research fields have focused on the study and improvement of such devices. Current smartphones are the best example of the research on, and evolution of, these technologies. Moreover, software design and development is progressively more focused on the user, finding and developing new mobile interaction models. In order to do so, knowing what kinds of problems users could have is vital to improving a bad interaction design. Unfortunately, a good software quality evaluation takes more time than companies can invest. The contribution of this work is a new approach to quality-testing methodology focused on mobile interactions and their context of use, in which external capturing tools, such as cameras, are dispensed with and the evaluation environments are the same ones in which the user will use the application. With this approach, the interactions can be captured without changing the context, and consequently the data will be more accurate, enabling the evaluation of quality-in-use in real environments.
Abstract: Purpose: Patient-specific quality assurance (PSQA) requires manual operation of different workstations, which is time-consuming and error-prone. Therefore, developing automated solutions to improve efficiency and accuracy is a priority. The purpose of this study was to develop a general software interface with scripting on a human interactive device (HID) for improving the efficiency and accuracy of manual quality assurance (QA) procedures. Methods: As an initial application, we aimed to automate our PSQA workflow, which involves the Varian Eclipse treatment planning system, the Elekta MOSAIQ oncology information system, and the PTW Verisoft application. A general platform, the AutoFrame interface, with two embedded subsystems (AutoFlow and PyFlow) was developed with a scripting language for automating human operations of the aforementioned systems. The interface included three functional modules: a GUI module, a UDF script interpreter, and a TCP/IP communication module. All workstations in the PSQA process were connected, and most manual operations were automated by AutoFrame sequentially or in parallel. Results: More than 20 PSQA tasks were performed both manually and using the developed AutoFrame interface. On average, 175 (±12) manual operations of the PSQA procedure were eliminated and performed by the automated process. The time to complete a PSQA task was 8.23 (±0.78) minutes for the automated workflow, compared to 13.91 (±3.01) minutes for manual operation. Conclusion: We have developed the AutoFrame interface framework, which successfully automated our PSQA procedure and significantly reduced the time, human (control/clicking/typing) errors, and operators' stress. Future work will focus on improving the system's flexibility and stability and extending its operation to other QA procedures.