Frequency-modulated continuous-wave radar enables the non-contact and privacy-preserving recognition of human behavior. However, the accuracy of behavior recognition is directly influenced by the spatial relationship between human posture and the radar. To address the issue of low accuracy in behavior recognition when the human body is not directly facing the radar, a method combining the local outlier factor with Doppler information is proposed for the correction of multi-classifier recognition results. Initially, information such as the distance, velocity, and micro-Doppler spectrogram of the target is obtained using the fast Fourier transform and histogram of oriented gradients-support vector machine methods, followed by preliminary recognition. Subsequently, Platt scaling is employed to transform recognition results into confidence scores, and finally, the Doppler-local outlier factor method is utilized to calibrate the confidence scores, with the highest-confidence classifier result taken as the recognition outcome. Experimental results demonstrate that this approach achieves an average recognition accuracy of 96.23% for comprehensive human behavior recognition in various orientations.
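The Platt-scaling step described in the abstract above maps raw classifier scores to calibrated confidences with a fitted sigmoid. A minimal sketch, with the coefficients `a` and `b` assumed already fitted (in practice they come from maximum-likelihood fitting on held-out decision scores):

```python
import math

def platt_probability(score, a=-1.5, b=0.0):
    """Map a raw classifier decision score to a confidence in (0, 1)
    via Platt scaling: p = 1 / (1 + exp(a * score + b)).
    The coefficients a (negative) and b are normally fitted by
    maximum likelihood on held-out scores; fixed values are used here
    purely for illustration."""
    return 1.0 / (1.0 + math.exp(a * score + b))

# A higher decision score yields a higher calibrated confidence.
low, high = platt_probability(-2.0), platt_probability(2.0)
print(round(low, 3), round(high, 3))
```

With `b = 0` the mapping is symmetric about a score of zero, which is why the two confidences above sum to one.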
The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by Convolutional Long Short-Term Memory (ConvLSTM) to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
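The classification module described above (global average pooling, then a fully connected layer) can be sketched in a few lines. The shapes and weights below are hypothetical toy values, not the paper's:

```python
import math

def global_average_pool(feature_maps):
    """Collapse each HxW feature map (nested lists) to a single scalar,
    the standard step before a fully connected classification head."""
    return [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
            for fmap in feature_maps]

def dense_softmax(vector, weights, biases):
    """Fully connected layer followed by softmax over class logits."""
    logits = [sum(w * x for w, x in zip(col, vector)) + b
              for col, b in zip(weights, biases)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Two 2x2 feature maps -> pooled vector of length 2 -> 3-class scores.
pooled = global_average_pool([[[1.0, 3.0], [5.0, 7.0]],
                              [[2.0, 2.0], [2.0, 2.0]]])
probs = dense_softmax(pooled,
                      weights=[[0.1, 0.0], [0.0, 0.1], [0.05, 0.05]],
                      biases=[0.0, 0.0, 0.0])
print(pooled, [round(p, 3) for p in probs])
```

Dropout, used during training in the described module, is omitted here since it is inactive at inference time.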
Video action recognition (VAR) aims to analyze dynamic behaviors in videos and achieve semantic understanding. VAR faces challenges such as temporal dynamics, action-scene coupling, and the complexity of human interactions. Existing methods can be categorized into motion-level, event-level, and story-level ones based on spatiotemporal granularity. However, single-modal approaches struggle to capture complex behavioral semantics and human factors. Therefore, in recent years, vision-language models (VLMs) have been introduced into this field, providing new research perspectives for VAR. In this paper, we systematically review spatiotemporal hierarchical methods in VAR and explore how the introduction of large models has advanced the field. Additionally, we propose the concept of “Factor” to identify and integrate key information from both visual and textual modalities, enhancing multimodal alignment. We also summarize various multimodal alignment methods and provide in-depth analysis and insights into future research directions.
A common but flawed design in existing CNN architectures is the use of strided convolutions and/or pooling layers, which results in the loss of fine-grained feature information, especially for low-resolution images and small objects. In this paper, a new CNN building block named SPD-Conv was used, which completely eliminates stride and pooling operations and replaces them with a space-to-depth layer followed by a non-strided convolution. This new design has the advantage of downsampling feature maps while retaining discriminative feature information. It also represents a general, unified method that can be easily applied to any CNN architecture, replacing strided convolution and pooling layers in the same way.
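The space-to-depth rearrangement at the heart of SPD-Conv can be sketched directly: spatial blocks are moved into the channel dimension, so the map is downsampled without discarding any values. A pure-Python illustration (SPD-Conv itself follows this step with a non-strided convolution):

```python
def space_to_depth(image, scale=2):
    """Rearrange an H x W x C image (nested lists) into
    (H/scale) x (W/scale) x (C*scale*scale): each scale x scale
    spatial block becomes extra channels of one output pixel, so
    downsampling loses no information."""
    h, w = len(image), len(image[0])
    assert h % scale == 0 and w % scale == 0
    out = []
    for i in range(0, h, scale):
        row = []
        for j in range(0, w, scale):
            channels = []
            for di in range(scale):
                for dj in range(scale):
                    channels.extend(image[i + di][j + dj])
            row.append(channels)
        out.append(row)
    return out

# A 4x4 single-channel image becomes 2x2 with 4 channels.
img = [[[r * 4 + c] for c in range(4)] for r in range(4)]
out = space_to_depth(img)
print(len(out), len(out[0]), len(out[0][0]))
```

Compare this with a stride-2 convolution, which samples only one position per 2x2 block and drops the rest.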
The aerial deployment method enables Unmanned Aerial Vehicles (UAVs) to be directly positioned at the required altitude for their mission. This method typically employs folding technology to improve loading efficiency, with applications such as the gravity-only aerial deployment of high-aspect-ratio solar-powered UAVs and the aerial takeoff of fixed-wing drones in Mars research. However, the significant morphological changes during deployment are accompanied by strongly nonlinear dynamic aerodynamic forces, resulting in multiple degrees of freedom and unstable character. This hinders the description and analysis of unknown dynamic behaviors, further leading to difficulties in the design of deployment strategies and flight control. To address this issue, this paper proposes an analysis method for dynamic behaviors during aerial deployment based on the Variational Autoencoder (VAE). Focusing on the gravity-only deployment problem of high-aspect-ratio foldable-wing UAVs, the method encodes the multi-degree-of-freedom unstable motion signals into a low-dimensional feature space through a data-driven approach. By clustering in the feature space, this paper identifies and studies several dynamic behaviors during aerial deployment. The research presented in this paper offers a new method and perspective for feature extraction and analysis of complex and difficult-to-describe extreme flight dynamics, guiding research on the design and control strategies of aerially deployed drones.
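The clustering-in-latent-space step above can be illustrated with a miniature hand-rolled k-means over 2-D codes standing in for VAE latent vectors; the paper's actual encoder and clustering algorithm are not specified here, so this is only a sketch of the idea:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means over low-dimensional codes (here standing in
    for VAE latent vectors). Returns final centroids and labels."""
    centroids = points[:k]  # naive init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each code to its nearest centroid (squared distance).
        for idx, p in enumerate(points):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c))
                     for c in centroids]
            labels[idx] = dists.index(min(dists))
        # Move each centroid to the mean of its members.
        for ci in range(k):
            members = [p for p, l in zip(points, labels) if l == ci]
            if members:
                centroids[ci] = [sum(col) / len(members)
                                 for col in zip(*members)]
    return centroids, labels

# Two well-separated groups of latent codes cluster cleanly,
# mimicking two distinct dynamic behaviors during deployment.
codes = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
         [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]]
centroids, labels = kmeans(codes, k=2)
print(labels)
```

Each resulting cluster would correspond to one recurring dynamic behavior observed during deployment.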
Objective: This study aimed to explore undergraduates' knowledge, attitude, and practice/behavior regarding human papillomavirus (HPV) vaccination, as well as the essential factors influencing vaccination decision-making. Methods: Through cluster and convenience sampling, 2000 undergraduates from the Nursing and Language departments of a university in Shanghai were sent a self-designed questionnaire. Chi-square tests, independent-sample t-tests/ANOVA, and multiple linear regression were used to investigate participants' knowledge of and attitude toward HPV vaccination, as well as the factors that predicted potential action to receive and promote HPV vaccination in the future. Results: The mean HPV knowledge score was 5.027 out of 10. Health science students showed a significantly higher mean knowledge score than non-health science students (P<0.001). There was a statistically significant difference in HPV vaccination attitude among undergraduates in different grades (P<0.05). Awareness of cervical cancer and worry about the risk of cervical cancer were significant predictors of willingness to receive and promote HPV vaccination in the future. Conclusions: It takes time for a new health product to become known, understood, accepted, and adopted. Providing education and sharing information are expected to accelerate this process.
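The chi-square test used in the study above compares observed counts against the counts expected under independence. A minimal sketch of the statistic, with a hypothetical 2x2 table (department vs. HPV awareness; the numbers are invented, not the study's data):

```python
def chi_square(table):
    """Pearson chi-square statistic for an R x C contingency table:
    chi2 = sum (O - E)^2 / E, with E from the row/column margins."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical table: rows = health / non-health science students,
# columns = aware / unaware of HPV vaccination.
stat = chi_square([[60, 40], [30, 70]])
print(round(stat, 3))
```

The statistic would then be compared against a chi-square distribution with (R-1)(C-1) degrees of freedom to obtain the P value.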
Real-time surveillance depends on recognizing the variety of actions performed by humans. Human Action Recognition (HAR) is a technique that recognizes human actions from a video stream. The range of variation in human actions makes it difficult to achieve considerable accuracy. This paper presents a novel deep neural network architecture called Attention RB-Net for HAR using video frames. The input is provided to the model in the form of video frames. The proposed deep architecture is based on a unique structuring of residual blocks with several filter sizes. Features are extracted from each frame via several operations with specific parameters defined in the presented novel Attention-based Residual Bottleneck (Attention-RB) DCNN architecture. A fully connected layer receives an attention-based feature matrix, and final classification is performed. Several hyperparameters of the proposed model are initialized using Bayesian Optimization (BO) and later utilized in the trained model for testing. In testing, features are extracted from the self-attention layer and passed to neural network classifiers for the final action classification. Two highly cited datasets, HMDB51 and UCF101, were used to validate the proposed architecture, obtaining average accuracies of 87.70% and 97.30%, respectively. The deep convolutional neural network (DCNN) architecture is compared with state-of-the-art (SOTA) methods, including pre-trained models, inside blocks, and recently published techniques, and performs better.
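The self-attention layer mentioned above follows the standard scaled dot-product form. A minimal single-head sketch with identity projections (Q = K = V = input), which is a simplification of what a trained layer would use:

```python
import math

def self_attention(x):
    """Single-head scaled dot-product self-attention with identity
    projections (Q = K = V = x): softmax(Q K^T / sqrt(d)) V.
    A trained layer would first apply learned linear projections."""
    d = len(x[0])
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
               for k in x] for q in x]
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        weights.append([e / total for e in exps])
    out = [[sum(w * v[j] for w, v in zip(row, x)) for j in range(d)]
           for row in weights]
    return out, weights

# Three feature vectors of dimension 2.
out, attn = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print([[round(v, 3) for v in row] for row in attn])
```

Each row of the attention matrix is a probability distribution over the inputs, which is what lets the layer weight contextually relevant frames.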
Activity recognition is a challenging topic in the field of computer vision with various applications, including surveillance systems, industrial automation, and human-computer interaction. Today, the demand for automation has greatly increased across industries worldwide. Real-time detection requires edge devices with limited computational time. This study proposes a novel hybrid deep learning system for human activity recognition (HAR), aiming to enhance recognition accuracy and reduce computational time. The proposed system combines a pretrained image classification model with a sequence analysis model. First, the dataset was divided into a training set (70%), validation set (10%), and test set (20%). Second, all the videos were converted into frames, and deep features were extracted from each frame using convolutional neural networks (CNNs) with a vision transformer. Following that, bidirectional long short-term memory (BiLSTM)- and temporal convolutional network (TCN)-based models were trained using the training set, and their performance was evaluated using the validation and test sets. Four benchmark datasets (UCF11, UCF50, UCF101, and JHMDB) were used to evaluate the performance of the proposed HAR-based system. The experimental results showed that the combination of ConvNeXt and the TCN-based model achieved recognition accuracies of 97.73% for UCF11, 98.81% for UCF50, 98.46% for UCF101, and 83.38% for JHMDB. This represents improvements in recognition accuracy of 4%, 2.67%, 3.67%, and 7.08% for the UCF11, UCF50, UCF101, and JHMDB datasets, respectively, over existing models. Moreover, the proposed HAR-based system obtained superior recognition accuracy, shorter computational times, and minimal memory usage compared to existing models.
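The TCN mentioned above is built from causal dilated 1-D convolutions: each output depends only on current and past inputs, spaced by the dilation factor. A minimal sketch of that building block (toy kernel, not trained weights):

```python
def causal_conv1d(x, kernel, dilation=1):
    """Causal dilated 1-D convolution, the core TCN operation:
    output[t] depends only on x[t], x[t-d], x[t-2d], ... with
    implicit zero padding on the left."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            idx = t - i * dilation
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out

# Kernel [1, -1] with dilation 1 computes a first difference,
# a crude "motion" signal over a per-frame feature sequence.
y1 = causal_conv1d([1.0, 2.0, 4.0, 7.0], [1.0, -1.0])
y2 = causal_conv1d([1.0, 2.0, 4.0, 7.0], [1.0, -1.0], dilation=2)
print(y1, y2)
```

Stacking such layers with growing dilation (1, 2, 4, ...) gives a receptive field that covers the whole clip without recurrence, which is what makes TCNs fast on edge devices.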
Human activity recognition is a significant area of research in artificial intelligence for surveillance, healthcare, sports, and human-computer interaction applications. The article benchmarks the performance of a You Only Look Once version 11-based (YOLOv11-based) architecture for multi-class human activity recognition. The dataset consists of 14,186 images across 19 activity classes, from dynamic activities such as running and swimming to static activities such as sitting and sleeping. Preprocessing included resizing all images to 512×512 pixels, annotating them in YOLO's bounding box format, and applying data augmentation methods such as flipping, rotation, and cropping to enhance model generalization. The proposed model was trained for 100 epochs with adaptive learning rate methods and hyperparameter optimization for performance improvement, achieving a mAP@0.5 of 74.93% and a mAP@0.5-0.95 of 64.11%, outperforming previous versions of YOLO (v10, v9, and v8) and general-purpose architectures such as ResNet50 and EfficientNet. It exhibited improved precision and recall for all activity classes, with high precision values of 0.76 for running, 0.79 for swimming, 0.80 for sitting, and 0.81 for sleeping, and was tested for real-time deployment with an inference time of 8.9 ms per image, being computationally light. The proposed YOLOv11's improvements are attributed to architectural advancements such as a more complex feature extraction process, better attention modules, and an anchor-free detection mechanism. While YOLOv10 was extremely stable in static activity recognition, YOLOv9 performed well in dynamic environments but suffered from overfitting, and
YOLOv8, while a decent baseline, failed to differentiate between overlapping static activities. The experimental results determine the proposed YOLOv11 to be the most appropriate model, providing an ideal balance of accuracy, computational efficiency, and robustness for real-world deployment. Nevertheless, certain issues remain to be addressed, particularly in discriminating between visually similar activities and in the use of publicly available datasets. Future research will entail the inclusion of 3D data and multimodal sensor inputs, such as depth and motion information, to enhance recognition accuracy and generalizability in challenging real-world environments.
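The mAP@0.5 and mAP@0.5-0.95 metrics reported above are both built on intersection-over-union between predicted and ground-truth boxes. The overlap computation itself is small enough to sketch directly:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). A detection counts as correct at mAP@0.5 when
    its IoU with the ground-truth box is at least 0.5."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Overlap area 1, union area 7.
print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))
```

mAP@0.5-0.95 simply averages the resulting average precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two numbers.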
Background: Human behaviors and tumors go hand in hand. The wave of globalization has brought about a global homogenization of human behaviors, which further triggers a potential global convergence of the human behavior-related cancer burden (HBRCB). Methods: This study systematically evaluated the global, regional, and national metrics of HBRCBs over the last 30 years using data from the Global Burden of Diseases (GBD) 2019 results and the WHO Global Health Observatory (GHO) data repository. Results: The results showed the global remission and convergence of HBRCB over the last three decades and in the foreseeable future (2020–2044). Overall, HBRCBs are decreasing with the global emphasis on positive dietary habits, safe sex, withdrawal from substance addiction, and active physical exercise habits. Globally, from 1990 to 2019, as the social development index (SDI) level rose from 0.511 to 0.651, HBRCBs decreased from 1507.908 to 1145.344 in age-standardized disability-adjusted life years (ASDALY) and from 61.467 to 49.449 in age-standardized death rates (ASDR) per 100,000 population, with changes of −24.04% and −19.55%, respectively. Meanwhile, the variance in HBRCBs among countries and territories generally showed a decreasing or flat trend. The variance of HBRCBs among 204 countries and territories in 2019–2044 decreased from 1495.210 to 449.202 in males and from 214.640 to 78.848 in females for ASDR due to all behavior risks, and from 911,211.676 to 317,233.590 in males and from 146,171.660 to 62,926.660 in females for ASDALY. The global HBRCB was becoming more uniform due to the globalization of human behaviors. Conclusions: This study revealed the significance of addressing HBRCBs as a uniform and continuous issue in future global health promotion. It also demonstrated the
potential existence of a chain effect in global health, where globalization leads to the homogenization of human behavior, which in turn results in HBRCB convergence. Properly measuring the commonalities and individualities among different regions, and finding a balance when designing and evaluating HBRCB-related global policies under the global convergence trend of HBRCBs, will be major concerns in the future.
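The ASDR and ASDALY metrics in the study above are age-standardized rates: age-specific rates weighted by a standard population's age structure, which makes burdens comparable across countries with different demographics. A sketch with invented illustrative numbers (not GBD data):

```python
def age_standardized_rate(rates, standard_weights):
    """Age-standardized rate per 100,000: age-specific rates weighted
    by the age structure of a standard population, so comparisons are
    not distorted by one population simply being older than another."""
    total_weight = sum(standard_weights)
    return sum(r * w for r, w in zip(rates, standard_weights)) / total_weight

# Hypothetical age-specific death rates (per 100,000) for three age
# bands, weighted by a hypothetical standard age structure.
asr = age_standardized_rate([5.0, 20.0, 120.0], [0.4, 0.4, 0.2])
print(asr)
```

Two countries with identical age-specific rates get identical ASRs regardless of how old their actual populations are, which is exactly the property needed to compare HBRCBs across 204 countries.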
Understanding the impact of high-velocity microparticles on human skin tissue is important for the administration of drugs during transdermal drug delivery. This paper numerically investigates the dynamic behavior of human skin tissue under micro-particle impact in transdermal drug delivery. The numerical model was developed based on a coupled smoothed particle hydrodynamics (SPH) and FEM method via the commercial FE software RADIOSS. An analytical analysis was conducted applying the Poncelet model and was used as validation data. A hyperelastic one-term Ogden model with one pair of material parameters (μ, α) was implemented for the skin tissue. Sensitivity studies reveal that the effect of parameter α on the penetration process is much more significant than that of μ. Numerical results correlate well with the analytical curves for various particle diameters and impact velocities, demonstrating the model's capability of predicting the penetration process of micro-particle impacts into skin tissue. This work can be extended to guide the design of transdermal drug delivery equipment.
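The Poncelet model used above as validation resists penetration with a static plus velocity-squared term. Assuming the classical form F = A(a + b v²), integrating m v dv/dx = −F gives a closed-form depth, which can be cross-checked numerically. All material values below are hypothetical placeholders, not the paper's parameters:

```python
import math

def poncelet_depth(m, area, a, b, v0):
    """Closed-form penetration depth for the Poncelet resistance law
    F = area * (a + b * v^2): integrating m v dv/dx = -F yields
    x_max = m / (2 * area * b) * ln(1 + (b / a) * v0**2)."""
    return m / (2.0 * area * b) * math.log(1.0 + b / a * v0 ** 2)

def poncelet_depth_numeric(m, area, a, b, v0, dv=0.01):
    """Numerical cross-check: sum dx = m v dv / (area * (a + b v^2))
    as the velocity steps down from v0 to zero."""
    x, v = 0.0, v0
    while v > 0.0:
        step = min(dv, v)
        x += m * v / (area * (a + b * v * v)) * step
        v -= step
    return x

# Hypothetical values: 1e-9 kg particle, 1e-8 m^2 frontal area,
# static term 3e6 Pa, inertial coefficient 1e3, impact at 300 m/s.
closed = poncelet_depth(1e-9, 1e-8, 3e6, 1e3, 300.0)
numeric = poncelet_depth_numeric(1e-9, 1e-8, 3e6, 1e3, 300.0)
print(round(closed * 1e6, 2), "vs", round(numeric * 1e6, 2), "micrometres")
```

Agreement between the closed form and the step-by-step integration is the same kind of check the paper performs between its analytical curves and SPH-FEM results.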
This research investigates the application of multisource data fusion using a Multi-Layer Perceptron (MLP) for Human Activity Recognition (HAR). The study integrates four distinct open-source datasets—WISDM, DaLiAc, MotionSense, and PAMAP2—to develop a generalized MLP model for classifying six human activities. Performance analysis of the fused model for each dataset reveals accuracy rates of 95.83% for WISDM, 97% for DaLiAc, 94.65% for MotionSense, and 98.54% for PAMAP2. A comparative evaluation was conducted between the fused MLP model and the individual dataset models, with the latter tested on separate validation sets. The results indicate that the MLP model trained on the fused dataset exhibits superior performance relative to the models trained on individual datasets. This finding suggests that multisource data fusion significantly enhances the generalization and accuracy of HAR systems. The improved performance underscores the potential of integrating diverse data sources to create more robust and comprehensive models for activity recognition.
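The MLP at the core of the study above is a stack of fully connected layers. A minimal forward pass with one ReLU hidden layer and a softmax output, using toy weights rather than anything trained on the fused datasets:

```python
import math

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer MLP: ReLU hidden layer, softmax output --
    the kind of classifier trained on fused sensor features."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    logits = [sum(w * h for w, h in zip(row, hidden)) + b
              for row, b in zip(w_out, b_out)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy weights: 3 input features, 2 hidden units, 2 activity classes.
probs = mlp_forward([0.5, -1.0, 2.0],
                    w_hidden=[[1.0, 0.0, 0.5], [0.0, 1.0, 0.0]],
                    b_hidden=[0.0, 0.0],
                    w_out=[[1.0, 0.0], [0.0, 1.0]],
                    b_out=[0.0, 0.0])
print([round(p, 3) for p in probs])
```

Data fusion in this setting amounts to concatenating examples from all four datasets (after aligning feature and label spaces) before training this network.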
Human Activity Recognition (HAR) represents a rapidly advancing research domain, propelled by continuous developments in sensor technologies and the Internet of Things (IoT). Deep learning has become the dominant paradigm in sensor-based HAR systems, offering significant advantages over traditional machine learning methods by eliminating manual feature extraction, enhancing recognition accuracy for complex activities, and enabling the exploitation of unlabeled data through generative models. This paper provides a comprehensive review of recent advancements and emerging trends in deep learning models developed for sensor-based HAR systems. We begin with an overview of fundamental HAR concepts in sensor-driven contexts, followed by a systematic categorization and summary of existing research. Our survey encompasses a wide range of deep learning approaches, including Multi-Layer Perceptrons (MLP), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory networks (LSTM), Gated Recurrent Units (GRU), Transformers, Deep Belief Networks (DBN), and hybrid architectures. A comparative evaluation of these models is provided, highlighting their performance, architectural complexity, and contributions to the field. Beyond centralized deep learning models, we examine the role of Federated Learning (FL) in HAR, highlighting current applications and research directions. Finally, we discuss the growing importance of Explainable Artificial Intelligence (XAI) in sensor-based HAR, reviewing recent studies that integrate interpretability methods to enhance transparency and trustworthiness in deep learning-based HAR systems.
In recent years, skeleton-based action recognition has made great achievements in computer vision. A graph convolutional network (GCN) is effective for action recognition, modelling the human skeleton as a spatio-temporal graph. Most GCNs define the graph topology by the physical relations of the human joints. However, this predefined graph ignores the spatial relationship between non-adjacent joint pairs in special actions and the behavior dependence between joint pairs, resulting in a low recognition rate for specific actions with implicit correlation between joint pairs. In addition, existing methods ignore the trend correlation between adjacent frames within an action and context clues, leading to erroneous recognition of actions with similar poses. Therefore, this study proposes a learnable GCN based on behavior dependence, which considers implicit joint correlation by constructing a dynamic learnable graph with extraction of the specific behavior dependence of joint pairs. Using the weight relationship between joint pairs, an adaptive model is constructed. The study also designs a self-attention module to obtain the inter-frame topological relationship for exploring the context of actions. Combining the shared topology and the multi-head self-attention map, the module obtains the context-based clue topology to update the dynamic graph convolution, achieving accurate recognition of different actions with similar poses. Detailed experiments on public datasets demonstrate that the proposed method achieves better results and realizes higher-quality representation of actions under various evaluation protocols compared to state-of-the-art methods.
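A single graph-convolution step over a skeleton graph, as used by the GCNs discussed above, can be sketched with a mean-aggregation normalization (H' = D⁻¹(A + I) H W). This is the generic building block, not the paper's learnable-graph variant:

```python
def gcn_layer(adj, features, weight):
    """One graph-convolution step over a joint graph:
    H' = D^-1 (A + I) H W -- each joint averages itself and its
    neighbours, then a shared linear map is applied."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]                     # add self-loops
    deg = [sum(row) for row in a_hat]               # node degrees
    agg = [[sum(a_hat[i][k] * features[k][j] for k in range(n)) / deg[i]
            for j in range(len(features[0]))] for i in range(n)]
    return [[sum(agg[i][k] * weight[k][j] for k in range(len(weight)))
             for j in range(len(weight[0]))] for i in range(n)]

# Three joints in a chain (0-1-2), scalar features, identity weight:
# each joint's value becomes the mean over itself and its neighbours.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
out = gcn_layer(adj, [[0.0], [3.0], [6.0]], [[1.0]])
print(out)
```

The paper's contribution is precisely to make `adj` learnable and behavior-dependent rather than fixed to the physical skeleton, so implicit joint pairs can also exchange information.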
Artificial intelligence (AI) technology has become integral in the realm of medicine and healthcare, particularly in human activity recognition (HAR) applications such as fitness and rehabilitation tracking. This study introduces a robust coupling analysis framework that integrates four AI-enabled models, combining both machine learning (ML) and deep learning (DL) approaches to evaluate their effectiveness in HAR. The analytical dataset comprises 561 features sourced from the UCI-HAR database, forming the foundation for training the models. Additionally, the MHEALTH database is employed to replicate the modeling process for comparative purposes, while inclusion of the WISDM database, renowned for its challenging features, supports the framework's resilience and adaptability. The ML-based models employ adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM), and random forest (RF) methodologies for data training. In contrast, a DL-based model utilizes a one-dimensional convolutional neural network (1dCNN) to automate feature extraction. Furthermore, the recursive feature elimination (RFE) algorithm, which drives an ML-based estimator to eliminate low-participation features, helps identify the optimal features for enhancing model performance. The best accuracies of the ANFIS, SVM, RF, and 1dCNN models with a meticulous feature-selection process reach around 90%, 96%, 91%, and 93%, respectively. Comparative analysis using the MHEALTH dataset showcases the 1dCNN model's remarkable perfect accuracy (100%), while the RF, SVM, and ANFIS models equipped with selected features achieve accuracies of 99.8%, 99.7%, and 96.5%, respectively. Finally, when applied to the WISDM dataset, the DL-based and ML-based models attain accuracies of 91.4% and 87.3%, respectively, aligning with prior research
findings. In conclusion, the proposed framework yields HAR models with commendable performance metrics, demonstrating its suitability for integration into healthcare service systems through AI-driven applications.
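The RFE step described above can be illustrated in miniature: repeatedly discard the feature the estimator scores as least important. In real RFE the estimator is refitted and re-scored after every elimination; the fixed scores and feature names below are invented for illustration:

```python
def recursive_feature_elimination(features, importances, keep):
    """RFE in miniature: repeatedly drop the feature whose importance
    score is smallest until `keep` features remain. A real RFE loop
    would refit the estimator and recompute scores at each step."""
    remaining = list(range(len(features)))
    while len(remaining) > keep:
        worst = min(remaining, key=lambda i: abs(importances[i]))
        remaining.remove(worst)
    return [features[i] for i in remaining]

# Hypothetical sensor features and importance scores.
names = ["acc_mean", "acc_std", "gyro_mean", "gyro_std", "mag_mean"]
scores = [0.9, 0.05, 0.6, 0.4, 0.02]
selected = recursive_feature_elimination(names, scores, keep=3)
print(selected)
```

Shrinking the 561-feature UCI-HAR input this way is what lets the ML models train faster without losing the features that carry most of the signal.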
Image semantic segmentation is an essential technique for studying human behavior through image data. This paper proposes an image semantic segmentation method for human behavior research. Firstly, an end-to-end convolutional neural network architecture is proposed, which consists of a depth-separable jump-connected fully convolutional network and a conditional random field network; jump-connected convolution is then used to classify each pixel in the image, yielding an image semantic segmentation method based on a convolutional neural network. A conditional random field network is then used to improve the segmentation of images of human behavior, and a linear and nonlinear modeling method based on conditional-random-field image semantic segmentation is proposed. Finally, using the proposed segmentation network, the input entrepreneurial image data are semantically segmented to obtain the contour features of the person, and the method is also applied to the segmentation of images in the medical field. The experimental results show that the image semantic segmentation method is effective. It is a new way to use image data to study human behavior and can be extended to other research areas.
In the process of human behavior recognition, the traditional dense optical flow method involves too many pixels and too much overhead, which limits the running speed. This paper proposes a method combining YOLOv3 (You Only Look Once v3) with a local optical flow method. Based on the dense optical flow method, the optical flow modulus of the area where the human target is detected is calculated, reducing the amount of computation and saving time. A threshold value is then set to complete the human behavior identification. Through algorithm design and experimental verification, the walking, running, and falling states of the human body in real-life indoor sports video were identified. Experimental results show that this algorithm is more advantageous for jogging behavior recognition.
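The core idea above, computing the flow modulus only inside the detected person's bounding box and thresholding it, can be sketched as follows. The threshold values and the three-way labels are illustrative assumptions, not the paper's calibrated parameters:

```python
import math

def mean_flow_magnitude(flow_u, flow_v, box):
    """Mean optical-flow modulus inside a detected bounding box
    (x1, y1, x2, y2), so only the person's region -- not the whole
    frame -- is examined."""
    x1, y1, x2, y2 = box
    total, count = 0.0, 0
    for y in range(y1, y2):
        for x in range(x1, x2):
            total += math.hypot(flow_u[y][x], flow_v[y][x])
            count += 1
    return total / count

def classify(magnitude, walk_thr=1.0, run_thr=3.0):
    """Threshold the mean modulus into coarse activity labels
    (hypothetical thresholds for illustration)."""
    if magnitude < walk_thr:
        return "standing"
    return "walking" if magnitude < run_thr else "running"

# 4x4 toy flow field; the detector's box covers the right half.
u = [[0.0, 0.0, 4.0, 4.0]] * 4
v = [[0.0, 0.0, 0.0, 3.0]] * 4
mag = mean_flow_magnitude(u, v, (2, 0, 4, 4))
print(round(mag, 2), classify(mag))
```

Restricting the flow computation to the YOLOv3 box is precisely what removes the per-pixel overhead of a full-frame dense flow field.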
Multimodal action recognition methods have achieved high success using pose and RGB modalities. However, skeleton sequences lack appearance depiction, and RGB images suffer from irrelevant noise due to modality limitations. To address this, the authors introduce the human parsing feature map as a novel modality, since it can selectively retain effective semantic features of the body parts while filtering out most irrelevant noise. The authors propose a new dual-branch framework called the ensemble human parsing and pose network (EPP-Net), which is the first to leverage both skeleton and human parsing modalities for action recognition. The first, human pose branch feeds robust skeletons into a graph convolutional network to model pose features, while the second, human parsing branch leverages depictive parsing feature maps to model parsing features via convolutional backbones. The two high-level features are effectively combined through a late fusion strategy for better action recognition. Extensive experiments on the NTU RGB+D and NTU RGB+D 120 benchmarks consistently verify the effectiveness of our proposed EPP-Net, which outperforms existing action recognition methods. Our code is available at https://github.com/liujf69/EPP-Net-Action.
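A common form of the late fusion strategy mentioned above is a weighted sum of the two branches' per-class scores followed by argmax. A minimal sketch; the weight `alpha` and the score values are illustrative, not EPP-Net's actual fusion parameters:

```python
def late_fusion(pose_scores, parsing_scores, alpha=0.6):
    """Late fusion of two branch outputs: a weighted sum of per-class
    scores, then argmax. alpha balances the pose branch against the
    parsing branch and would be tuned on validation data."""
    fused = [alpha * p + (1.0 - alpha) * q
             for p, q in zip(pose_scores, parsing_scores)]
    return fused, fused.index(max(fused))

# Pose branch prefers class 0; parsing branch prefers class 1.
fused, label = late_fusion([0.7, 0.2, 0.1], [0.3, 0.6, 0.1])
print([round(f, 2) for f in fused], "->", label)
```

Because fusion happens on final scores rather than intermediate features, each branch can be trained independently and swapped without retraining the other.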
Safety in production is of great significance to the development of enterprises and society. Accidents often cause great losses because of the particular environment of electric power operations. Therefore, it is important to improve safety supervision and protection in the electric power environment. In this paper, we simulate an actual electric power operation scenario with monitoring equipment and propose a real-time detection method for illegal actions based on human body key points, to ensure safe behavior in real time. In this method, the human body key points in video frames are first extracted by a high-resolution network and then classified in real time by a spatial-temporal graph convolutional network. Experimental results show that this method can effectively detect illegal actions in the simulated scene.
RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNNs, or LSTMs to extract features effectively. Still, they have shortcomings: 1) requiring complex hand-crafted data cleaning processes, and 2) only addressing single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer called TransTM. This model leverages the Transformer's powerful data fitting capabilities to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features, recognizing single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method has greater data fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human behavior-based classification tasks. Experimental results on actual RFID datasets show that this model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
Funding: the National Key Research and Development Program of China (No. 2022YFC3601400).
Abstract: Frequency-modulated continuous-wave radar enables the non-contact and privacy-preserving recognition of human behavior. However, the accuracy of behavior recognition is directly influenced by the spatial relationship between human posture and the radar. To address the issue of low accuracy in behavior recognition when the human body is not directly facing the radar, a method combining the local outlier factor with Doppler information is proposed for the correction of multi-classifier recognition results. Initially, information such as the distance, velocity, and micro-Doppler spectrogram of the target is obtained using the fast Fourier transform and histogram of oriented gradients-support vector machine methods, followed by preliminary recognition. Subsequently, Platt scaling is employed to transform recognition results into confidence scores, and finally, the Doppler-local outlier factor method is utilized to calibrate the confidence scores, with the highest-confidence classifier result taken as the recognition outcome. Experimental results demonstrate that this approach achieves an average recognition accuracy of 96.23% for comprehensive human behavior recognition in various orientations.
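Platt scaling fits a sigmoid that maps raw classifier decision scores to calibrated confidence values. As a rough sketch of that step only (the Doppler-local-outlier-factor calibration is not reproduced here), a minimal NumPy fit by gradient descent on the logistic loss might look as follows; the scores, labels, and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def platt_scale(scores, labels, lr=0.05, epochs=3000):
    """Fit P(y=1|f) = 1 / (1 + exp(A*f + B)) to decision scores f.

    Plain gradient descent on the logistic loss; returns fitted (A, B)."""
    A, B = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        g = p - labels                 # dLoss/dz with logit z = -(A*f + B)
        A += lr * np.mean(g * scores)  # dz/dA = -f, so dLoss/dA = -g*f
        B += lr * np.mean(g)           # dz/dB = -1, so dLoss/dB = -g
    return A, B

# Illustrative SVM-style scores: positives near +2, negatives near -2.
scores = np.array([2.1, 1.8, 2.4, -1.9, -2.2, -1.7])
labels = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
A, B = platt_scale(scores, labels)
confidence = 1.0 / (1.0 + np.exp(A * scores + B))
```

For well-separated scores the fitted sigmoid pushes positive-class confidences toward 1 and negative-class confidences toward 0, which is what makes the scores comparable across classifiers before any outlier-based correction.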
Funding: funded by the ICT Division of the Ministry of Posts, Telecommunications, and Information Technology of Bangladesh under Grant Number 56.00.0000.052.33.005.21-7 (Tracking No. 22FS15306), with support from the University of Rajshahi.
Abstract: The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by Convolutional Long Short-Term Memory (ConvLSTM) to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
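The classification module described above (global average pooling, a fully connected layer, and dropout) can be sketched in a few lines of NumPy to show the data flow; the tensor shapes, class count, and softmax output below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def classification_head(features, W, b, drop_p=0.5, train=False):
    """Global average pooling over a (T, H, W, C) feature volume,
    optional inverted dropout, then a fully connected softmax layer."""
    pooled = features.mean(axis=(0, 1, 2))          # -> (C,)
    if train:
        mask = rng.random(pooled.shape) >= drop_p   # inverted dropout
        pooled = pooled * mask / (1.0 - drop_p)
    logits = pooled @ W + b                          # fully connected
    e = np.exp(logits - logits.max())                # stable softmax
    return e / e.sum()                               # class probabilities

features = rng.normal(size=(8, 7, 7, 16))  # e.g. a ConvLSTM output volume
W = rng.normal(scale=0.1, size=(16, 5))    # 5 illustrative activity classes
b = np.zeros(5)
probs = classification_head(features, W, b)
```

At inference time dropout is disabled (`train=False`), so the head reduces to pooling plus one affine map and a softmax.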
Funding: supported by the Zhejiang Provincial Natural Science Foundation of China (No. LQ23F030001), the National Natural Science Foundation of China (No. 62406280), the Autism Research Special Fund of Zhejiang Foundation for Disabled Persons (No. 2023008), the Liaoning Province Higher Education Innovative Talents Program Support Project (No. LR2019058), the Liaoning Province Joint Open Fund for Key Scientific and Technological Innovation Bases (No. 2021-KF-12-05), the Central Guidance on Local Science and Technology Development Fund of Liaoning Province (No. 2023JH6/100100066), and the Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, China, in part by the Open Research Fund of the State Key Laboratory of Cognitive Neuroscience and Learning.
Abstract: Video action recognition (VAR) aims to analyze dynamic behaviors in videos and achieve semantic understanding. VAR faces challenges such as temporal dynamics, action-scene coupling, and the complexity of human interactions. Existing methods can be categorized into motion-level, event-level, and story-level ones based on spatiotemporal granularity. However, single-modal approaches struggle to capture complex behavioral semantics and human factors. Therefore, in recent years, vision-language models (VLMs) have been introduced into this field, providing new research perspectives for VAR. In this paper, we systematically review spatiotemporal hierarchical methods in VAR and explore how the introduction of large models has advanced the field. Additionally, we propose the concept of "Factor" to identify and integrate key information from both visual and textual modalities, enhancing multimodal alignment. We also summarize various multimodal alignment methods and provide in-depth analysis and insights into future research directions.
Abstract: A common but flawed design in existing CNN architectures is the use of strided convolutions and/or pooling layers, which results in the loss of fine-grained feature information, especially for low-resolution images and small objects. In this paper, a new CNN building block named SPD-Conv was used, which completely eliminates stride and pooling operations and replaces them with a space-to-depth convolution and a non-strided convolution. This new design has the advantage of downsampling feature maps while retaining discriminant feature information. It also represents a general, unified method that can easily be applied to any CNN architecture, replacing strided convolution and pooling layers in the same way.
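The space-to-depth step rearranges each spatial block into extra channels, so resolution drops without discarding any values. A minimal NumPy sketch (channel-last layout assumed; the real SPD-Conv block then applies a non-strided convolution to the result):

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange an (H, W, C) map into (H/block, W/block, C*block*block).

    Every input value survives, so this downsampling loses no information,
    unlike a strided convolution or pooling layer."""
    h, w, c = x.shape
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)      # gather each block's pixels together
    return x.reshape(h // block, w // block, c * block * block)

x = np.arange(16).reshape(4, 4, 1)      # toy 4x4 single-channel map
y = space_to_depth(x)                   # -> 2x2 map with 4 channels
```

Each output position holds the full 2×2 input block it came from, which is exactly why fine-grained detail survives for small objects.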
Funding: co-supported by the Natural Science Basic Research Program of Shaanxi, China (No. 2023-JC-QN-0043) and the ND Basic Research Funds, China (No. G2022WD).
Abstract: The aerial deployment method enables Unmanned Aerial Vehicles (UAVs) to be directly positioned at the required altitude for their mission. This method typically employs folding technology to improve loading efficiency, with applications such as the gravity-only aerial deployment of high-aspect-ratio solar-powered UAVs and the aerial takeoff of fixed-wing drones in Mars research. However, the significant morphological changes during deployment are accompanied by strong nonlinear dynamic aerodynamic forces, which result in multiple degrees of freedom and unstable behavior. This hinders the description and analysis of unknown dynamic behaviors, further leading to difficulties in the design of deployment strategies and flight control. To address this issue, this paper proposes an analysis method for dynamic behaviors during aerial deployment based on the Variational Autoencoder (VAE). Focusing on the gravity-only deployment problem of high-aspect-ratio foldable-wing UAVs, the method encodes the multi-degree-of-freedom unstable motion signals into a low-dimensional feature space through a data-driven approach. By clustering in the feature space, this paper identifies and studies several dynamic behaviors during aerial deployment. The research presented in this paper offers a new method and perspective for feature extraction and analysis of complex and difficult-to-describe extreme flight dynamics, guiding research on the design and control strategies of aerially deployed drones.
Abstract: Objective: This study aimed to explore undergraduates' knowledge, attitude, and practice/behavior regarding human papillomavirus (HPV) vaccination, as well as the essential influencing factors for vaccination decision-making. Methods: Through cluster and convenience sampling, 2000 undergraduates from the Nursing and Language departments of a university in Shanghai were sent a self-designed questionnaire. Chi-square tests, independent-sample t-tests/ANOVA, and multiple linear regression were used to investigate participants' knowledge of and attitude toward HPV vaccination, as well as the factors that predicted potential action to receive and promote HPV vaccination in the future. Results: The mean HPV knowledge score was 5.027 out of 10. Health science students showed a significantly higher mean knowledge score than non-health science students (P<0.001). There was a statistically significant difference in HPV vaccination attitude among undergraduates in different grades (P<0.05). Awareness of cervical cancer and worries about the risk of cervical cancer were the significant predictors of willingness to receive and promote HPV vaccination in the future. Conclusions: It takes time for a new health product to become known, understood, accepted, and received. Education and information sharing are expected to accelerate this process.
Funding: Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2018R1A5A7059549) and the Competitive Research Fund of The University of Aizu, Japan.
Abstract: Real-time surveillance depends on recognizing the variety of actions performed by humans. Human Action Recognition (HAR) is a technique that recognizes human actions from a video stream. The range of variations in human actions makes it difficult to recognize them with considerable accuracy. This paper presents a novel deep neural network architecture called Attention RB-Net for HAR using video frames. The input is provided to the model in the form of video frames. The proposed deep architecture is based on the unique structuring of residual blocks with several filter sizes. Features are extracted from each frame via several operations with specific parameters defined in the presented novel Attention-based Residual Bottleneck (Attention-RB) DCNN architecture. A fully connected layer receives an attention-based feature matrix, and final classification is performed. Several hyperparameters of the proposed model are initialized using Bayesian Optimization (BO) and later utilized in the trained model for testing. In testing, features are extracted from the self-attention layer and passed to neural network classifiers for the final action classification. Two highly cited datasets, HMDB51 and UCF101, were used to validate the proposed architecture, obtaining average accuracies of 87.70% and 97.30%, respectively. The deep convolutional neural network (DCNN) architecture is compared with state-of-the-art (SOTA) methods, including pre-trained models, inside blocks, and recently published techniques, and performs better.
Funding: funded by the Ongoing Research Funding Program (ORF-2025-890), King Saud University, Riyadh, Saudi Arabia.
Abstract: Activity recognition is a challenging topic in the field of computer vision that has various applications, including surveillance systems, industrial automation, and human-computer interaction. Today, the demand for automation has greatly increased across industries worldwide. Real-time detection requires edge devices with limited computational time. This study proposes a novel hybrid deep learning system for human activity recognition (HAR), aiming to enhance recognition accuracy and reduce computational time. The proposed system combines a pretrained image classification model with a sequence analysis model. First, the dataset was divided into a training set (70%), validation set (10%), and test set (20%). Second, all the videos were converted into frames, and deep features were extracted from each frame using convolutional neural networks (CNNs) with a vision transformer. Following that, bidirectional long short-term memory (BiLSTM)- and temporal convolutional network (TCN)-based models were trained using the training set, and their performances were evaluated using the validation set and test set. Four benchmark datasets (UCF11, UCF50, UCF101, and JHMDB) were used to evaluate the performance of the proposed HAR-based system. The experimental results showed that the combination of ConvNeXt and the TCN-based model achieved recognition accuracies of 97.73% for UCF11, 98.81% for UCF50, 98.46% for UCF101, and 83.38% for JHMDB. This represents improvements in recognition accuracy of 4%, 2.67%, 3.67%, and 7.08% for the UCF11, UCF50, UCF101, and JHMDB datasets, respectively, over existing models. Moreover, the proposed HAR-based system obtained superior recognition accuracy, shorter computational times, and minimal memory usage compared to the existing models.
Funding: supported by King Saud University, Riyadh, Saudi Arabia, under the Ongoing Research Funding Program (ORF-2025-951).
Abstract: Human activity recognition is a significant area of research in artificial intelligence for surveillance, healthcare, sports, and human-computer interaction applications. The article benchmarks the performance of a You Only Look Once version 11-based (YOLOv11-based) architecture for multi-class human activity recognition. The dataset consists of 14,186 images across 19 activity classes, from dynamic activities such as running and swimming to static activities such as sitting and sleeping. Preprocessing included resizing all images to 512×512 pixels, annotating them in YOLO's bounding box format, and applying data augmentation methods such as flipping, rotation, and cropping to enhance model generalization. The proposed model was trained for 100 epochs with adaptive learning rate methods and hyperparameter optimization for performance improvement, achieving a mAP@0.5 of 74.93% and a mAP@0.5-0.95 of 64.11%, outperforming previous versions of YOLO (v10, v9, and v8) and general-purpose architectures like ResNet50 and EfficientNet. It exhibited improved precision and recall for all activity classes, with high precision values of 0.76 for running, 0.79 for swimming, 0.80 for sitting, and 0.81 for sleeping, and was tested for real-time deployment with an inference time of 8.9 ms per image, making it computationally light. The proposed YOLOv11's improvements are attributed to architectural advancements such as a more complex feature extraction process, better attention modules, and an anchor-free detection mechanism. While YOLOv10 was extremely stable in static activity recognition, YOLOv9 performed well in dynamic environments but suffered from overfitting, and YOLOv8, while a decent baseline, failed to differentiate between overlapping static activities. The experimental results determine the proposed YOLOv11 to be the most appropriate model, providing an ideal balance between accuracy, computational efficiency, and robustness for real-world deployment. Nevertheless, certain issues remain to be addressed, particularly in discriminating between visually similar activities and the reliance on publicly available datasets. Future research will entail the inclusion of 3D data and multimodal sensor inputs, such as depth and motion information, to enhance recognition accuracy and generalizability in challenging real-world environments.
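The mAP@0.5 figures above count a detection as correct when its intersection-over-union (IoU) with the ground-truth box reaches 0.5. A minimal sketch of IoU for corner-format boxes, not tied to any particular YOLO implementation:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit-offset 2×2 boxes overlap in a 1×1 square, giving IoU = 1/7; mAP@0.5-0.95 simply repeats the matching at IoU thresholds from 0.5 to 0.95 and averages.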
Funding: supported by the Development Center for Medical Science and Technology, National Health Commission of the People's Republic of China (WA2021RW27).
Abstract: Background: Human behaviors and tumors go hand in hand. The wave of globalization has brought about a global homogenization of human behaviors, which further triggers a potential global convergence of the human behavior-related cancer burden (HBRCB). Methods: This study systematically evaluated the global, regional, and national metrics of HBRCBs over the last 30 years using data from the Global Burden of Diseases (GBD) 2019 results and the WHO Global Health Observatory (GHO) data repository. Results: The results showed the global remission and convergence of HBRCBs in the last three decades and the foreseeable future (2020-2044). Overall, HBRCBs are decreasing with the global emphasis on positive dietary habits, safe sex, substance addiction withdrawal, and active physical exercise habits. Globally, from 1990 to 2019, as the social development index (SDI) level rose from 0.511 to 0.651, HBRCBs decreased from 1507.908 to 1145.344 in age-standardized disability-adjusted life years (ASDALY) and from 61.467 to 49.449 in age-standardized death rates (ASDR) per 100,000 population, changes of -24.04% and -19.55%, respectively. Meanwhile, the variance in HBRCBs among countries and territories generally showed a decreasing or flat trend. The variance of HBRCBs among 204 countries and territories in 2019-2044 decreased from 1495.210 to 449.202 in males and from 214.640 to 78.848 in females for ASDR due to all behavior risks, and from 911,211.676 to 317,233.590 in males and from 146,171.660 to 62,926.660 in females for ASDALY. The global HBRCB was becoming more uniform due to the globalization of human behaviors. Conclusions: This study revealed the significance of addressing HBRCBs as a uniform and continuous issue in future global health promotion. It also demonstrated the potential existence of a chain effect in global health, where globalization leads to human behavior homogenization, which in turn results in HBRCB convergence. Properly measuring the commonalities and individualities among different regions and finding a balance when designing and evaluating HBRCB-related global policies under the global convergence trend of HBRCBs will be major concerns in the future.
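The ASDR and ASDALY metrics above are direct age-standardized rates: age-specific rates weighted by a standard population's age structure, so that countries with different age pyramids become comparable. A minimal sketch with made-up numbers:

```python
def age_standardized_rate(age_specific_rates, standard_weights):
    """Direct standardization: weighted mean of age-specific rates
    (e.g. deaths per 100,000) using standard-population age shares."""
    total = sum(standard_weights)
    return sum(r * w for r, w in zip(age_specific_rates, standard_weights)) / total

# Hypothetical rates per 100,000 for three age bands (young, middle, old),
# weighted by an illustrative standard population split of 50/30/20%.
asr = age_standardized_rate([10.0, 40.0, 160.0], [0.5, 0.3, 0.2])
```

Here the raw old-age rate dominates neither country comparison nor trend analysis, because every population is re-weighted to the same reference structure.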
Funding: supported by the Nanjing Institute of Technology (Grant No. YKJ202301).
Abstract: Understanding the impact of high-velocity microparticles on human skin tissue is important for the administration of drugs during transdermal drug delivery. This paper numerically investigates the dynamic behavior of human skin tissue under micro-particle impact in transdermal drug delivery. The numerical model was developed based on a coupled smoothed particle hydrodynamics (SPH) and FEM method via the commercial FE software RADIOSS. An analytical analysis applying the Poncelet model was conducted and used as validation data. A hyperelastic one-term Ogden model with one pair of material parameters (μ, α) was implemented for the skin tissue. Sensitivity studies reveal that the effect of parameter α on the penetration process is much more significant than that of μ. Numerical results correlate well with the analytical curves for various particle diameters and impact velocities, demonstrating the model's capability of predicting the penetration process of micro-particle impacts into skin tissue. This work can be further extended to guide the design of transdermal drug delivery equipment.
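The Poncelet model used for validation treats the resisting force on the particle as F = A(a + b v²) over frontal area A, which integrates to a closed-form penetration depth. The sketch below assumes that standard form; the parameter values are arbitrary illustrations, not the paper's fitted skin-tissue constants.

```python
import math

def poncelet_depth(m, A, a, b, v0):
    """Penetration depth for the Poncelet equation m*v*dv/dx = -A*(a + b*v^2).

    Separating variables and integrating from v0 down to rest gives
    D = m / (2*A*b) * ln(1 + (b/a) * v0^2)."""
    return m / (2.0 * A * b) * math.log(1.0 + (b / a) * v0 ** 2)

# Arbitrary illustrative values in SI units: a microgram-scale particle.
depth = poncelet_depth(m=1e-9, A=1e-9, a=1e7, b=1e3, v0=500.0)
```

The logarithmic dependence on v0² is why impact velocity has diminishing returns on depth, a useful sanity check against the numerical SPH-FEM curves.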
Funding: supported by the Royal Golden Jubilee (RGJ) Ph.D. Programme (Grant No. PHD/0079/2561) through the National Research Council of Thailand (NRCT) and the Thailand Research Fund (TRF).
Abstract: This research investigates the application of multisource data fusion using a Multi-Layer Perceptron (MLP) for Human Activity Recognition (HAR). The study integrates four distinct open-source datasets (WISDM, DaLiAc, MotionSense, and PAMAP2) to develop a generalized MLP model for classifying six human activities. Performance analysis of the fused model for each dataset reveals accuracy rates of 95.83% for WISDM, 97% for DaLiAc, 94.65% for MotionSense, and 98.54% for PAMAP2. A comparative evaluation was conducted between the fused MLP model and the individual dataset models, with the latter tested on separate validation sets. The results indicate that the MLP model, trained on the fused dataset, exhibits superior performance relative to the models trained on individual datasets. This finding suggests that multisource data fusion significantly enhances the generalization and accuracy of HAR systems. The improved performance underscores the potential of integrating diverse data sources to create more robust and comprehensive models for activity recognition.
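The fusion idea can be illustrated with synthetic stand-ins for the four datasets: several "sources" sharing a feature space and label rule are pooled before training one scikit-learn MLP. All names, sizes, and the labeling rule below are made up for illustration, not drawn from the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Four synthetic sources standing in for WISDM/DaLiAc/MotionSense/PAMAP2,
# each with its own feature offset but a labeling rule shared by all.
sources = []
for shift in (0.0, 0.5, 1.0, 1.5):
    X = rng.normal(shift, 1.0, size=(200, 6))
    y = (X[:, 0] > X[:, 1]).astype(int)   # rule consistent across sources
    sources.append((X, y))

# Fusion step: pool every source before training a single shared model.
X_fused = np.vstack([X for X, _ in sources])
y_fused = np.concatenate([y for _, y in sources])
X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y_fused, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=800, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Training one model on the pooled data is the design choice the study argues for: the network sees the variation across sources during training instead of overfitting to any single collection protocol.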
Abstract: Human Activity Recognition (HAR) represents a rapidly advancing research domain, propelled by continuous developments in sensor technologies and the Internet of Things (IoT). Deep learning has become the dominant paradigm in sensor-based HAR systems, offering significant advantages over traditional machine learning methods by eliminating manual feature extraction, enhancing recognition accuracy for complex activities, and enabling the exploitation of unlabeled data through generative models. This paper provides a comprehensive review of recent advancements and emerging trends in deep learning models developed for sensor-based human activity recognition (HAR) systems. We begin with an overview of fundamental HAR concepts in sensor-driven contexts, followed by a systematic categorization and summary of existing research. Our survey encompasses a wide range of deep learning approaches, including Multi-Layer Perceptrons (MLP), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory networks (LSTM), Gated Recurrent Units (GRU), Transformers, Deep Belief Networks (DBN), and hybrid architectures. A comparative evaluation of these models is provided, highlighting their performance, architectural complexity, and contributions to the field. Beyond centralized deep learning models, we examine the role of Federated Learning (FL) in HAR, highlighting current applications and research directions. Finally, we discuss the growing importance of Explainable Artificial Intelligence (XAI) in sensor-based HAR, reviewing recent studies that integrate interpretability methods to enhance transparency and trustworthiness in deep learning-based HAR systems.
Funding: supported in part by the 2023 Key Supported Project of the 14th Five-Year Plan for Education and Science in Hunan Province (No. ND230795).
Abstract: In recent years, skeleton-based action recognition has made great achievements in computer vision. A graph convolutional network (GCN) is effective for action recognition, modelling the human skeleton as a spatio-temporal graph. Most GCNs define the graph topology by the physical relations of the human joints. However, this predefined graph ignores the spatial relationship between non-adjacent joint pairs in special actions and the behavior dependence between joint pairs, resulting in a low recognition rate for specific actions with implicit correlation between joint pairs. In addition, existing methods ignore the trend correlation between adjacent frames within an action and context clues, leading to erroneous recognition of actions with similar poses. Therefore, this study proposes a learnable GCN based on behavior dependence, which considers implicit joint correlation by constructing a dynamic learnable graph with extraction of the specific behavior dependence of joint pairs. By using the weight relationship between the joint pairs, an adaptive model is constructed. It also designs a self-attention module to obtain their inter-frame topological relationship for exploring the context of actions. Combining the shared topology and the multi-head self-attention map, the module obtains the context-based clue topology to update the dynamic graph convolution, achieving accurate recognition of different actions with similar poses. Detailed experiments on public datasets demonstrate that the proposed method achieves better results and realizes higher-quality representation of actions under various evaluation protocols compared to state-of-the-art methods.
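A single graph-convolution layer on a skeleton graph follows the standard form H' = σ(D^(-1/2)(A+I)D^(-1/2) H W). The NumPy sketch below runs one such layer on a three-joint chain; the learnable and attention-based topologies proposed in the paper are not reproduced, and the joint names are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^-1/2 (A+I) D^-1/2 @ H @ W)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

# Three joints in a chain (e.g. shoulder-elbow-wrist), 4 features per joint.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.ones((3, 4))          # toy joint features
W = np.ones((4, 2)) * 0.5    # toy learnable weights, 2 output channels
out = gcn_layer(A, H, W)
```

Because the fixed A only links physically adjacent joints, non-adjacent pairs never exchange information here, which is exactly the limitation the learnable dynamic graph in the paper is designed to remove.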
Funding: funded by the National Science and Technology Council, Taiwan (Grant No. NSTC 112-2121-M-039-001) and by China Medical University (Grant No. CMU112-MF-79).
Abstract: Artificial intelligence (AI) technology has become integral in the realm of medicine and healthcare, particularly in human activity recognition (HAR) applications such as fitness and rehabilitation tracking. This study introduces a robust coupling analysis framework that integrates four AI-enabled models, combining both machine learning (ML) and deep learning (DL) approaches to evaluate their effectiveness in HAR. The analytical dataset comprises 561 features sourced from the UCI-HAR database, forming the foundation for training the models. Additionally, the MHEALTH database is employed to replicate the modeling process for comparative purposes, while inclusion of the WISDM database, renowned for its challenging features, supports the framework's resilience and adaptability. The ML-based models employ methodologies including the adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM), and random forest (RF) for data training. In contrast, a DL-based model utilizes a one-dimensional convolutional neural network (1dCNN) to automate feature extraction. Furthermore, the recursive feature elimination (RFE) algorithm, which drives an ML-based estimator to eliminate low-participation features, helps identify the optimal features for enhancing model performance. The best accuracies of the ANFIS, SVM, RF, and 1dCNN models with a meticulous featuring process reach around 90%, 96%, 91%, and 93%, respectively. Comparative analysis using the MHEALTH dataset showcases the 1dCNN model's remarkable perfect accuracy (100%), while the RF, SVM, and ANFIS models equipped with selected features achieve accuracies of 99.8%, 99.7%, and 96.5%, respectively. Finally, when applied to the WISDM dataset, the DL-based and ML-based models attain accuracies of 91.4% and 87.3%, respectively, aligning with prior research findings. In conclusion, the proposed framework yields HAR models with commendable performance metrics, exhibiting its suitability for integration into the healthcare services system through AI-driven applications.
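The RFE step can be sketched with scikit-learn: a linear-kernel SVM exposes per-feature coefficients, so RFE can rank features and recursively prune the weakest. Synthetic data stands in here for the 561-feature HAR sets; the sizes and step value are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# 20 synthetic features with only 5 informative -- a small stand-in for
# the high-dimensional HAR feature vectors described above.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# A linear SVM provides coef_, which RFE uses to rank and prune features,
# eliminating two per iteration until five survive.
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=2)
selector.fit(X, y)
selected = selector.support_       # boolean mask of surviving features
```

The surviving mask (or `selector.ranking_`) is then used to train the downstream ANFIS/SVM/RF models on the reduced feature set.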
Funding: supported by the Major Consulting and Research Project of the Chinese Academy of Engineering (2020-CQ-ZD-1), the National Natural Science Foundation of China (72101235), and the Zhejiang Soft Science Research Program (2023C35012).
Abstract: Image semantic segmentation is an essential technique for studying human behavior through image data. This paper proposes an image semantic segmentation method for human behavior research. Firstly, an end-to-end convolutional neural network architecture is proposed, which consists of a depth-separable skip-connected fully convolutional network and a conditional random field network; skip-connected convolution is then used to classify each pixel in the image, yielding an image semantic segmentation method based on a convolutional neural network. A conditional random field network is then used to improve the segmentation of human behavior images, and a linear and nonlinear modeling method based on conditional random field image semantic segmentation is proposed. Finally, using the proposed segmentation network, input image data of entrepreneurial activity are semantically segmented to obtain the contour features of the person, and images from the medical field are segmented as well. The experimental results show that the image semantic segmentation method is effective. It is a new way to use image data to study human behavior and can be extended to other research areas.
Abstract: In the process of human behavior recognition, the traditional dense optical flow method processes too many pixels and incurs too much overhead, which limits the running speed. This paper proposes a method combining YOLOv3 (You Only Look Once v3) and a local optical flow method. Based on the dense optical flow method, the optical flow modulus is calculated only in the area where the human target is detected, reducing the amount of computation and saving time. A threshold value is then set to complete the human behavior identification. Through algorithm design, experimental verification, and other steps, the walking, running, and falling states of the human body in real-life indoor sports video were identified. Experimental results show that this algorithm is more advantageous for jogging behavior recognition.
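The combination described above can be caricatured in a few lines: compute the mean optical-flow magnitude only inside the detected person box, then threshold it. The threshold values and the mapping to walking/running/falling below are hypothetical illustrations, not the paper's calibrated settings.

```python
import numpy as np

def classify_motion(flow, box, slow=1.0, fast=4.0):
    """Label activity from mean flow magnitude inside a detection box.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements,
    box: (x1, y1, x2, y2) pixel coordinates from the detector."""
    x1, y1, x2, y2 = box
    region = flow[y1:y2, x1:x2]                    # restrict computation
    mag = np.hypot(region[..., 0], region[..., 1]).mean()
    if mag < slow:
        return "walking"
    return "running" if mag < fast else "falling"

# Synthetic flow field: uniform horizontal motion inside the person box.
flow = np.zeros((120, 160, 2))
flow[40:100, 50:90, 0] = 2.0
label = classify_motion(flow, (50, 40, 90, 100))
```

Restricting the magnitude computation to the detection box is the core of the speed-up: pixels outside the person contribute nothing and are never touched.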
基金National Natural Science Foundation of China,Grant/Award Number:62203476Natural Science Foundation of Guangdong Province,Grant/Award Number:2024A1515012089+1 种基金Natural Science Foundation of Shenzhen,Grant/Award Number:JCYJ20230807120801002Shenzhen Innovation in Science and Technology Foundation for The Excellent Youth Scholars,Grant/Award Number:RCYX20231211090248064。
Abstract: Multimodal action recognition methods have achieved great success using the pose and RGB modalities. However, skeleton sequences lack appearance depiction, and RGB images suffer from irrelevant noise due to modality limitations. To address this, the authors introduce the human parsing feature map as a novel modality, since it can selectively retain effective semantic features of the body parts while filtering out most irrelevant noise. The authors propose a new dual-branch framework called the ensemble human parsing and pose network (EPP-Net), which is the first to leverage both skeleton and human parsing modalities for action recognition. The first, human pose branch feeds robust skeletons into a graph convolutional network to model pose features, while the second, human parsing branch leverages depictive parsing feature maps to model parsing features via convolutional backbones. The two high-level features are effectively combined through a late fusion strategy for better action recognition. Extensive experiments on the NTU RGB+D and NTU RGB+D 120 benchmarks consistently verify the effectiveness of the proposed EPP-Net, which outperforms existing action recognition methods. Our code is available at https://github.com/liujf69/EPP-Net-Action.
Funding: the Science and Technology Program of State Grid Corporation of China (No. 5211TZ1900S6).
Abstract: Safety production is of great significance to the development of enterprises and society. Accidents often cause great losses because of the particular environment of electric power operations. Therefore, it is important to improve safety supervision and protection in the electric power environment. In this paper, we simulate an actual electric power operation scenario with monitoring equipment and propose a real-time detection method for illegal actions based on human body key points to ensure safe behavior in real time. In this method, the human body key points in video frames are first extracted by a high-resolution network and then classified in real time by a spatial-temporal graph convolutional network. Experimental results show that this method can effectively detect illegal actions in the simulated scene.
Funding: the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDC02040300).
Abstract: RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNNs, or LSTMs to extract features effectively. Still, they have shortcomings: 1) requiring complex hand-crafted data cleaning processes and 2) only addressing single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer, called TransTM. This model leverages the Transformer's powerful data fitting capabilities to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features that recognizes single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method has more data fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human behavior-based classification tasks. Experimental results on actual RFID datasets show that this model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.