Funding: Funded by the ICT Division of the Ministry of Posts, Telecommunications, and Information Technology of Bangladesh under Grant Number 56.00.0000.052.33.005.21-7 (Tracking No. 22FS15306), with support from the University of Rajshahi.
Abstract: The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by a Convolutional Long Short-Term Memory (ConvLSTM) network to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
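The pipeline described above maps naturally onto off-the-shelf Keras layers. Below is a minimal sketch, assuming 16-frame sequences of 64x64 skeleton renderings and 12 activity classes (illustrative values, not the paper's configuration); conveniently, EfficientNetB0 is built from seven MBConv stages, matching the description.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed shapes: 16 skeleton frames rendered as 64x64 RGB images, 12 classes.
NUM_FRAMES, H, W, NUM_CLASSES = 16, 64, 64, 12

# Spatial feature extractor: EfficientNetB0's MBConv stack, run on each frame.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights=None, input_shape=(H, W, 3))

inputs = layers.Input(shape=(NUM_FRAMES, H, W, 3))
x = layers.TimeDistributed(backbone)(inputs)        # per-frame spatial features
x = layers.ConvLSTM2D(64, kernel_size=3, padding="same")(x)  # spatio-temporal
x = layers.GlobalAveragePooling2D()(x)              # classification module:
x = layers.Dropout(0.5)(x)                          # GAP -> dropout -> FC
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

On the alerting side, Twilio's Python client sends an SMS in a few lines, which suits a Raspberry Pi deployment; the credentials and phone numbers below are placeholders, and the trigger logic around the call is assumed.

```python
from twilio.rest import Client

# Placeholder credentials; real values come from a Twilio account.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def alert_caregiver(activity: str, to_number: str) -> None:
    """Send an SMS when a critical activity (e.g., a fall) is detected."""
    client.messages.create(
        body=f"Alert: patient activity detected - {activity}",
        from_="+15550000000",  # Twilio-provisioned sender number (placeholder)
        to=to_number,
    )
```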
Abstract: Recognizing non-driving behaviors is one of the important means of improving driving safety. Existing fusion recognition methods based on skeleton sequences and images suffer from heavy computational cost and difficulty in feature fusion. To address these problems, this paper proposes a driver behavior recognition model based on multi-scale skeleton graphs and local visual context fusion (skeleton-image based behavior recognition network, SIBBR-Net). Through a graph convolutional network built on multi-scale graphs and a convolutional neural network built on local vision and an attention mechanism, SIBBR-Net fully extracts motion and appearance features and strikes a good balance between representational capacity and computational cost. A bidirectional feature-guided learning strategy based on hand motion, an adaptive feature fusion module, and an auxiliary loss on the static feature space allow the motion and appearance features to guide each other's updates and to fuse adaptively. Evaluated on the Drive&Act dataset, SIBBR-Net achieves average accuracies of 61.78% under dynamic labels and 80.42% under static labels at 25.92 GFLOPs, a 76.96% reduction relative to the best prior method.
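Of the components above, the adaptive feature fusion module generalizes most directly to code. Below is a minimal PyTorch sketch of one common form, a learned channel-wise gate between the motion (skeleton) and appearance (image) branches; SIBBR-Net's actual module may differ, and the feature size of 256 is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Gated fusion of motion (skeleton) and appearance (image) features.

    A generic sketch of adaptive fusion: a learned gate decides, per channel,
    how much each modality contributes to the fused representation.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, motion: torch.Tensor, appearance: torch.Tensor):
        # motion, appearance: (batch, dim) pooled features from each branch
        g = self.gate(torch.cat([motion, appearance], dim=-1))
        return g * motion + (1.0 - g) * appearance

# Toy usage with batch size 8 and an assumed 256-dimensional feature space.
fused = AdaptiveFusion(dim=256)(torch.randn(8, 256), torch.randn(8, 256))
```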
Funding: Supported by the Young Scientists Fund of the National Natural Science Foundation of China (62202356, 62302373) and the Fundamental Research Funds for the Central Universities (ZYTS24092, QTZX24085).
Abstract: Skeleton-based sign language recognition (SLR) is a challenging research area, mainly due to fast and complex hand movements. Currently, graph convolution networks (GCNs) have been employed in skeleton-based SLR and have achieved remarkable performance. However, existing GCN-based SLR methods lack explicit attention to hand topology, which plays an important role in sign language representation. To address this issue, we propose a novel hand-aware graph convolution network (HA-GCN) that focuses on the hand topological relationships of the skeleton graph. Specifically, a hand-aware graph convolution layer is designed to capture both global body and local hand information, in which two sub-graphs are defined and incorporated to represent hand topology. In addition, to alleviate overfitting, an adaptive DropGraph is designed into the hand-aware graph convolution block to remove spatial and temporal redundancy in the sign language representation. To further improve performance, joint information and bone information, together with their motion information, are simultaneously modeled in a multi-stream framework. Extensive experiments on two open-source datasets, AUTSL and INCLUDE, demonstrate that our proposed algorithm outperforms the state of the art by a significant margin. Our code is available at https://github.com/snorlaxse/HA-SLR-GCN.
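The multi-stream framework mentioned above follows a pattern common in skeleton-based recognition: derive bone vectors and frame-to-frame motion from the raw joint coordinates and feed each as a separate input stream. A minimal NumPy sketch under that reading is below; the parents list is a hypothetical skeleton topology, not the actual AUTSL/INCLUDE graph, and HA-GCN's exact stream definitions may differ.

```python
import numpy as np

def build_streams(joints: np.ndarray, parents: list[int]):
    """Derive bone and motion streams from joints of shape (T, V, C):
    T frames, V joints, C coordinate channels."""
    bones = joints - joints[:, parents, :]        # bone vector: child - parent
    joint_motion = np.zeros_like(joints)
    joint_motion[:-1] = joints[1:] - joints[:-1]  # frame-to-frame displacement
    bone_motion = np.zeros_like(bones)
    bone_motion[:-1] = bones[1:] - bones[:-1]
    return joints, bones, joint_motion, bone_motion

# Toy example: 4 frames, 5 joints, 3-D coordinates, chain skeleton rooted at 0
# (the root's parent is itself, so its bone vector is zero).
streams = build_streams(np.random.rand(4, 5, 3), parents=[0, 0, 1, 2, 3])
```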