Funding: Funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen; Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R348), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Human Activity Recognition (HAR) in drone-captured videos has attracted growing interest across fields such as video surveillance, sports analysis, and human-robot interaction. However, recognizing actions from such videos poses several challenges: variations in human motion, complex backgrounds, motion blur, occlusions, and restricted camera angles. This research presents a human activity recognition system that addresses these challenges using drones' red-green-blue (RGB) videos. The proposed system first partitions videos into frames and applies bilateral filtering to enhance object foregrounds while reducing background interference, before converting the frames from RGB to grayscale. The YOLO (You Only Look Once) algorithm detects and extracts humans from each frame, and their skeletons are obtained for further processing. Extracted features include joint angles, displacement and velocity, histogram of oriented gradients (HOG), 3D points, and geodesic distance. These features are optimized using Quadratic Discriminant Analysis (QDA) and fed into a Neuro-Fuzzy Classifier (NFC) for activity classification. Evaluations on the Drone-Action, Unmanned Aerial Vehicle (UAV)-Gesture, and Okutama-Action datasets substantiate the proposed system's superior accuracy over existing methods. In particular, the system achieves recognition rates of 93% on Drone-Action, 97% on UAV-Gesture, and 81% on Okutama-Action, demonstrating its reliability in learning human activity from drone videos.
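To make the front of this pipeline concrete, the following is a minimal sketch of the frame-partitioning, bilateral-filtering, grayscale-conversion, and YOLO person-detection steps, assuming OpenCV and the Ultralytics YOLO package. The weight file, filter parameters, and the grayscale-to-3-channel workaround are illustrative assumptions rather than the authors' configuration, and the skeleton extraction, feature computation, QDA, and neuro-fuzzy stages are omitted.

```python
# Minimal sketch: frame extraction, bilateral filtering, grayscale conversion,
# and person detection. Weights and parameters are assumptions, not the paper's.
import cv2
import numpy as np
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")  # assumed weights; the paper does not name a specific YOLO release

def person_crops_from_video(video_path, max_frames=100):
    """Split a drone video into frames, denoise, grayscale, and crop detected persons."""
    cap = cv2.VideoCapture(video_path)
    crops = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        # Bilateral filtering sharpens the foreground while smoothing the background.
        filtered = cv2.bilateralFilter(frame, 9, 75, 75)
        # RGB -> grayscale, then back to 3 channels so the detector accepts the input.
        gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
        gray3 = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
        result = detector(gray3, verbose=False)[0]
        for box, cls in zip(result.boxes.xyxy.cpu().numpy(),
                            result.boxes.cls.cpu().numpy()):
            if int(cls) == 0:  # COCO class 0 = person
                x1, y1, x2, y2 = box.astype(int)
                crops.append(gray[y1:y2, x1:x2])
    cap.release()
    return crops  # crops would next go to skeleton extraction and feature computation
```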
Funding: This research was supported by the Deanship of Scientific Research at Najran University under the Research Group Funding Program, Grant Code (NU/RG/SERC/12/30); by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and by Prince Sattam bin Abdulaziz University, Project Number (PSAU/2024/R/1445).
Abstract: Intelligent vehicle tracking and detection are crucial tasks in highway management. However, vehicles come in a range of sizes, which makes them challenging to detect and affects the overall accuracy of the traffic monitoring system. Deep learning is considered an efficient method for object detection in vision-based systems. In this paper, we propose a vision-based vehicle detection and tracking system based on a You Only Look Once version 5 (YOLOv5) detector combined with a segmentation technique. The model consists of six steps. First, all extracted traffic sequence images are pre-processed to remove noise and enhance contrast. These pre-processed images are then segmented by labelling each pixel to extract uniform regions that aid the detection phase. The single-stage YOLOv5 detector locates vehicles in the images. Each detection is subjected to Speeded-Up Robust Features (SURF) extraction to track multiple vehicles; on this basis, a unique number is assigned to each vehicle so it can be located in succeeding frames using feature matching. A Kalman filter is then applied to track multiple vehicles. Finally, each vehicle's path is estimated from the centroid points of the rectangular bounding box predicted by the tracking algorithm. Experimental results and comparisons reveal that the proposed vehicle detection and tracking system outperforms other state-of-the-art systems, providing detection precisions of 94.1% on the Roundabout dataset and 96.1% on the Vehicle Aerial Imaging from Drone (VAID) dataset, respectively.
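The detection-plus-Kalman portion of this pipeline can be sketched as follows, assuming a pretrained YOLOv5 model loaded via torch.hub and OpenCV's cv2.KalmanFilter applied to bounding-box centroids. The SURF re-identification step (which requires an opencv-contrib build with non-free modules, cv2.xfeatures2d.SURF_create) and the segmentation stage are not shown, and all noise parameters are illustrative.

```python
# Hedged sketch: YOLOv5 vehicle detection plus Kalman smoothing of box centroids.
# Simplified to a single vehicle per frame for illustration only.
import cv2
import numpy as np
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained COCO weights

def make_centroid_kalman():
    """Constant-velocity Kalman filter over (x, y) centroid measurements."""
    kf = cv2.KalmanFilter(4, 2)  # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track_single_vehicle(frames):
    """Detect the highest-confidence vehicle per frame and smooth its path."""
    kf, path = make_centroid_kalman(), []
    for frame in frames:
        det = model(frame).xyxy[0].cpu().numpy()      # columns: x1, y1, x2, y2, conf, class
        kf.predict()
        if len(det):
            x1, y1, x2, y2 = det[0, :4]
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2     # centroid of the bounding box
            kf.correct(np.array([[cx], [cy]], dtype=np.float32))
        path.append(kf.statePost[:2].ravel().copy())  # estimated (x, y) position
    return path
```

In the full system, the per-detection SURF descriptors would replace the "highest-confidence" shortcut above, associating each detection with an existing track ID before the Kalman update.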
Funding: Funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen. The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number (RGP.2/568/45), and to the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through Project Number "NBU-FFR-2025-231-04".
Abstract: Inertial Sensor-based Daily Activity Recognition (IS-DAR) requires adaptable, data-efficient methods for effective multi-sensor use. This study presents an advanced detection system that uses body-worn sensors to accurately recognize activities. A structured pipeline enhances IS-DAR through signal preprocessing, feature extraction and optimization, followed by classification. Before segmentation, a Chebyshev filter removes noise and Blackman windowing improves signal representation. Discriminative features, namely a Gaussian Mixture Model (GMM) with Mel-Frequency Cepstral Coefficients (MFCC), spectral entropy, quaternion-based features, and Gammatone Cepstral Coefficients (GCC), are fused to expand the feature space. Unlike existing approaches, the proposed IS-DAR system uniquely integrates diverse handcrafted features using a novel fusion strategy combined with Bayesian-based optimization, enabling more accurate and generalized activity recognition. The key contribution lies in the joint optimization and fusion of features via Bayesian-based subset selection, resulting in a compact and highly discriminative feature representation. These features are then fed into a Convolutional Neural Network (CNN) to effectively detect spatial-temporal patterns in activity signals. Testing on two public datasets, IM-WSHA and ENABL3S, achieved accuracies of 93.0% and 92.0%, respectively. The integration of advanced feature extraction with fusion and optimization techniques significantly enhanced detection performance, surpassing traditional methods. These results establish the effectiveness of the proposed IS-DAR system for real-world activity recognition applications.
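A minimal sketch of the signal-processing side of this pipeline is given below, using an assumed 100 Hz sampling rate, a Chebyshev type-I low-pass filter, Blackman windowing, a spectral-entropy feature, and a small 1-D CNN. The GMM-MFCC, quaternion, GCC, and Bayesian subset-selection stages, as well as all hyperparameters, are simplified or omitted and do not reflect the authors' exact configuration.

```python
# Sketch under assumed parameters: Chebyshev denoising, Blackman windowing,
# spectral entropy, and a tiny 1-D CNN over multi-channel inertial windows.
import numpy as np
from scipy.signal import cheby1, filtfilt
import torch
import torch.nn as nn

FS = 100  # assumed inertial sampling rate in Hz

def preprocess(signal, order=4, ripple_db=0.5, cutoff_hz=20.0):
    """Chebyshev type-I low-pass filter followed by Blackman windowing."""
    b, a = cheby1(order, ripple_db, cutoff_hz, btype="low", fs=FS)
    filtered = filtfilt(b, a, signal)
    return filtered * np.blackman(len(filtered))

def spectral_entropy(window):
    """Shannon entropy of the normalized power spectrum of one window."""
    psd = np.abs(np.fft.rfft(window)) ** 2
    p = psd / (psd.sum() + 1e-12)
    return float(-(p * np.log2(p + 1e-12)).sum())

class ActivityCNN(nn.Module):
    """Tiny 1-D CNN over windowed multi-channel inertial signals."""
    def __init__(self, channels=6, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):        # x: (batch, channels, window_length)
        return self.net(x)

# Example: one 2-second accelerometer+gyroscope window (6 channels) through the model.
window = np.stack([preprocess(np.random.randn(2 * FS)) for _ in range(6)])
logits = ActivityCNN()(torch.tensor(window, dtype=torch.float32).unsqueeze(0))
```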
Funding: Funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen. The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number (RGP2/367/46). This research is also supported and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: As urban landscapes evolve and vehicular volumes soar, traditional traffic monitoring systems struggle to scale, often failing under the complexities of dense, dynamic, and occluded environments. This paper introduces a novel, unified deep learning framework for vehicle detection, tracking, counting, and classification in aerial imagery, designed explicitly for the demands of modern smart city infrastructure. Our approach begins with adaptive histogram equalization to optimize aerial image clarity, followed by scene parsing with Mask2Former, enabling robust segmentation even in visually congested settings. Vehicle detection leverages the YOLOv11 architecture, delivering superior accuracy in aerial contexts by addressing occlusion, scale variance, and fine-grained object differentiation. For tracking, we incorporate the efficient ByteTrack algorithm, enabling seamless identity preservation across frames. Vehicle counting is achieved through an unsupervised DBSCAN-based method, ensuring adaptability to varying traffic densities. We further introduce a hybrid feature extraction module combining Convolutional Neural Networks (CNNs) with Zernike Moments, capturing both deep semantic and geometric signatures of vehicles. The final classification is performed by NASNet, a neural-architecture-search-optimized model, ensuring high accuracy across diverse vehicle types and orientations. Extensive evaluations on the VAID benchmark dataset demonstrate the system's outstanding performance, achieving 96% detection, 94% tracking, and 96.4% classification accuracy. On the UAVDT dataset, the system attains 95% detection, 93% tracking, and 95% classification accuracy, confirming its robustness across diverse aerial traffic scenarios. These results establish new benchmarks in aerial traffic analysis and validate the framework's scalability, making it a powerful and adaptable solution for next-generation intelligent transportation systems and urban surveillance.
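The detection, tracking, and counting stages of this framework can be sketched roughly as below, assuming the Ultralytics package (with its built-in ByteTrack tracker) and scikit-learn's DBSCAN. The weight file, DBSCAN parameters, and the handling of class labels are illustrative assumptions, and the Mask2Former parsing, Zernike-moment features, and NASNet classifier are not shown.

```python
# Hedged sketch: YOLOv11 detection, ByteTrack tracking via the Ultralytics API,
# and DBSCAN clustering of vehicle centroids for counting. Parameters are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # assumed pretrained weights

def track_and_count(video_path, eps_px=50.0, min_samples=1):
    """Track vehicles with ByteTrack and count spatial clusters of their centroids."""
    results = model.track(source=video_path, tracker="bytetrack.yaml",
                          stream=True, verbose=False)
    centroids, track_ids = [], set()
    for result in results:
        if result.boxes.id is None:          # frames with no confirmed tracks
            continue
        track_ids.update(int(i) for i in result.boxes.id.cpu().numpy())
        for x1, y1, x2, y2 in result.boxes.xyxy.cpu().numpy():
            centroids.append([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
    if not centroids:
        return 0, 0
    # Unsupervised grouping of detections in image space; eps/min_samples are illustrative.
    labels = DBSCAN(eps=eps_px, min_samples=min_samples).fit_predict(np.array(centroids))
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    return len(track_ids), n_clusters
```

Counting from unique track IDs and from DBSCAN clusters gives two complementary estimates; the abstract's unsupervised counting corresponds to the clustering path, with the density parameters tuned per scene.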