Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), grant number IMSIU-DDRSP2501.
Abstract: Atrial Fibrillation (AF) is a cardiac disorder characterized by irregular heart rhythms, typically diagnosed using Electrocardiogram (ECG) signals. In remote regions with limited healthcare personnel, automated AF detection is extremely important. Although recent studies have explored various machine learning and deep learning approaches, challenges such as signal noise and subtle variations between AF and other cardiac rhythms continue to hinder accurate classification. In this study, we propose a novel framework that integrates robust preprocessing, comprehensive feature extraction, and an ensemble classification strategy. In the first step, ECG signals are divided into equal-sized segments using a 5-second sliding window with 50% overlap, followed by bandpass filtering between 0.5 and 45 Hz for noise removal. After preprocessing, both time- and frequency-domain features are extracted, and a custom one-dimensional Convolutional Neural Network with Bidirectional Long Short-Term Memory (1D CNN-BiLSTM) architecture is introduced. Handcrafted and automated features are concatenated into a unified feature vector and classified using Support Vector Machine (SVM), Random Forest (RF), and Long Short-Term Memory (LSTM) models. A Quantum Genetic Algorithm (QGA) optimizes weighted averages of the classifier outputs for multi-class classification, distinguishing among AF, noisy, normal, and other rhythms. Evaluated on the PhysioNet 2017 Cardiology Challenge dataset, the proposed method achieved an accuracy of 94.40% and an F1-score of 92.30%, outperforming several state-of-the-art techniques.
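The segmentation and filtering step described above can be sketched as follows. This is a minimal illustration, assuming a 300 Hz sampling rate (the rate used in the PhysioNet 2017 challenge recordings); the filter order and the synthetic signal are illustrative choices, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(sig, fs, lo=0.5, hi=45.0, order=4):
    # Zero-phase Butterworth band-pass between 0.5 and 45 Hz (order is an assumption)
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def segment(sig, fs, win_s=5.0, overlap=0.5):
    # Equal-sized segments from a 5-s sliding window with 50% overlap
    win = int(win_s * fs)
    hop = int(win * (1 - overlap))
    n = 1 + (len(sig) - win) // hop
    return np.stack([sig[i * hop : i * hop + win] for i in range(n)])

fs = 300  # PhysioNet 2017 ECG sampling rate
ecg = np.random.default_rng(0).standard_normal(30 * fs)  # 30 s of synthetic "ECG"
segments = segment(bandpass(ecg, fs), fs)  # 5-s windows, 50% overlap
```

A 30-second record at 300 Hz yields 1500-sample windows with a 750-sample hop, i.e. 11 segments.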
Funding: This work was funded through Research Group No. KS-2024-376.
Abstract: Arabic Sign Language (ArSL) recognition plays a vital role in enhancing communication for the Deaf and Hard of Hearing (DHH) community. Researchers have proposed multiple methods for automated recognition of ArSL; however, these methods face multiple challenges, including high gesture variability, occlusions, limited signer diversity, and the scarcity of large annotated datasets. Existing methods, often relying solely on either skeletal data or video-based features, struggle with generalization and robustness, especially in dynamic and real-world conditions. This paper proposes a novel multimodal ensemble classification framework that integrates geometric features derived from 3D skeletal joint distances and angles with temporal features extracted from RGB videos using the Inflated 3D ConvNet (I3D). By fusing these complementary modalities at the feature level and applying a majority-voting ensemble of XGBoost, Random Forest, and Support Vector Machine classifiers, the framework robustly captures both the spatial configurations and the motion dynamics of sign gestures. Feature selection using the Pearson Correlation Coefficient further enhances efficiency by reducing redundancy. Extensive experiments on the ArabSign dataset, which includes RGB videos and corresponding skeletal data, demonstrate that the proposed approach significantly outperforms state-of-the-art methods, achieving an average F1-score of 97% and improving recognition accuracy by more than 7% over previous best methods. This work not only advances the technical state of the art in ArSL recognition but also provides a scalable, real-time solution for practical deployment in educational, social, and assistive communication technologies. Although this study focuses on Arabic Sign Language, the proposed framework can be extended to other sign languages, creating possibilities for worldwide applicability in sign language recognition tasks.
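The two decision-level ingredients above, majority voting over several classifiers and Pearson-correlation redundancy filtering, can be sketched in a few lines. This is a minimal NumPy illustration with hypothetical predictions and a hypothetical 0.9 correlation threshold; the paper's actual classifiers (XGBoost, Random Forest, SVM) and threshold may differ.

```python
import numpy as np

def majority_vote(preds):
    # preds: (n_classifiers, n_samples) array of hard class labels.
    # Each column is tallied; ties resolve to the lowest label index.
    n_classes = preds.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)

def drop_redundant(X, threshold=0.9):
    # Greedy Pearson filter: keep a feature only if its absolute correlation
    # with every previously kept feature is at or below the threshold.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

# Three hypothetical classifiers voting over four samples
preds = np.array([[0, 1, 2, 2],
                  [0, 1, 1, 2],
                  [1, 1, 2, 0]])
fused = majority_vote(preds)  # per-sample majority label

# Column 2 is a scaled copy of column 0, so the filter drops it
X = np.array([[0., 1., 0.], [1., 0., 2.], [2., 1., 4.], [3., 0., 6.], [4., 1., 8.]])
_, kept = drop_redundant(X)
```

The same voting logic applies unchanged whether the per-classifier predictions come from XGBoost, Random Forest, or an SVM.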
Funding: Researchers Supporting Project Number (RSPD2024R553), King Saud University, Riyadh, Saudi Arabia.
Abstract: Wheat is a critical crop, extensively consumed worldwide, and its production enhancement is essential to meet escalating demand. Diseases such as stem rust, leaf rust, yellow rust, and tan spot significantly diminish wheat yield, making early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises from the scarcity of RGB images for multiple diseases, class imbalance in existing public datasets, and the difficulty of extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method based on Transfer Generative Adversarial Networks is proposed for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized architecture of Vision Transformers (ViT), where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks. This paper also proposes a Model-Agnostic Meta-Learning (MAML)-based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model performs better in terms of accuracy, F1-score, and the number of disease pathogens detected. In future work, more diseases can be included for detection, along with other targets such as pests and weeds.
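The GAN-based augmentation above targets class imbalance by generating synthetic images for under-represented classes. As a minimal sketch (the class counts below are hypothetical, not taken from the paper's datasets), the number of synthetic samples each class needs to match the majority class can be computed as:

```python
from collections import Counter

def synthetic_needed(labels):
    # How many generated samples each class needs to reach the majority-class count
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Hypothetical, imbalanced label list for illustration only
labels = (["stem_rust"] * 120 + ["leaf_rust"] * 80 +
          ["yellow_rust"] * 45 + ["tan_spot"] * 30)
need = synthetic_needed(labels)
# With these counts, tan spot is the most under-represented class,
# so it receives the largest number of synthetic images (90).
```

Balancing toward the majority class is one common choice; oversampling to a fixed per-class budget works the same way with a different `target`.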