Funding: This work was funded by the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, and by the Ongoing Research Funding Program (ORF-2025-14), King Saud University, Riyadh, Saudi Arabia.
Abstract: Face liveness detection is essential for securing biometric authentication systems against spoofing attacks, including printed photos, replay videos, and 3D masks. This study systematically evaluates pre-trained CNN models (DenseNet201, VGG16, InceptionV3, ResNet50, VGG19, MobileNetV2, Xception, and InceptionResNetV2), leveraging transfer learning and fine-tuning to enhance liveness detection performance. The models were trained and tested on the NUAA and Replay-Attack datasets, with cross-dataset generalization validated on SiW-MV2 to assess real-world adaptability. Performance was evaluated using accuracy, precision, recall, FAR, FRR, HTER, and specialized spoof detection metrics (APCER, NPCER, ACER). Fine-tuning significantly improved detection accuracy, with DenseNet201 achieving the highest performance (98.5% on NUAA, 97.71% on Replay-Attack), while MobileNetV2 proved the most efficient model for real-time applications (latency: 15 ms, memory usage: 45 MB, energy consumption: 30 mJ). A statistical significance analysis (paired t-tests, confidence intervals) validated these improvements. Cross-dataset experiments identified DenseNet201 and MobileNetV2 as the most generalizable architectures, with DenseNet201 achieving 86.4% accuracy on Replay-Attack when trained on NUAA, demonstrating robust feature extraction and adaptability. In contrast, ResNet50 showed lower generalization capability, struggling with dataset variability and complex spoofing attacks. These findings suggest that MobileNetV2 is well suited for low-power applications, while DenseNet201 is ideal for high-security environments requiring superior accuracy. This research provides a framework for improving real-time face liveness detection, enhancing biometric security, and guiding future advancements in AI-driven anti-spoofing techniques.
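Most of the metrics above are simple functions of two base error rates. As a minimal illustrative sketch (not the paper's evaluation code), assuming the labeling convention 1 = live and 0 = attack, they can be computed from hard predictions as follows; with a single attack type, APCER and NPCER reduce to FAR and FRR, so ACER coincides with HTER:

```python
import numpy as np

def antispoofing_metrics(y_true, y_pred):
    """FAR, FRR, HTER and APCER, NPCER, ACER for a binary liveness
    classifier. Convention (an assumption): 1 = live/bona fide, 0 = attack."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    attack = y_true == 0
    live = y_true == 1

    far = float(np.mean(y_pred[attack] == 1))  # attacks accepted as live
    frr = float(np.mean(y_pred[live] == 0))    # live samples rejected

    hter = (far + frr) / 2                     # HTER = (FAR + FRR) / 2
    # With one attack type, APCER = FAR and NPCER = FRR, so
    # ACER = (APCER + NPCER) / 2 equals HTER here.
    return {"FAR": far, "FRR": frr, "HTER": hter,
            "APCER": far, "NPCER": frr, "ACER": hter}

print(antispoofing_metrics([1, 1, 0, 0, 0], [1, 0, 0, 1, 0]))
```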
Abstract: Many applications, including security systems, medical diagnostics, and human-computer interfaces, depend on eye gaze recognition. However, due to factors such as individual variation, occlusion, and shifting illumination conditions, real-world scenarios continue to pose difficulties for accurate and consistent eye gaze recognition. This work investigates the potential benefits of employing transfer learning to improve eye gaze detection accuracy and efficiency. Transfer learning is the process of fine-tuning models on smaller, domain-specific datasets after they have been pre-trained on larger datasets. We study several transfer learning algorithms and evaluate their effectiveness on eye gaze identification, covering both regression and classification tasks, using a range of deep learning architectures, namely AlexNet, Visual Geometry Group (VGG), InceptionV3, and ResNet. We evaluate transfer learning-based models against models trained from scratch on eye-gaze datasets using various performance and loss metrics, such as precision, accuracy, and mean absolute error. We also investigate the effects of different pre-trained models, dataset sizes, and domain gaps on the transfer learning process. The findings of our study clarify the efficacy of transfer learning for eye gaze detection and offer suggestions for the most successful transfer learning strategies to apply in real-world situations.
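To make the transfer-learning setup concrete, here is a minimal Keras sketch of the approach the abstract describes: an ImageNet-pretrained backbone with a small task head, adapted for gaze regression. The input size, head width, and two-angle output are illustrative assumptions, not the paper's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# ImageNet-pretrained ResNet50 as a frozen backbone.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # feature extraction first; unfreeze top layers to fine-tune

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
gaze = layers.Dense(2)(x)  # regression head: (yaw, pitch) gaze angles

model = models.Model(base.input, gaze)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="mae",        # mean absolute error, one of the reported metrics
              metrics=["mae"])
# For the classification variant, swap the head for
# layers.Dense(num_gaze_regions, activation="softmax") with a cross-entropy loss.
```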
Funding: This research was funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project (PNURSP2025R821), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The precise identification of date palm tree diseases is essential for maintaining agricultural productivity and promoting sustainable farming methods. Conventional approaches rely on visual examination by experts to detect infected palm leaves, which is time intensive and susceptible to mistakes. This study proposes an automated leaf classification system that uses deep learning algorithms to identify and categorize diseases in date palm tree leaves with high precision and dependability. The system leverages pretrained convolutional neural network architectures (InceptionV3, DenseNet, and MobileNet) to extract and examine leaf characteristics for classification. A publicly accessible dataset comprising multiple classes of diseased and healthy date palm leaf samples was used for training and assessment. Data augmentation techniques were implemented to enlarge the dataset and improve model resilience. In addition, the Synthetic Minority Oversampling Technique (SMOTE) was applied to address class imbalance and further improve classification performance. The system was trained and evaluated on this dataset, and two of the models, DenseNet and MobileNet, achieved classification accuracies greater than 95%. MobileNetV2 emerged as the top-performing model among those assessed, achieving an overall accuracy of 96.99% and a macro-average F1-score of 0.97. All nine categories of date palm leaf conditions were consistently and accurately identified, showing exceptional precision and dependability. Comparative experiments were conducted to assess the performance of the convolutional neural network (CNN) architectures and demonstrate their potential for scalable and automated disease detection. This system has the potential to serve as a valuable agricultural tool for assisting in disease management and monitoring date palm cultivation.
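SMOTE balances classes by interpolating synthetic samples between existing minority-class neighbors. The sketch below assumes it is applied to fixed-length feature vectors (e.g., pooled CNN embeddings) rather than raw images; the data is a random stand-in, and the paper's actual pipeline may differ:

```python
from collections import Counter

import numpy as np
from imblearn.over_sampling import SMOTE

# Hypothetical stand-in data: one 1280-D embedding per leaf image
# (e.g., pooled MobileNet features) and nine condition labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(900, 1280))
y = rng.integers(0, 9, size=900)

print("class counts before:", Counter(y))
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("class counts after: ", Counter(y_res))
```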
Funding: The Deanship of Graduate Studies and Scientific Research at Qassim University provided financial support (QU-APC-2024-9/1).
Abstract: The rapid spread of COVID-19 has emphasized the need for effective and precise diagnostic tools. This article proposes a hybrid approach, in terms of both datasets and methodology, that utilizes a previously unexplored dataset obtained from a private hospital for detecting COVID-19, pneumonia, and normal conditions in chest X-ray images (CXIs), coupled with Explainable Artificial Intelligence (XAI). Our study requires minimal preprocessing and leverages pre-trained cutting-edge models, such as InceptionV3, VGG16, and VGG19, that excel at feature extraction. The methodology is further enhanced by t-SNE (t-Distributed Stochastic Neighbor Embedding) for visualizing the extracted image features and Contrast Limited Adaptive Histogram Equalization (CLAHE) to improve images before feature extraction. Additionally, an attention mechanism is utilized, which helps clarify how the model makes decisions and builds trust in artificial intelligence (AI) systems. To evaluate the effectiveness of the proposed approach, both benchmark datasets and a private dataset obtained with permission from Jinnah Postgraduate Medical Center (JPMC) in Karachi, Pakistan, are utilized. In 12 experiments, VGG19 showcased remarkable performance in the hybrid dataset approach, achieving 100% accuracy in COVID-19 vs. pneumonia classification and 97% in distinguishing normal cases. Overall, across all classes, the approach achieved 98% accuracy, demonstrating its efficiency in detecting COVID-19 and differentiating it from other chest conditions (pneumonia and healthy) while also providing insights into the decision-making process of the models.
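A rough sketch of the preprocessing-plus-visualization stage described above: CLAHE to enhance contrast, a frozen VGG19 to extract features, and t-SNE to project them to 2-D. The folder name and CLAHE parameters are hypothetical, not the paper's settings:

```python
import glob

import cv2
import numpy as np
from sklearn.manifold import TSNE
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input

def enhance(path):
    """CLAHE contrast enhancement; clip limit and tile grid are illustrative."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
    return cv2.cvtColor(cv2.resize(img, (224, 224)), cv2.COLOR_GRAY2RGB)

paths = sorted(glob.glob("cxr/*.png"))  # hypothetical folder of chest X-rays
batch = preprocess_input(np.stack([enhance(p) for p in paths]).astype("float32"))

# Frozen VGG19 as a feature extractor, then t-SNE to embed the features in 2-D.
features = VGG19(weights="imagenet", include_top=False, pooling="avg").predict(batch)
embedding = TSNE(n_components=2, perplexity=min(30, len(paths) - 1)).fit_transform(features)
```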
Abstract: Handwritten character recognition (HCR) involves identifying characters in images, documents, and other sources such as forms, surveys, questionnaires, and signatures, and transforming them into a machine-readable format for subsequent processing. Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle. Recent developments using convolutional neural networks (CNNs) have notably advanced HCR by leveraging their ability to extract discriminative features from extensive sets of raw data. Because no pre-existing dataset for the Kurdish language was available, we created a Kurdish handwritten dataset called KurdSet. The dataset covers Kurdish characters, digits, texts, and symbols; it was collected from 1560 participants and contains 45,240 characters. In this study, we used only the characters from our dataset for handwritten character recognition. The study utilizes various models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. To benchmark KurdSet, we compared it to the Arabic handwritten character recognition dataset (AHCD), applying the models to both datasets. Model performance is evaluated using test accuracy, which measures the percentage of correctly classified characters in the evaluation phase. All models performed well in the training phase; DenseNet121 exhibited the highest accuracy among the models, achieving 99.80% on the Kurdish dataset, while Xception achieved 98.66% on the Arabic dataset.
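For reference, a minimal custom-CNN baseline of the kind the abstract mentions might look like the sketch below; the class count, input size, and layer widths are assumptions, not the architecture used for KurdSet:

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 34           # hypothetical count of Kurdish character classes
INPUT_SHAPE = (64, 64, 1)  # illustrative grayscale character crops

# A small stack of conv/pool blocks feeding a dense classifier.
model = tf.keras.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Test accuracy as reported above is the fraction of held-out characters
# classified correctly, i.e. model.evaluate(x_test, y_test)[1].
```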