Journal Articles
7 articles found
1. A Survey on Enhancing Image Captioning with Advanced Strategies and Techniques
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Sajid Shah, Mohammed ELAffendi. Computer Modeling in Engineering & Sciences, 2025, Issue 3, pp. 2247-2280 (34 pages)
Image captioning has seen significant research efforts over the last decade. The goal is to generate meaningful semantic sentences that describe the visual content depicted in photographs and are syntactically accurate. Many real-world applications rely on image captioning, such as helping people with visual impairments perceive their surroundings. To formulate a coherent and relevant textual description, computer vision techniques are used to comprehend the visual content within an image, followed by natural language processing methods. Numerous approaches and models have been developed to deal with this multifaceted problem, and several have proven to be state-of-the-art solutions in this field. This work offers a distinctive perspective emphasizing the most critical strategies and techniques for enhancing image caption generation. Rather than reviewing all previous image captioning work, we analyze techniques that yield significant performance improvements, including image captioning with visual attention methods, exploiting semantic information types in captions, and employing multi-caption generation techniques. Further, advancements such as neural architecture search, few-shot learning, multi-phase learning, and cross-modal embedding within image caption networks are examined for their transformative effects. The comprehensive quantitative analysis conducted in this study identifies cutting-edge methodologies and sheds light on their profound impact, driving forward the forefront of image captioning technology.
Keywords: Image captioning, semantic attention, multi-caption, natural language processing, visual attention methods
2. A Novelty Framework in Image-Captioning with Visual Attention-Based Refined Visual Features
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Mohammed ELAffendi, Sajid Shah. Computers, Materials & Continua, 2025, Issue 3, pp. 3943-3964 (22 pages)
Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach that enhances image captioning by introducing dynamic interactions in which visual features continuously adapt to the evolving linguistic context. Our model strengthens the alignment between visual and linguistic elements, resulting in more coherent and contextually appropriate captions. Specifically, we introduce two innovative modules: the Visual Weighting Module (VWM) and the Enhanced Features Attention Module (EFAM). The VWM adjusts visual features using partial attention, enabling dynamic reweighting of the visual inputs, while the EFAM further refines these features to improve their relevance to the generated caption. By continuously adjusting visual features in response to the linguistic context, our model bridges the gap between static visual features and dynamic language generation. We demonstrate the effectiveness of our approach through experiments on the MS-COCO dataset, where our method outperforms state-of-the-art techniques in terms of caption quality and contextual relevance. Our results show that dynamic visual-linguistic alignment significantly enhances image captioning performance.
Keywords: Image-captioning, visual attention, deep learning, visual features
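The dynamic reweighting idea described in this abstract can be illustrated with a minimal sketch: score each visual feature vector against the current linguistic-context vector, softmax the scores, and scale each feature by its weight. This is an illustrative stand-in only, not the authors' VWM/EFAM implementation; the function name and toy vectors are assumptions.

```python
import math

def reweight_visual_features(features, hidden, temperature=1.0):
    """Illustrative attention-based reweighting: dot-product scores
    between each visual feature and the linguistic-context vector,
    softmax-normalized, then used to scale the features."""
    scores = [sum(f_i * h_i for f_i, h_i in zip(f, hidden)) / temperature
              for f in features]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Each region's feature vector is scaled by its attention weight.
    return [[w * f_i for f_i in f] for w, f in zip(weights, features)]

features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy image regions
hidden = [1.0, 0.0]                              # toy linguistic context
out = reweight_visual_features(features, hidden)
```

Regions whose features align with the current context receive larger weights, so their contribution to the next decoding step grows, which is the intuition behind context-adaptive visual features.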
3. Deep Learning Models for Detecting Cheating in Online Exams
Authors: Siham Essahraui, Ismail Lamaakal, Yassine Maleh, Khalid El Makkaoui, Mouncef Filali Bouami, Ibrahim Ouahbi, May Almousa, Ali Abdullah S. Al Qahtani, Ahmed A. Abd El-Latif. Computers, Materials & Continua, 2025, Issue 11, pp. 3151-3183 (33 pages)
The rapid shift to online education has introduced significant challenges to maintaining academic integrity in remote assessments, as traditional proctoring methods fall short in preventing cheating. The increase in cheating during online exams highlights the need for efficient, adaptable detection models to uphold academic credibility. This paper presents a comprehensive analysis of deep learning models for cheating detection in online proctoring systems, evaluating their accuracy, efficiency, and adaptability. We benchmark several advanced architectures, including EfficientNet, MobileNetV2, and ResNet variants, among others, using two specialized datasets (OEP and OP) tailored for online proctoring contexts. Our findings reveal that EfficientNetB1 and YOLOv5 achieve top performance on the OP dataset, with EfficientNetB1 attaining a peak accuracy of 94.59% and YOLOv5 reaching a mean average precision (mAP@0.5) of 98.3%. On the OEP dataset, ResNet50-CBAM, YOLOv5, and EfficientNetB0 stand out, with ResNet50-CBAM achieving an accuracy of 93.61% and EfficientNetB0 showing robust detection performance with balanced accuracy and computational efficiency. These results underscore the importance of selecting models that balance accuracy and efficiency, supporting scalable, effective cheating detection in online assessments.
Keywords: Anti-cheating model, computer vision (CV), deep learning (DL), online exam proctoring, neural networks, facial recognition, biometric authentication, security of distance education
4. SGO-DRE: A Squid Game Optimization-Based Ensemble Method for Accurate and Interpretable Skin Disease Diagnosis
Authors: Areeba Masood Siddiqui, Hyder Abbas, Muhammad Asim, Abdelhamied A. Ateya, Hanaa A. Abdallah. Computer Modeling in Engineering & Sciences, 2025, Issue 9, pp. 3135-3168 (34 pages)
Timely and accurate diagnosis of skin diseases is crucial, as conventional methods are time-consuming and prone to errors. Traditional trial-and-error approaches often aggregate multiple models without optimization, resulting in suboptimal performance. To address these challenges, we propose a novel Squid Game Optimization Dimension-Reduction-based Ensemble (SGO-DRE) method for the precise diagnosis of skin diseases. Our approach begins by selecting the pre-trained models MobileNetV1, DenseNet201, and Xception for robust feature extraction. These models are enhanced with dimension reduction blocks to improve efficiency. To tackle the problem of aggregating multiple models, we leverage the Squid Game Optimization (SGO) algorithm, which iteratively searches for the optimal set of weights to assign to each individual model within the proposed weighted-average aggregation ensemble. The resulting ensemble effectively exploits the strengths of each model. We evaluated the proposed method on an 8-class skin disease dataset, the 6-class MSLD dataset, and the 4-class MSID dataset, achieving accuracies of 98.71%, 96.34%, and 93.46%, respectively. Additionally, we employed visual tools such as Grad-CAM, ROC curves, and precision-recall curves to interpret the models' decision-making and assess their performance. These evaluations ensure that the proposed method not only provides robust results but also enhances interpretability and reliability in clinical decision-making.
Keywords: Deep learning, squid game optimization, ensemble learning, skin disease, convolutional neural networks
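The weighted-average aggregation this abstract describes can be sketched in a few lines: combine per-model class probabilities with normalized weights, and search for the weight set that best serves an objective. The search below is a toy random search standing in for the SGO algorithm (which is a metaheuristic the sketch does not reproduce); the function names, objective, and toy predictions are assumptions.

```python
import random

def weighted_ensemble(predictions, weights):
    """Weighted-average aggregation of per-model class-probability
    vectors; weights are normalized so the result is a distribution."""
    total = sum(weights)
    norm = [w / total for w in weights]
    n_classes = len(predictions[0])
    return [sum(w * p[c] for w, p in zip(norm, predictions))
            for c in range(n_classes)]

def search_weights(predictions, label, iters=200, seed=0):
    """Toy random search standing in for Squid Game Optimization:
    sample candidate weight sets and keep the one that gives the true
    class the highest ensemble probability. Illustrative only."""
    rng = random.Random(seed)
    best_w, best_score = None, -1.0
    for _ in range(iters):
        w = [rng.random() for _ in predictions]
        score = weighted_ensemble(predictions, w)[label]
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

# Three toy "models" predicting over 3 classes; the true class is 0.
preds = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.7, 0.2, 0.1]]
weights, score = search_weights(preds, label=0)
```

In the paper's setting the objective would be validation accuracy of the full ensemble rather than a single sample's probability, but the structure, a search loop proposing weight sets and an aggregation function scoring them, is the same.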
5. AI-Driven Pattern Recognition in Medicinal Plants: A Comprehensive Review and Comparative Analysis
Authors: Mohd Asif Hajam, Tasleem Arif, Akib Mohi Ud Din Khanday, Mudasir Ahmad Wani, Muhammad Asim. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 2077-2131 (55 pages)
The pharmaceutical industry increasingly values medicinal plants due to their perceived safety and cost-effectiveness compared to modern drugs. Throughout the extensive history of medicinal plant usage, various plant parts, including flowers, leaves, and roots, have been acknowledged for their healing properties and employed in plant identification. Leaf images, however, stand out as the preferred and most easily accessible source of information. Manual plant identification by plant taxonomists is intricate, time-consuming, and prone to errors, relying heavily on human perception. Artificial intelligence (AI) techniques offer a solution by automating plant recognition processes. This study thoroughly examines cutting-edge AI approaches for leaf image-based plant identification, drawing insights from literature across renowned repositories. The paper critically summarizes relevant literature in terms of AI algorithms, extracted features, and results achieved, and analyzes the datasets most widely used in automated plant classification research. It also offers deep insights into the techniques and methods employed for medicinal plant recognition, and discusses the opportunities and challenges of these AI-based approaches. Furthermore, in-depth statistical findings and lessons learned from this survey are highlighted, along with novel research areas, with the aim of informing readers and motivating new research directions. This review is expected to serve as a foundational resource for future researchers in the field of AI-based identification of medicinal plants.
Keywords: Pattern recognition, artificial intelligence, machine learning, deep learning, image processing, plant leaf identification
6. A Concise and Varied Visual Features-Based Image Captioning Model with Visual Selection
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Naveed Ahmed, Mohammed Ali Alshara. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 2873-2894 (22 pages)
Image captioning has gained increasing attention in recent years. Visual characteristics found in input images play a crucial role in generating high-quality captions. Prior studies have used visual attention mechanisms to dynamically focus on localized regions of the input image, improving the effectiveness of identifying relevant image regions at each step of caption generation. However, giving image captioning models the capability to select the most relevant visual features from the input image and attend to them can significantly improve the utilization of these features and, consequently, captioning network performance. In light of this, we present an image captioning framework that efficiently exploits the extracted representations of the image. Our framework comprises three key components: the Visual Feature Detector (VFD) module, the Visual Feature Visual Attention (VFVA) module, and the language model. The VFD module detects a subset of the most pertinent local visual features, creating an updated visual features matrix. Subsequently, the VFVA attends to the visual features matrix generated by the VFD, producing an updated context vector used by the language model to generate an informative description. Integrating the VFD and VFVA modules introduces an additional layer of processing for the visual features, thereby enhancing the image captioning model's performance. Experiments on the MS-COCO dataset show that the proposed framework competes well with state-of-the-art methods, effectively leveraging visual representations to improve performance. The implementation code is available at https://github.com/althobhani/VFDICM (accessed on 30 July 2024).
Keywords: Visual attention, image captioning, visual feature detector, visual feature visual attention
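The VFD step, selecting a subset of the most pertinent local features before attention is applied, can be sketched as a simple top-k filter. This is an illustrative stand-in, not the paper's learned detector: here relevance is approximated by each feature vector's squared L2 norm, and the function name and toy data are assumptions.

```python
def select_top_features(features, k):
    """Illustrative visual-feature-detector step: score each local
    feature by its squared L2 norm (a stand-in for a learned relevance
    score) and keep the k highest-scoring ones, preserving order."""
    ranked = sorted(range(len(features)),
                    key=lambda i: sum(x * x for x in features[i]),
                    reverse=True)
    keep = sorted(ranked[:k])  # restore original spatial ordering
    return [features[i] for i in keep]

# Four toy local feature vectors; the two with the largest norms survive.
features = [[0.1, 0.1], [3.0, 4.0], [0.0, 2.0], [1.0, 1.0]]
subset = select_top_features(features, k=2)
```

The reduced matrix would then feed an attention module (as in the VFVA), so attention weights are spent only on regions the detector judged relevant.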
7. Hybrid Fusion Net with Explanability: A Novel Explainable Deep Learning-Based Hybrid Framework for Enhanced Skin Lesion Classification Using Dermoscopic Images
Authors: Mohamed Hammad, Mohammed El Affendi, Souham Meshoul. Computer Modeling in Engineering & Sciences, 2025, Issue 10, pp. 1055-1086 (32 pages)
Skin cancer is among the most common malignancies worldwide, but its mortality burden is largely driven by aggressive subtypes such as melanoma, with outcomes varying across regions and healthcare settings. These variations emphasize the importance of reliable diagnostic technologies that support clinicians in detecting skin malignancies with higher accuracy. Traditional diagnostic methods often rely on subjective visual assessments, which can lead to misdiagnosis. This study addresses these challenges by developing HybridFusionNet, a novel model that integrates convolutional neural networks (CNNs) with 1D feature extraction techniques to enhance diagnostic accuracy. Using two extensive datasets, BCN20000 and HAM10000, the methodology includes data preprocessing, application of the Synthetic Minority Oversampling Technique combined with Edited Nearest Neighbors (SMOTEENN) for data balancing, and feature-selection optimization using the Tree-based Pipeline Optimization Tool (TPOT). The results demonstrate significant performance improvements over traditional CNN models, achieving an accuracy of 0.9693 on the BCN20000 dataset and 0.9909 on the HAM10000 dataset. The HybridFusionNet model not only outperforms conventional methods but also effectively addresses class imbalance. To enhance transparency, it integrates post-hoc explanation techniques such as LIME, which highlight the features influencing predictions. These findings highlight the potential of HybridFusionNet to support real-world applications, including physician-assist systems, teledermatology, and large-scale skin cancer screening programs. By improving diagnostic efficiency and enabling access to expert-level analysis, the model may enhance patient outcomes and foster greater trust in artificial intelligence (AI)-assisted clinical decision-making.
Keywords: AI, CNN, deep learning, image classification, model optimization, skin cancer detection
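The hybrid-fusion idea, combining a CNN image embedding with a separate 1D feature vector into one representation, can be sketched as scaled concatenation. This is an assumption-laden illustration, not the HybridFusionNet architecture: the paper fuses learned features inside the network, whereas here per-vector min-max scaling before concatenation merely shows why some normalization is needed so neither source dominates.

```python
def fuse_features(cnn_embedding, handcrafted_1d):
    """Illustrative fusion: min-max scale each source vector to [0, 1],
    then concatenate into a single representation for a classifier."""
    def minmax(v):
        lo, hi = min(v), max(v)
        if hi == lo:
            return [0.0 for _ in v]  # constant vector carries no signal
        return [(x - lo) / (hi - lo) for x in v]
    return minmax(cnn_embedding) + minmax(handcrafted_1d)

# Toy CNN embedding (unit scale) and 1D handcrafted features (large scale).
fused = fuse_features([0.2, 0.8, 0.5], [10.0, 30.0])
```

Without scaling, the handcrafted values in the tens would swamp the sub-unit embedding in any distance-based or linear downstream model; after scaling, both sources contribute on comparable terms.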