Journal Articles
3 articles found
1. Image Captioning Using Multimodal Deep Learning Approach
Authors: Rihem Farkh, Ghislain Oudinet, Yasser Foued — Computers, Materials & Continua (SCIE, EI), 2024, Issue 12, pp. 3951-3968 (18 pages)
The process of generating descriptive captions for images has advanced significantly in recent years, owing to progress in deep learning techniques. Despite these advances, thoroughly grasping image content and producing coherent, contextually relevant captions remains a substantial challenge. In this paper, we introduce a novel multimodal method for image captioning that integrates three powerful deep learning architectures: YOLOv8 (You Only Look Once) for robust object detection, EfficientNetB7 for efficient feature extraction, and Transformers for effective sequence modeling. The proposed model combines the strengths of YOLOv8 in detecting objects, the superior feature representation capabilities of EfficientNetB7, and the contextual understanding and sequential generation abilities of Transformers. We conduct extensive experiments on standard benchmark datasets, demonstrating that the approach generates informative and semantically rich captions for a diverse range of images and achieves state-of-the-art results in image captioning tasks. The significance of this approach lies in its ability to produce coherent, contextually relevant captions while achieving a comprehensive understanding of image content. The integration of the three architectures demonstrates the synergistic benefits of multimodal fusion, opening new avenues for research in multimodal deep learning and paving the way for more sophisticated, context-aware image captioning systems, with potential contributions to human-computer interaction, computer vision, and natural language processing.
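The fusion the abstract describes (detector labels plus a global CNN feature vector serving as context for a sequence decoder) can be sketched in a framework-agnostic way. NumPy stand-ins replace the real YOLOv8, EfficientNetB7, and Transformer components here; every function name, vocabulary entry, and dimension below is an illustrative assumption, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_objects(image):
    # Stand-in for YOLOv8: would return detected class labels;
    # here we fake two detections for illustration.
    return ["dog", "frisbee"]

def extract_features(image):
    # Stand-in for EfficientNetB7 with global pooling: a single
    # feature vector (2560-d in the real network; 8-d to keep
    # the sketch small).
    return rng.standard_normal(8)

# Toy label vocabulary and embedding table for the detected objects.
VOCAB = {"<pad>": 0, "dog": 1, "frisbee": 2}
EMBED = rng.standard_normal((len(VOCAB), 8))

def encoder_context(image):
    # Multimodal fusion: mean of object-label embeddings concatenated
    # with the global CNN feature vector -> 16-d context that a
    # Transformer decoder would attend to when generating the caption.
    labels = detect_objects(image)
    label_vec = np.mean([EMBED[VOCAB[lab]] for lab in labels], axis=0)
    return np.concatenate([label_vec, extract_features(image)])

ctx = encoder_context(image=None)
print(ctx.shape)  # (16,)
```

In a real implementation the concatenated context would be projected to the decoder's model dimension and consumed via cross-attention; the sketch only shows the fusion step the abstract emphasizes.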
Keywords: image captioning; multimodal methods; YOLOv8; EfficientNetB7; feature extraction; Transformers; encoder; decoder; Flickr8k
2. Analysis of Fragrance Composition in Three Cultivars of Osmanthus fragrans Albus Group Flower by Gas Chromatography-Mass Spectrometry (Cited by: 13)
Authors: LI Fafang, HUANG Qizhi — Wuhan University Journal of Natural Sciences (CAS), 2011, Issue 4, pp. 342-348 (7 pages)
With supercritical CO2 fluid extraction (SCFE), essential oil was extracted from three cultivars of Xianning osmanthus. The fresh osmanthus flower was also processed with a petroleum ether digestion method to produce the extractum. The yields of essential oil and extractum were 0.19% and 0.13% (m/m), respectively. The fragrance composition and content of both extracts were analyzed by gas chromatography-mass spectrometry (GC-MS). The results showed that the essential oil contained 36.99% (area/total area) ionone and ionol and 13.11% linalool; in the extractum, ionone and ionol were as high as 33.33%, and linalool reached 21.92%. Both the essential oil and the extractum contained only about 40% fatty acids and other esters. No environmental estrogens (phthalate esters) were found among the fragrance ingredients. The results also showed that the quality of O. fragrans Albus group fragrance from Xianning is better than that produced in the Hangzhou and Anhui districts.
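The reported yields are simple mass-to-mass ratios. The quick check below invents the input masses purely for illustration; only the 0.19% and 0.13% figures come from the abstract.

```python
# m/m yield = mass of extracted product / mass of fresh flower, as a percent.
def yield_percent(extract_g, flower_g):
    return round(100 * extract_g / flower_g, 2)

# Hypothetical masses chosen so the ratios reproduce the reported yields.
print(yield_percent(1.9, 1000.0))  # 0.19  (essential oil, SCFE)
print(yield_percent(1.3, 1000.0))  # 0.13  (extractum, petroleum ether)
```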
Keywords: Xianning; Osmanthus fragrans; supercritical CO2 fluid extraction (SCFE); extractum; gas chromatography-mass spectrometry (GC-MS)
3. Extract of Ginkgo biloba promotes neuronal regeneration in the hippocampus after exposure to acrylamide (Cited by: 5)
Authors: Wei-ling Huang, Yu-xin Ma, Yu-bao Fan, Sheng-min Lai, Hong-qing Liu, Jing Liu, Li Luo, Guo-ying Li, Su-min Tian — Neural Regeneration Research (SCIE, CAS, CSCD), 2017, Issue 8, pp. 1287-1293 (7 pages)
Previous studies have demonstrated a neuroprotective effect of extract of Ginkgo biloba against neuronal damage, but have mainly focused on its antioxidant action. To date, few studies have determined whether extract of Ginkgo biloba has a protective effect on neuronal damage beyond antioxidation. In the present study, acrylamide and 30, 60, or 120 mg/kg extract of Ginkgo biloba were administered by gavage for 4 weeks to establish mouse models. Our results showed that 30, 60, and 120 mg/kg extract of Ginkgo biloba effectively alleviated the abnormal gait of the poisoned mice and up-regulated protein expression of doublecortin (DCX), brain-derived neurotrophic factor, and growth-associated protein-43 (GAP-43) in the hippocampus. Simultaneously, the numbers of DCX- and GAP-43-immunoreactive cells increased. These findings suggest that extract of Ginkgo biloba can mitigate acrylamide-induced neurotoxicity and thereby promote neuronal regeneration in the hippocampus of acrylamide-treated mice.
Keywords: nerve regeneration; brain injury; extract of Ginkgo biloba; acrylamide; doublecortin; brain-derived neurotrophic factor; growth-associated protein-43; neuronal damage; hippocampus; mice; neural regeneration