Journal Articles (4 articles found)
1. Research on Using GPT to Build an Intelligent Large-Language-Model Fund-of-Funds Investment Decision Support System (cited by 2)
Authors: Zhu Zhengxiong, Zhou Zhihong. 《计算机应用与软件》 (Computer Applications and Software), PKU Core journal, 2024, No. 5, pp. 21-26 (6 pages)
This paper builds a GPT-based intelligent large-language-model fund decision support system to cope effectively with the complexity of financial markets. Using large language model technology, the system delivers in-depth insights into funds and projects while quantifying risk. Built with Python and Flask and integrating the OpenAI GPT model, it uses an embedded-AI pattern to improve the effectiveness of pre-investment management, initial screening, scoring, and risk assessment, with the aim of raising overall decision-making efficiency. Future work includes optimizing the model and extending the system. This innovation will advance the digital transformation of the financial sector and further improve investment efficiency.
Keywords: artificial-intelligence large language model; GPT; Transformer; OpenAI; Python; Flask; fund of funds
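The architecture the abstract describes (a Flask service wrapping an OpenAI GPT call for screening and scoring) can be pictured roughly as follows. This is a minimal sketch, assuming the openai>=1.0 Python SDK; the endpoint name, prompt wording, and model choice are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of the "embedded AI" pattern: a Flask endpoint that
# forwards fund/project facts to a GPT model for initial screening.
# Endpoint, prompt, and schema are illustrative assumptions.
from flask import Flask, request, jsonify
from openai import OpenAI  # assumes the openai>=1.0 Python SDK

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment


@app.route("/prescreen", methods=["POST"])
def prescreen():
    fund = request.get_json()  # e.g. {"name": ..., "strategy": ..., "aum": ...}
    prompt = (
        "You are a fund-of-funds analyst. Given the fund facts below, "
        "return a 0-100 screening score and a one-line risk note.\n"
        f"Fund facts: {fund}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable GPT model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"assessment": completion.choices[0].message.content})


if __name__ == "__main__":
    app.run(port=5000)
```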
2. An Approach to Optimizing the Logical Consistency of Generative Pre-Trained Transformer Models (cited by 1)
Author: Zhang Zhaotian. 《信息与电脑》 (Information & Computer), 2024, No. 4, pp. 50-52 (3 pages)
As a pre-trained model built on the Transformer architecture, the Generative Pre-Trained Transformer (GPT) has achieved great success on natural language processing tasks. However, because it relies on a locally greedy process of generating the next token, its global understanding of the task and output, its logical reasoning, and its compliance with ethical and legal constraints are insufficient. To improve the logical consistency and reliability of the computation, this paper examines GPT's generative computation process, analyzes the logical limitations of its results, and introduces a class of optimized structures that hybridize GPT with a logic-based computation model.
Keywords: Generative Pre-Trained Transformer (GPT) model; logical consistency; optimized structure
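One way to picture the hybrid structure the abstract gestures at is a generate-then-verify loop: the generative model proposes an output token-greedily, and a symbolic checker rejects outputs that violate stated constraints, forcing a regeneration. This is a toy sketch under that assumption; the stub generator, constraint format, and retry policy are all illustrative, not the paper's design.

```python
# Toy sketch: greedy generation wrapped by a logical-consistency check.
# The generator stub and constraints stand in for a real LLM and a real
# logic computation model (assumptions, not the paper's code).
from typing import Callable, List


def generate_with_logic_check(
    generate: Callable[[str], str],            # greedy LLM call (stubbed below)
    constraints: List[Callable[[str], bool]],  # logical predicates on the output
    prompt: str,
    max_retries: int = 3,
) -> str:
    feedback = prompt
    for _ in range(max_retries):
        candidate = generate(feedback)
        violated = [c for c in constraints if not c(candidate)]
        if not violated:
            return candidate  # passes every logical constraint
        # fold the violation back into the prompt and try again
        feedback = prompt + f"\nPrevious answer '{candidate}' violated a constraint; fix it."
    raise RuntimeError("no logically consistent output within retry budget")


# toy usage: the 'model' must output an even number
answers = iter(["7", "8"])
result = generate_with_logic_check(
    generate=lambda p: next(answers),
    constraints=[lambda s: s.isdigit() and int(s) % 2 == 0],
    prompt="Give an even number.",
)
print(result)  # -> "8"
```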
3. Unlocking the Potential: A Comprehensive Systematic Review of ChatGPT in Natural Language Processing Tasks
Author: Ebtesam Ahmad Alomari. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 43-85 (43 pages)
As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been a notable growth in research activity. This rapid uptake reflects increasing interest in the field and induces critical inquiries into ChatGPT's applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part-of-Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition, and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT's adaptability, limitations, and optimal application. In this paper, we employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and seek relevant studies. Our review reveals ChatGPT's significant potential in enhancing various NLP tasks. Its adaptability in information extraction tasks, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT's flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as a complementary mechanism, empowering users to guide the model and enhance overall accuracy. Despite its promising potential, challenges persist. The performance of ChatGPT needs to be tested using more extensive datasets and diverse data structures. Subsequently, its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigations to address these issues.
Keywords: generative AI; large language model (LLM); natural language processing (NLP); ChatGPT; GPT (Generative Pre-trained Transformer); GPT-4; sentiment analysis; NER; information extraction; annotation; text classification
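As a concrete illustration of the prompt-engineering pattern the review credits (steering GPT-4 toward a fixed label set), here is a hedged zero-shot sentiment-classification sketch, again assuming the openai>=1.0 Python SDK. The system prompt, model name, and label set are assumptions, not the review's protocol.

```python
# Illustrative prompt-engineering sketch: constrain the model's output to a
# fixed label vocabulary for zero-shot sentiment analysis.
from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()


def classify_sentiment(text: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4",  # the review pairs GPT-4 with prompt engineering
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": f"Sentiment of: {text!r}"},
        ],
        temperature=0,  # deterministic labels help annotation consistency
    )
    return completion.choices[0].message.content.strip().lower()


print(classify_sentiment("The new firmware fixed every crash I had."))
```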
4. GPT-NAS: Neural Architecture Search Meets Generative Pre-Trained Transformer Model
Authors: Caiyang Yu, Xianggen Liu, Yifan Wang, Yun Liu, Wentao Feng, Xiong Deng, Chenwei Tang, Jiancheng Lv. Big Data Mining and Analytics, 2025, No. 1, pp. 45-64 (20 pages)
The pursuit of optimal neural network architectures is foundational to the progression of Neural Architecture Search (NAS). However, existing NAS methods that use traditional search strategies suffer from a common problem: when facing a large and complex search space, it is difficult to mine more effective architectures within a reasonable time, resulting in inferior search results. This research introduces the Generative Pre-trained Transformer NAS (GPT-NAS), an innovative approach designed to overcome the limitations inherent in traditional NAS strategies. This approach improves search efficiency and obtains better architectures by integrating a GPT model into the search process. Specifically, we design a reconstruction strategy that utilizes the trained GPT to reorganize the architectures obtained from the search. In addition, to equip the GPT model with the design capabilities of neural architecture, we propose training the GPT model on a neural architecture dataset. For each architecture, the structural information of its previous layers is utilized to predict the next layer of structure, iteratively traversing the entire architecture. In this way, the GPT model can efficiently learn the key features required for neural architectures. Extensive experimental validation shows that our GPT-NAS approach beats both manually constructed neural architectures and architectures automatically generated by NAS. In addition, we validate the superiority of introducing the GPT model in several ways, and find that after introducing the GPT model, the accuracy of the searched neural architecture on the image dataset is improved by up to about 9%.
Keywords: Neural Architecture Search (NAS); Generative Pre-trained Transformer (GPT) model; evolutionary algorithm; image classification
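The layer-by-layer idea in the abstract (use the structure of the previous layers to predict the next layer, iterating over the whole architecture) can be made concrete with a toy sketch. The bigram frequency model below is a deliberately simple stand-in assumption for the trained GPT, and the layer vocabulary is invented for illustration; neither comes from the paper.

```python
# Toy sketch of next-layer prediction over an architecture dataset:
# train a bigram table (a stand-in for the GPT), then "reconstruct" an
# architecture by greedy iterative next-layer prediction.
from collections import Counter, defaultdict
from typing import Dict, List


def train_next_layer_model(architectures: List[List[str]]) -> Dict[str, Counter]:
    counts: Dict[str, Counter] = defaultdict(Counter)
    for arch in architectures:
        for prev, nxt in zip(arch, arch[1:]):
            counts[prev][nxt] += 1  # how often `nxt` follows `prev`
    return counts


def reconstruct(model: Dict[str, Counter], start: str, length: int) -> List[str]:
    arch = [start]
    for _ in range(length - 1):
        options = model.get(arch[-1])
        if not options:
            break
        arch.append(options.most_common(1)[0][0])  # greedy next-layer choice
    return arch


dataset = [
    ["conv3x3", "bn", "relu", "conv3x3", "bn", "relu", "pool"],
    ["conv3x3", "bn", "relu", "pool", "fc"],
]
model = train_next_layer_model(dataset)
print(reconstruct(model, "conv3x3", 6))
# -> ['conv3x3', 'bn', 'relu', 'pool', 'fc']
```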