Journal Articles
3 articles found
1. An Investigation into the Issues Concerning the Copyright of Content Generated by Text-to-Video AI
Authors: Zhou Chunguang, Yi Jia. Contemporary Social Sciences, 2024, No. 5, pp. 95-117 (23 pages).
Abstract: Text-to-video artificial intelligence (AI) is a new product of the continuous development of digital technology in recent years. The emergence of various text-to-video AI models, including Sora, is driving the proliferation of content generated through concrete imagery. However, content generated by text-to-video AI raises significant issues such as unclear work identification, ambiguous copyright ownership, and widespread copyright infringement. These issues can hinder the development of text-to-video AI in the creative fields and impede the prosperity of China's social and cultural arts. This paper therefore proposes three recommendations within a legal framework: (a) categorizing the content generated by text-to-video AI as audiovisual works; (b) clarifying the copyright ownership model for text-to-video AI works; (c) reasonably delineating the responsibilities of the parties involved in text-to-video AI works. The aim is to mitigate the copyright risks associated with content generated by text-to-video AI and to promote its healthy development in the creative fields.
Keywords: text-to-video AI; work identification; copyright ownership; copyright infringement
2. Optimizing Semantic and Texture Consistency in Video Generation
Authors: Xian Yu, Jianxun Zhang, Siran Tian, Xiaobao He. Computers, Materials & Continua, 2025, No. 10, pp. 1883-1897 (15 pages).
Abstract: In recent years, diffusion models have achieved remarkable progress in image generation. However, extending them to text-to-video (T2V) generation remains challenging, particularly in maintaining semantic consistency and visual quality across frames. Existing approaches often overlook the synergy between high-level semantics and low-level texture information, resulting in blurry or temporally inconsistent outputs. To address these issues, we propose Dual Consistency Training (DCT), a novel framework designed to jointly optimize semantic and texture consistency in video generation. Specifically, we introduce a multi-scale spatial adapter to enhance spatial feature extraction, and leverage the complementary strengths of CLIP and VGG, where CLIP focuses on high-level semantics and VGG captures fine-grained texture and detail. During training, a stepwise strategy is adopted to impose semantic and texture losses, constraining discrepancies between generated and ground-truth frames. Furthermore, we propose CLWS, which dynamically adjusts the balance between semantic and texture losses to facilitate more stable and effective optimization. Remarkably, DCT achieves high-quality video generation using only a single training video on a single NVIDIA A6000 GPU. Extensive experiments demonstrate that our method significantly improves temporal coherence and visual fidelity across various video generation tasks, verifying its effectiveness and generalizability.
Keywords: diffusion model; dynamic weighting; text-to-video; one-shot
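The DCT abstract above describes a stepwise strategy that balances a CLIP-based semantic loss against a VGG-based texture loss with dynamic weights. The abstract does not expand or define CLWS, so the following is only a minimal hypothetical sketch of one such dynamic weighting scheme; the function name and the linear schedule are illustrative assumptions, not the paper's actual method:

```python
def dual_consistency_loss(semantic_loss: float, texture_loss: float,
                          step: int, total_steps: int) -> float:
    """Blend semantic and texture losses with a stepwise dynamic weight.

    Early steps emphasize high-level semantics; later steps shift weight
    toward fine-grained texture. A linear schedule is assumed here purely
    for illustration.
    """
    beta = step / total_steps   # texture weight grows from 0 to 1
    alpha = 1.0 - beta          # semantic weight decays from 1 to 0
    return alpha * semantic_loss + beta * texture_loss
```

For example, with a semantic loss of 2.0 and a texture loss of 4.0 over 10 steps, the blended loss moves linearly from 2.0 at step 0 to 4.0 at the final step.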
3. Exploring the Latest Applications of OpenAI and ChatGPT: An In-Depth Survey (cited 3 times)
Authors: Hong Zhang, Haijian Shao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 2061-2102 (42 pages).
Abstract: OpenAI and ChatGPT, as state-of-the-art language models driven by cutting-edge artificial intelligence technology, have gained widespread adoption across diverse industries. In the realm of computer vision, these models have been employed for intricate tasks including object recognition, image generation, and image processing, leveraging their advanced capabilities to fuel transformative breakthroughs. Within the gaming industry, they have found utility in crafting virtual characters and generating plots and dialogues, thereby enabling immersive and interactive player experiences. Furthermore, these models have been harnessed in medical diagnosis, providing invaluable insights and support to healthcare professionals in disease detection. The principal objective of this paper is to offer a comprehensive overview of OpenAI, OpenAI Gym, ChatGPT, DALL·E, Stable Diffusion, the pre-trained CLIP model, and other pertinent models in various domains, encompassing CLIP text-to-image, education, medical imaging, computer vision, social influence, natural language processing, software development, coding assistance, and chatbots, among others. Particular emphasis is placed on comparative analysis and examination of popular text-to-image and text-to-video models under diverse stimuli, shedding light on the current research landscape, emerging trends, and existing challenges within the domains of OpenAI and ChatGPT. Through a rigorous literature review, this paper aims to deliver a professional and insightful overview of the advancements, potentials, and limitations of these pioneering language models.
Keywords: OpenAI; ChatGPT; DALL·E; Stable Diffusion; OpenAI Gym; text-to-image; text-to-video