Funding: This research is supported by "Research on Legal Issues Caused by Sora from the Perspective of Copyright Law" (YK20240094), a Xihua University Science and Technology Innovation Competition Project for Postgraduate Students (cultivation project).
Abstract: Text-to-video artificial intelligence (AI) is a new product of the continuous development of digital technology over recent years. The emergence of various text-to-video AI models, including Sora, is driving the proliferation of content generated as concrete imagery. However, the content generated by text-to-video AI raises significant issues, such as unclear work identification, ambiguous copyright ownership, and widespread copyright infringement. These issues can hinder the development of text-to-video AI in the creative fields and impede the prosperity of China's social and cultural arts. Therefore, this paper proposes three recommendations within a legal framework: (a) categorizing the content generated by text-to-video AI as audiovisual works; (b) clarifying the copyright ownership model for text-to-video AI works; and (c) reasonably delineating the responsibilities of the parties involved in text-to-video AI works. The aim is to mitigate the copyright risks associated with content generated by text-to-video AI and to promote its healthy development in the creative fields.
Funding: Supported in part by the National Natural Science Foundation of China [Grant number 62471075], the Major Science and Technology Project Grant of the Chongqing Municipal Education Commission [Grant number KJZD-M202301901], and Graduate Innovation Project Funding of Chongqing University of Technology [Grant number gzlcx20253249].
Abstract: In recent years, diffusion models have achieved remarkable progress in image generation. However, extending them to text-to-video (T2V) generation remains challenging, particularly in maintaining semantic consistency and visual quality across frames. Existing approaches often overlook the synergy between high-level semantics and low-level texture information, resulting in blurry or temporally inconsistent outputs. To address these issues, we propose Dual Consistency Training (DCT), a novel framework designed to jointly optimize semantic and texture consistency in video generation. Specifically, we introduce a multi-scale spatial adapter to enhance spatial feature extraction and leverage the complementary strengths of CLIP and VGG, where CLIP captures high-level semantics and VGG captures fine-grained texture and detail. During training, a stepwise strategy imposes semantic and texture losses that constrain discrepancies between generated and ground-truth frames. Furthermore, we propose CLWS, which dynamically adjusts the balance between the semantic and texture losses to enable more stable and effective optimization. Remarkably, DCT achieves high-quality video generation using only a single training video on a single NVIDIA A6000 GPU. Extensive experiments demonstrate that our method significantly improves temporal coherence and visual fidelity across various video generation tasks, verifying its effectiveness and generalizability.
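To make the training signal concrete, the sketch below shows one plausible way a CLIP-based semantic loss and a VGG-based texture loss could be combined per frame. The helper names (clip_encoder, vgg_features) and the linear weighting schedule standing in for CLWS are assumptions for exposition only; the abstract does not specify the paper's actual implementation.

```python
# Illustrative sketch of a dual semantic/texture consistency loss,
# assuming hypothetical encoders clip_encoder and vgg_features.
import torch
import torch.nn.functional as F

def dual_consistency_loss(gen_frames, gt_frames, clip_encoder, vgg_features,
                          step, total_steps):
    """Combine a CLIP-based semantic loss with a VGG-based texture loss.

    gen_frames, gt_frames: (B, 3, H, W) generated / ground-truth frames.
    clip_encoder: maps images to L2-normalized semantic embeddings (B, D).
    vgg_features: returns a list of intermediate VGG feature maps.
    """
    # Semantic consistency: cosine distance between CLIP image embeddings.
    sem = 1.0 - F.cosine_similarity(clip_encoder(gen_frames),
                                    clip_encoder(gt_frames), dim=-1).mean()

    # Texture consistency: L1 distance between matched VGG feature maps.
    tex = sum(F.l1_loss(fg, ft)
              for fg, ft in zip(vgg_features(gen_frames),
                                vgg_features(gt_frames)))

    # Hypothetical stand-in for CLWS: shift weight from texture to semantics
    # as training progresses (a simple linear schedule, purely illustrative).
    w = step / total_steps
    return w * sem + (1.0 - w) * tex
```

A dynamic weighting of this kind is one natural reading of "stepwise" training: early steps emphasize low-level texture fidelity, while later steps tighten high-level semantic agreement.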
Funding: Supported by the National Natural Science Foundation of China (No. 62001197).
Abstract: OpenAI and ChatGPT, as state-of-the-art language models driven by cutting-edge artificial intelligence technology, have gained widespread adoption across diverse industries. In the realm of computer vision, these models have been employed for intricate tasks including object recognition, image generation, and image processing, leveraging their advanced capabilities to fuel transformative breakthroughs. Within the gaming industry, they have found utility in crafting virtual characters and generating plots and dialogues, thereby enabling immersive and interactive player experiences. Furthermore, these models have been harnessed in medical diagnosis, providing invaluable insights and support to healthcare professionals in disease detection. The principal objective of this paper is to offer a comprehensive overview of OpenAI, OpenAI Gym, ChatGPT, DALL·E, Stable Diffusion, the pre-trained CLIP model, and other pertinent models across various domains, encompassing CLIP text-to-image generation, education, medical imaging, computer vision, social influence, natural language processing, software development, coding assistance, and chatbots, among others. Particular emphasis is placed on comparative analysis and examination of popular text-to-image and text-to-video models under diverse stimuli, shedding light on the current research landscape, emerging trends, and existing challenges within the domains of OpenAI and ChatGPT. Through a rigorous literature review, this paper aims to deliver a professional and insightful overview of the advancements, potential, and limitations of these pioneering language models.
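One common way to compare text-to-image models "under diverse stimuli", as the survey describes, is to score how well each generated image matches its prompt with a pre-trained CLIP model. The minimal sketch below uses the Hugging Face transformers CLIP API; the checkpoint name and prompts are illustrative choices, not the survey's actual evaluation protocol.

```python
# A minimal sketch of CLIP-based text-image alignment scoring.
# The checkpoint and usage pattern are assumptions for illustration.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image_path: str, prompt: str) -> float:
    """Return the cosine similarity between a prompt and a generated image."""
    inputs = processor(text=[prompt], images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Normalize both embeddings, then take their dot product.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img @ txt.T).item())

# Hypothetical usage: rank outputs from different generators on one prompt.
# scores = {m: clip_score(f"{m}.png", "a red bicycle in the rain")
#           for m in ["model_a", "model_b"]}
```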