Journal Articles
6 articles found
1. A Study on the Image Style of the Ancient Print “Presenting the Beauty of the Country with the Elegance of the Dynasty”
Author: ZHANG Tong-yuan. Journal of Literature and Art Studies, 2025(1): 21-25 (5 pages)
The Pingyang ancient woodblock print “Presenting the Beauty of the Country with the Elegance of the Dynasty” is an indispensable work in the study of the history of Chinese printmaking. This article focuses on the characteristics of the work's image style. It first describes the work's historical background in light of the process of its excavation and discovery; it then analyzes the image information presented in the work, such as its structure and content; finally, it explains the distinctive Pingshui woodblock production technique, the distinctive content of the work, and its artistic composition, and points out the importance of studying the work.
Keywords: “Presenting the Beauty of the Country with the Elegance of the Dynasty”; image style; research
2. PhotoGAN: A Novel Style Transfer Model for Digital Photographs
Authors: Qiming Li, Mengcheng Wu, Daozheng Chen. Computers, Materials & Continua, 2025(6): 4477-4494 (18 pages)
Image style transfer is a research hotspot in computer vision, and many approaches have been proposed for the task. These techniques, however, still have drawbacks such as high computational complexity and content distortion caused by inadequate stylization. To address these problems, this paper proposes PhotoGAN, a new Generative Adversarial Network (GAN) model. A deeper feature extraction network is designed to better capture global information and local details. Multi-scale attention modules help the generator focus on important feature areas at different scales, further enhancing feature extraction. A semantic discriminator helps the generator learn quickly and better understand image content, improving the consistency and visual quality of the generated images. Finally, qualitative and quantitative experiments were conducted on a self-built dataset. The results indicate that PhotoGAN outperforms current state-of-the-art techniques: it not only performs excellently on objective metrics but also appears more visually appealing, particularly in handling complex scenes and details.
Keywords: PhotoGAN; image style transfer; GAN; Fuji C200 style; Monet style
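The multi-scale attention idea in the abstract can be illustrated with a toy sketch. Note this is an illustrative assumption, not PhotoGAN's actual architecture: a softmax attention map is computed over the feature map at the fine scale and again at a 2x-pooled coarse scale, and the two maps are averaged so that both local detail and broader context influence the weighting.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a flat list."""
    m = max(xs)
    e = [math.exp(v - m) for v in xs]
    s = sum(e)
    return [v / s for v in e]

def pool2x2(feat):
    """Average-pool a 2-D feature map by a factor of 2 (coarse scale)."""
    return [[(feat[i][j] + feat[i][j + 1] + feat[i + 1][j] + feat[i + 1][j + 1]) / 4
             for j in range(0, len(feat[0]), 2)]
            for i in range(0, len(feat), 2)]

def attention_map(feat):
    """Softmax over all spatial positions: the weights sum to 1."""
    w = softmax([v for row in feat for v in row])
    n = len(feat[0])
    return [[w[i * n + j] for j in range(n)] for i in range(len(feat))]

def multi_scale_attention(feat):
    """Average a fine-scale attention map with an upsampled coarse-scale one."""
    fine = attention_map(feat)
    coarse = attention_map(pool2x2(feat))
    h, w = len(feat), len(feat[0])
    # Nearest-neighbour upsample; divide by 4 so the coarse map still sums to 1.
    up = [[coarse[i // 2][j // 2] / 4 for j in range(w)] for i in range(h)]
    return [[(fine[i][j] + up[i][j]) / 2 for j in range(w)] for i in range(h)]

feat = [[0.0, 0.0, 0.0, 0.0],
        [0.0, 5.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0]]
att = multi_scale_attention(feat)
# The strongest activation (5.0) receives the largest combined weight.
```

The combined map remains a valid distribution (it sums to 1), which is the usual sanity check for a spatial attention module.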
3. Implementation of Art Pictures Style Conversion with GAN
Authors: Xinlong Wu, Desheng Zheng, Kexin Zhang, Yanling Lai, Zhifeng Liu, Zhihong Zhang. Journal of Quantum Computing, 2021(4): 127-136 (10 pages)
Image conversion refers to converting an image from one style to another while ensuring that the content of the image remains unchanged. Using Generative Adversarial Networks (GANs) for image conversion can achieve good results; however, given enough samples, any image in the target domain can be mapped to the same set of inputs. On this basis, the Cycle-Consistent Generative Adversarial Network (CycleGAN) was developed. This article verifies and discusses the advantages and disadvantages of the CycleGAN model in image style conversion. CycleGAN uses two generator networks and two discriminator networks, with the aim of learning the mapping and inverse mapping between the source domain and the target domain; this constrains the mapping and improves the quality of the generated image. Through the idea of a cycle, the loss of information in style conversion is reduced. When evaluating the experimental results, the degree to which the input image content is retained is judged. The experiments show that CycleGAN can capture an artist's overall style and successfully convert real landscape images. Its advantage is that most of the content of the original picture is retained, while only the texture and lines of the picture are changed to a level similar to the artist's style.
Keywords: generative adversarial network; deep learning; image style conversion; convolutional neural network; adversarial learning
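The cycle idea described above is enforced by a cycle-consistency loss: mapping an image to the other domain and back should reconstruct it. A minimal sketch, with toy brightness-shift "generators" standing in for the two networks (the images, shift value, and weight 10.0 here are illustrative; 10 is the weight commonly used for this loss in CycleGAN training):

```python
def l1(a, b):
    """Mean absolute difference between two flattened images."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_loss(G, F, x, y, lam=10.0):
    """lam * (||F(G(x)) - x||_1 + ||G(F(y)) - y||_1)."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

# Toy "generators": a brightness shift and its inverse stand in for the
# source-to-target and target-to-source networks.
G = lambda img: [p + 0.3 for p in img]   # source domain -> target domain
F = lambda img: [p - 0.3 for p in img]   # target domain -> source domain

x = [0.1, 0.5, 0.9]   # a "photo"
y = [0.4, 0.4, 0.8]   # a "painting"
loss = cycle_loss(G, F, x, y)   # near zero: F inverts G, so both cycles reconstruct
```

If F fails to invert G, both reconstruction terms grow, which is exactly how the loss rules out the degenerate many-to-one mappings mentioned in the abstract.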
4. Read The film "Three idiots" and reflect the current situation
Authors: Li Mengjiao, Lin Yan. International Journal of Technology Management, 2014(10): 48-50 (3 pages)
The Three Idiots has strong appeal, with an excellent script, creative acting, sophisticated production, a fine plot, and an endless stream of jokes. It uses a variety of narrative techniques and gives the audience a clear view of its multiple timelines, while its dance numbers add a beautiful touch to the film. We can learn from its artistic and box-office achievements, and the film reflects the present situation of Indian society.
Keywords: image style; character analysis; dance; teaser; comedy elements; realistic
5. Audio-guided implicit neural representation for local image stylization
Authors: Seung Hyun Lee, Sieun Kim, Wonmin Byeon, Gyeongrok Oh, Sumin In, Hyeongcheol Park, Sang Ho Yoon, Sung-Hee Hong, Jinkyu Kim, Sangpil Kim. Computational Visual Media (CSCD), 2024(6): 1185-1204 (20 pages)
We present a novel framework for audio-guided localized image stylization. Sound often provides information about the specific context of a scene and is closely related to a certain part of the scene or an object. However, existing image stylization works have focused on stylizing the entire image using an image or text input; stylizing a particular part of the image based on audio input is natural but challenging. This work proposes a framework in which a user provides one audio input to localize the target in the input image and another to locally stylize the target object or scene. We first produce a fine localization map using an audio-visual localization network that leverages the CLIP embedding space. We then use an implicit neural representation (INR) together with the predicted localization map to stylize the target based on the sound information; the INR manipulates local pixel values to be semantically consistent with the provided audio input. Our experiments show that the proposed framework outperforms other audio-guided stylization methods. Moreover, we observe that our method constructs concise localization maps and naturally manipulates the target object or scene in accordance with the given audio input.
Keywords: audio guidance; image style transfer; implicit neural representations (INR)
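The "localized" part of the pipeline above reduces to masked blending: pixels move toward the stylized values only where the localization map is high. A minimal 1-D sketch (the mask values below are made up, and the actual method drives an INR with the map rather than blending two precomputed images):

```python
def local_stylize(content, stylized, mask):
    """Blend stylized pixels into the content image where the localization
    mask is high; elsewhere the content is left untouched."""
    return [m * s + (1 - m) * c for c, s, m in zip(content, stylized, mask)]

content  = [0.2, 0.2, 0.8, 0.8]   # original pixel values
stylized = [0.9, 0.9, 0.1, 0.1]   # fully stylized pixel values
mask     = [1.0, 0.5, 0.0, 0.0]   # e.g. from an audio-visual localizer

out = local_stylize(content, stylized, mask)
# Pixel 0 is fully stylized, pixel 1 half-blended, pixels 2-3 untouched.
```

The same convex-combination form generalizes per channel and per pixel for 2-D images.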
6. Reference-guided structure-aware deep sketch colorization for cartoons (cited 2 times)
Authors: Xueting Liu, Wenliang Wu, Chengze Li, Yifan Li, Huisi Wu. Computational Visual Media (SCIE, EI, CSCD), 2022(1): 135-148 (14 pages)
Digital cartoon production requires extensive manual labor to colorize sketches with visually pleasing color composition and color shading. During colorization, the artist usually takes an existing cartoon image as color guidance, particularly when colorizing related characters or an animation sequence. Reference-guided colorization is more intuitive than colorization with other hints, such as color points, scribbles, or text-based hints. Unfortunately, reference-guided colorization is challenging, since the style of the colorized image should match the style of the reference image in terms of both global color composition and local color shading. In this paper, we propose a novel learning-based framework that colorizes a sketch based on a color style feature extracted from a reference color image. Our framework contains a color style extractor to extract the color feature from a color image, a colorization network to generate multi-scale output images by combining a sketch and a color feature, and a multi-scale discriminator to improve the realism of the output image. Extensive qualitative and quantitative evaluations show that our method outperforms existing methods, providing both superior visual quality and style-reference consistency in the task of reference-based colorization.
Keywords: sketch colorization; image style editing; deep feature understanding; reference-based image colorization
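The paper learns its color style feature with a network, but the simplest classical stand-in for a "global color style" is the per-channel mean and standard deviation of the reference image, transferred Reinhard-style. A minimal single-channel sketch (the pixel values are illustrative):

```python
import math

def stats(channel):
    """Mean and (population) standard deviation of one color channel."""
    mu = sum(channel) / len(channel)
    sd = math.sqrt(sum((v - mu) ** 2 for v in channel) / len(channel))
    return mu, sd

def transfer_channel(src, ref):
    """Match src's mean/std to ref's: a minimal global color-style transfer."""
    mu_s, sd_s = stats(src)
    mu_r, sd_r = stats(ref)
    scale = sd_r / sd_s if sd_s > 0 else 0.0
    return [(v - mu_s) * scale + mu_r for v in src]

src = [0.1, 0.5, 0.9]   # one channel of the image being colorized
ref = [0.2, 0.4, 0.6]   # same channel of the reference image
out = transfer_channel(src, ref)
# out now has the reference's mean and spread (its "global color composition").
```

Repeating this per channel (ideally in a decorrelated color space such as Lab) gives a crude version of the global half of the style matching the abstract describes; the local shading half is what the learned network adds.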