The development of generative architectures has resulted in numerous novel deep-learning models that generate images from text inputs. However, humans naturally use speech for visualization prompts. Therefore, this paper proposes an architecture that accepts speech prompts as input to an image-generation Generative Adversarial Network (GAN) model, leveraging speech-to-text translation together with the CLIP+VQGAN model. The proposed method translates speech prompts into text, which the Contrastive Language-Image Pretraining (CLIP) + Vector Quantized Generative Adversarial Network (VQGAN) model then uses to generate images. This paper outlines the steps required to implement such a model and describes in detail the methods used to evaluate it. The GAN model successfully generates artwork from descriptions given as speech or text prompts. Experimental results on the synthesized images demonstrate that the proposed methodology can produce beautiful abstract visuals containing elements from the input prompts. The model achieved a Fréchet Inception Distance (FID) score of 28.75, showcasing its capability to produce high-quality and diverse images. The proposed model can find numerous applications in educational, artistic, and design spaces owing to its ability to generate images from speech and the distinct abstract artistry of its output images. This capability is demonstrated by giving the model out-of-the-box prompts to generate never-before-seen images with plausible realistic qualities.
Funding: This work was funded by the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, and supported by the Researchers Supporting Project, King Saud University, Riyadh, Saudi Arabia, under Ongoing Research Funding (ORF-2025-14).
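As an illustration of the pipeline described in the abstract, the following is a minimal sketch of the speech-to-text plus CLIP-guided VQGAN loop. It assumes openai-whisper for transcription and the openai/CLIP package; `load_vqgan` is a hypothetical helper standing in for a taming-transformers checkpoint loader, and codebook quantization and CLIP input normalization are omitted for brevity. This is a sketch under those assumptions, not the paper's exact implementation.

```python
# Minimal sketch of the speech -> text -> CLIP+VQGAN pipeline (assumptions:
# openai-whisper for speech-to-text, the openai/CLIP package; `load_vqgan`
# is a hypothetical helper for a taming-transformers VQGAN checkpoint).
import torch
import torch.nn.functional as F
import clip
import whisper

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Speech-to-text: transcribe the spoken prompt into a text prompt.
stt = whisper.load_model("base")
prompt = stt.transcribe("prompt.wav")["text"]

# 2. Encode the text prompt with CLIP.
clip_model, _ = clip.load("ViT-B/32", device=device)
with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat = F.normalize(text_feat, dim=-1)

# 3. Optimize a VQGAN latent so the decoded image matches the prompt in
#    CLIP space (codebook quantization and CLIP mean/std normalization
#    are omitted here for brevity).
vqgan = load_vqgan("vqgan_imagenet_f16_16384.ckpt").to(device)  # hypothetical
z = torch.randn(1, 256, 16, 16, device=device, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)

for step in range(300):
    image = vqgan.decode(z)                  # ~(1, 3, 256, 256) in [-1, 1]
    image = (image.clamp(-1, 1) + 1) / 2     # rescale to [0, 1]
    image = F.interpolate(image, size=224, mode="bilinear", align_corners=False)
    img_feat = F.normalize(clip_model.encode_image(image), dim=-1)
    loss = 1 - (img_feat * text_feat).sum()  # cosine distance to the prompt
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this formulation the generator weights stay frozen; only the latent is updated, which is what lets a pretrained VQGAN follow arbitrary spoken prompts without retraining.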
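The abstract reports evaluation by Fréchet Inception Distance. Below is a hedged sketch of how such a score can be computed with torchmetrics; the paper's exact evaluation protocol is not given here, so the image sources are placeholders.

```python
# Hedged sketch of FID evaluation with torchmetrics (requires
# `torchmetrics[image]`); not necessarily the paper's exact protocol.
# FID compares Inception-v3 feature statistics of real and generated
# image sets; lower is better (the paper reports 28.75).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # 2048-d Inception pool3 features

# Placeholder tensors; in practice these come from the reference dataset and
# the generator. Default settings expect uint8 images of shape (N, 3, H, W).
real_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(f"FID: {fid.compute().item():.2f}")
```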