This study presents Expo-GAN (Exposition Generative Adversarial Network), a method for style transfer in exhibition hall design based on a refined version of the Cycle Generative Adversarial Network (CycleGAN). The primary goal is to improve the transformation of image styles while maintaining visual consistency, an area where current CycleGAN models often fall short: they typically struggle to capture both the expansive features and the intricate stylistic details necessary for high-quality image transformation. To address these limitations, the research introduces several key modifications to the CycleGAN architecture. The generator is enhanced by integrating a U-Net backbone with SpecTransformer modules, which couple Fourier transform techniques with multi-head self-attention so that both large-scale structural patterns and fine details are rendered faithfully in the generated images. This allows the generator to achieve a more detailed and coherent fusion of styles, which is essential for exhibition hall designs where both broad aesthetic strokes and detailed nuances matter. The discriminator is redesigned with dilated convolutions and global attention mechanisms, derived through the Differentiable Architecture Search (DARTS) framework, to expand its receptive field, which is crucial for evaluating artistic style across the whole image. By broadening the discriminator's ability to discern complex artistic features, the model avoids the style inconsistency and loss of fine detail that affected earlier approaches. In addition, the traditional cycle-consistency loss is replaced with the Learned Perceptual Image Patch Similarity (LPIPS) metric, which improves the perceptual quality of the resulting images by prioritizing human-perceived similarity and thus aligns better with user expectations and professional standards in design aesthetics. Experiments demonstrate that the proposed approach consistently outperforms the conventional CycleGAN across a broad range of datasets, and complementary ablation studies and qualitative assessments underscore its superiority, particularly in maintaining detail fidelity and style continuity. This is critical for creating a visually harmonious exhibition hall design in which every detail contributes to the overall aesthetic appeal. The results indicate that the refined approach effectively bridges the gap between technical capability and artistic necessity, marking a significant advancement in computational design methodologies.
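To make the generator enhancement more concrete, the sketch below shows one plausible way a SpecTransformer-style block could pair a Fourier-domain branch with multi-head self-attention inside a U-Net stage. The class name, layer sizes, and fusion scheme are illustrative assumptions based on the abstract's description, not the paper's actual implementation.

```python
# Hypothetical sketch of a SpecTransformer-style block: a spectral (FFT) branch for
# global structure plus multi-head self-attention for fine detail. Names and sizes
# are assumptions; they do not reproduce the paper's code.
import torch
import torch.nn as nn


class SpecTransformerBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Spectral branch: pointwise mixing of real/imaginary frequency components.
        self.freq_mix = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)
        # Spatial branch: multi-head self-attention over flattened feature tokens.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Fourier branch: capture large-scale structure in the frequency domain.
        freq = torch.fft.rfft2(x, norm="ortho")              # complex, (b, c, h, w//2+1)
        freq = torch.cat([freq.real, freq.imag], dim=1)      # (b, 2c, h, w//2+1)
        freq = self.freq_mix(freq)
        real, imag = freq.chunk(2, dim=1)
        spec = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        # Attention branch: model long-range dependencies between fine details.
        tokens = self.norm(x.flatten(2).transpose(1, 2))     # (b, h*w, c)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_out = attn_out.transpose(1, 2).reshape(b, c, h, w)
        # Fuse both branches and keep a residual path back to the input.
        return x + self.fuse(torch.cat([spec, attn_out], dim=1))
```

In a U-Net generator, a block of this kind would most naturally sit at the bottleneck or alongside the deeper skip connections, where its frequency-domain mixing complements the purely local convolutional features.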
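The discriminator changes can be sketched in the same spirit. The abstract names dilated convolution and global attention as the operations selected through DARTS; the hand-written block below only illustrates those two operation types and their roles (a wider receptive field, image-wide attention), not the cell that the search actually produces.

```python
# Illustrative discriminator block combining a dilated convolution (enlarged receptive
# field) with global self-attention over the whole feature map. In the paper these
# operations are chosen via DARTS; here they are fixed by hand for clarity.
import torch
import torch.nn as nn


class GlobalAttentionDiscBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2, num_heads: int = 4):
        super().__init__()
        self.dilated = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
            nn.InstanceNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.attn = nn.MultiheadAttention(out_ch, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.dilated(x)                      # dilated conv widens the receptive field
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)    # (b, h*w, c): every position attends globally
        attn_out, _ = self.attn(tokens, tokens, tokens)
        return feat + attn_out.transpose(1, 2).reshape(b, c, h, w)
```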
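Finally, replacing the standard L1 cycle-consistency term with LPIPS can be expressed in a few lines using the public lpips package (richzhang/PerceptualSimilarity). The backbone choice ('vgg') and the weight of 10.0 are assumptions carried over from common CycleGAN practice; the abstract does not state the exact configuration.

```python
# Minimal sketch of an LPIPS-based cycle-consistency loss, assuming images in [-1, 1]
# as in standard CycleGAN pipelines. Backbone and weight are illustrative choices.
import lpips
import torch

lpips_fn = lpips.LPIPS(net='vgg')  # learned perceptual distance network


def cycle_consistency_loss(real: torch.Tensor, reconstructed: torch.Tensor,
                           weight: float = 10.0) -> torch.Tensor:
    # Perceptual distance between the original image and its round-trip reconstruction.
    return weight * lpips_fn(reconstructed, real).mean()


# Usage inside a CycleGAN-style training step, with generators G: A->B and F: B->A:
#   loss_cyc = cycle_consistency_loss(real_A, F(G(real_A))) \
#            + cycle_consistency_loss(real_B, G(F(real_B)))
```

Because LPIPS compares deep features rather than raw pixels, this term penalizes reconstructions that look wrong to a human observer even when their pixel-wise error is small, which is consistent with the perceptual-quality motivation given in the abstract.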