Funding: supported in part by the National Natural Science Foundation of China (NSFC) (62076093, 61871182, 61302163, 61401154), the Beijing Natural Science Foundation (4192055), the Natural Science Foundation of Hebei Province of China (F2015502062, F2016502101, F2017502016), the Fundamental Research Funds for the Central Universities (2020YJ006, 2020MS099), and the Open Project Program of the National Laboratory of Pattern Recognition (NLPR) (201900051). The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU used for this research.
Abstract: Facial attribute editing has two main objectives: 1) translating an image from a source domain to a target one, and 2) changing only the facial regions related to a target attribute while preserving attribute-excluding details. In this work, we propose a multi-attention U-Net-based generative adversarial network (MU-GAN). First, we replace the classic convolutional encoder-decoder in the generator with a symmetric U-Net-like structure, and apply an additive attention mechanism to build attention-based U-Net connections that adaptively transfer encoder representations, complementing the decoder with attribute-excluding detail and enhancing attribute editing ability. Second, a self-attention (SA) mechanism is incorporated into the convolutional layers to model long-range and multi-level dependencies across image regions. Experimental results indicate that our method balances attribute editing ability and detail preservation, and can decouple correlations among attributes. It outperforms state-of-the-art methods in terms of attribute manipulation accuracy and image quality. Our code is available at https://github.com/SuSir1996/MU-GAN.
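The additive attention gating on U-Net skip connections described above can be sketched in a simplified 1-D form. The scalar weights `w_x`, `w_g`, and `psi` below are hypothetical stand-ins for the learned 1x1-convolution weights in an attention U-Net; this is a minimal illustration of the gating idea, not the authors' implementation:

```python
import math

def additive_attention_gate(x, g, w_x=1.0, w_g=1.0, psi=1.0):
    """Gate encoder skip features x with a decoder signal g (1-D toy version).

    alpha = sigmoid(psi * relu(w_x * x + w_g * g)); output = alpha * x.
    Real attention U-Nets learn w_x, w_g, and psi as convolutional layers.
    """
    scores = [max(0.0, w_x * xi + w_g * gi) for xi, gi in zip(x, g)]  # ReLU
    alphas = [1.0 / (1.0 + math.exp(-psi * s)) for s in scores]       # sigmoid, in (0, 1)
    return [a * xi for a, xi in zip(alphas, x)]                       # attenuated skip features

# Each encoder feature is scaled by a coefficient in (0, 1), so detail the
# decoder gates for passes through while irrelevant responses are suppressed.
gated = additive_attention_gate([1.0, 2.0], [0.5, -1.0])
```

The point of the gate is that skip connections no longer copy encoder features verbatim: the decoder's signal decides, per position, how much attribute-excluding detail to let through.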
Abstract: The proliferation of portraits and photographs containing human faces on the internet has created significant risks of illegal privacy collection and analysis by intelligent systems. Previous attempts to protect against unauthorized identification by face recognition models have primarily involved manipulating photos or adding adversarial perturbations to them. However, balancing privacy protection effectiveness with image visual quality remains a challenge: successfully attacking real-world black-box face recognition models requires significant manipulation of the source image, which visibly damages its quality. To address these issues, we propose an attribute-guided face identity protection (AG-FIP) approach that protects facial privacy effectively without introducing meaningless or conspicuous artifacts into the source image. The method maps images to a latent space and then implements an adversarial attack through attribute editing. An attribute selection module followed by an adversarial attribute editing module is proposed to improve the efficiency and effectiveness of the attack. Experimental results demonstrate that our approach outperforms state-of-the-art methods in confusing black-box face recognition models and commercial face recognition APIs, and in image visual quality.
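The attribute selection step can be sketched as a greedy search over candidate edits. The `edit_in_latent` and `identity_distance` callables below are hypothetical placeholders for the paper's latent-space editor and face-embedding distance; this is a minimal sketch of the selection logic under those assumptions, not the actual AG-FIP modules:

```python
import math

def select_attribute(z, candidates, edit_in_latent, identity_distance):
    """Pick the attribute whose latent-space edit moves the face embedding
    furthest from the original identity (greedy, one attribute at a time)."""
    best_attr, best_gain = None, float("-inf")
    for attr in candidates:
        z_edited = edit_in_latent(z, attr)
        gain = identity_distance(z, z_edited)
        if gain > best_gain:
            best_attr, best_gain = attr, gain
    return best_attr, best_gain

# Toy stand-ins: each attribute shifts the latent code along a fixed direction,
# and "identity distance" is plain Euclidean distance between codes.
directions = {"smile": [0.1, 0.0], "glasses": [1.0, 0.5]}
edit = lambda z, a: [zi + di for zi, di in zip(z, directions[a])]

attr, gain = select_attribute([0.0, 0.0], directions, edit, math.dist)
```

Selecting the most identity-disruptive attribute first is why large semantic edits (e.g. adding glasses) can confuse a recognizer without the noisy, conspicuous artifacts that pixel-space perturbations introduce.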