Nowadays, the proliferation of portraits and photographs containing human faces on the internet has created significant risks of illegal privacy collection and analysis by intelligent systems. Previous attempts to protect against unauthorized identification by face recognition models have primarily involved manipulating photos or adding adversarial perturbations to them. However, balancing privacy-protection effectiveness against image visual quality remains challenging: to successfully attack real-world black-box face recognition models, the source image must be manipulated substantially, which visibly degrades its quality. To address these issues, we propose an attribute-guided face identity protection (AG-FIP) approach that protects facial privacy effectively without introducing meaningless or conspicuous artifacts into the source image. The proposed method maps images to a latent space and then carries out an adversarial attack through attribute editing. An attribute selection module followed by an attribute adversarial editing module is proposed to enhance the efficiency and effectiveness of the adversarial attacks. Experimental results demonstrate that our approach outperforms state-of-the-art methods in confusing black-box face recognition models and commercial face recognition APIs, as well as in image visual quality.
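As a rough illustration of the pipeline described in this abstract, the sketch below shows how attribute strengths in a latent space could be optimized adversarially against a surrogate face recognition model. The `encoder`, `generator`, `fr_model`, and `attr_dirs` (per-attribute latent directions), as well as the step count and loss weights, are hypothetical placeholders and do not reproduce the authors' actual AG-FIP implementation.

```python
# Minimal sketch of latent-space adversarial attribute editing (hypothetical).
# All components below are placeholder assumptions, not the AG-FIP code.
import torch
import torch.nn.functional as F

def attribute_adversarial_edit(image, encoder, generator, fr_model,
                               attr_dirs, steps=50, lr=0.05, budget=3.0):
    """Push the identity embedding away from the source image by editing
    attribute strengths in latent space against a surrogate FR model."""
    with torch.no_grad():
        w = encoder(image)                              # map image to a latent code
        src_id = F.normalize(fr_model(image), dim=-1)   # source identity embedding

    # one learnable strength per selected attribute direction (shape: K)
    alphas = torch.zeros(attr_dirs.shape[0], requires_grad=True)
    opt = torch.optim.Adam([alphas], lr=lr)

    for _ in range(steps):
        offset = (alphas.unsqueeze(1) * attr_dirs).sum(dim=0)   # combined edit
        x_edit = generator(w + offset)                           # re-synthesize face
        edit_id = F.normalize(fr_model(x_edit), dim=-1)

        # minimize identity similarity; penalize large edits to preserve quality
        id_sim = (src_id * edit_id).sum()
        loss = id_sim + 0.1 * alphas.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            alphas.clamp_(-budget, budget)               # bound attribute strength

    with torch.no_grad():
        return generator(w + (alphas.unsqueeze(1) * attr_dirs).sum(dim=0))
```

In this toy setup, the attribute selection step would correspond to choosing which rows of `attr_dirs` are allowed nonzero strengths before optimization begins.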
The technique of facial attribute manipulation has found increasing application, but it remains challenging to restrict the editing of attributes so that a face's unique details are preserved. In this paper, we introduce our method, which we call a mask-adversarial autoencoder (M-AAE). It combines a variational autoencoder (VAE) and a generative adversarial network (GAN) for photorealistic image generation. We use partial dilated layers to modify a few pixels in the feature maps of an encoder, changing the attribute strength continuously without disturbing global information. Our training objectives for the VAE and GAN are reinforced by supervision from a face recognition loss and a cycle consistency loss, to faithfully preserve facial details. Moreover, we generate facial masks to enforce background consistency, which allows training to focus on the foreground face rather than the background. Experimental results demonstrate that our method can generate high-quality images with varying attributes, and it outperforms existing methods in detail preservation.
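For illustration only, the sketch below shows one way a training objective like the one described above could combine its terms: VAE reconstruction and KL divergence, an adversarial term, a face recognition (identity) loss, cycle consistency, and mask-based background consistency. All networks and loss weights are hypothetical assumptions rather than the paper's actual M-AAE design, and attributes are treated as simple conditioning vectors for brevity.

```python
# Hedged sketch of a combined M-AAE-style training loss (assumed components).
# enc, dec, disc, fr_net, and face_mask are placeholder callables.
import torch
import torch.nn.functional as F

def maae_losses(x, attr_src, attr_tgt, enc, dec, disc, fr_net, face_mask,
                w=(1.0, 0.01, 0.1, 1.0, 1.0, 1.0)):
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization

    x_rec = dec(z, attr_src)    # reconstruct with the source attribute
    x_edit = dec(z, attr_tgt)   # edit toward the target attribute

    rec = F.l1_loss(x_rec, x)                                       # VAE reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # VAE KL term

    logits = disc(x_edit)
    adv = F.binary_cross_entropy_with_logits(                       # fool the discriminator
        logits, torch.ones_like(logits))

    id_loss = 1 - F.cosine_similarity(fr_net(x), fr_net(x_edit)).mean()  # identity preserved

    mu2, _ = enc(x_edit)
    cyc = F.l1_loss(dec(mu2, attr_src), x)                          # cycle back to source

    m = face_mask(x)                                                # 1 on face, 0 elsewhere
    bg = F.l1_loss(x_edit * (1 - m), x * (1 - m))                   # background unchanged

    return (w[0] * rec + w[1] * kl + w[2] * adv +
            w[3] * id_loss + w[4] * cyc + w[5] * bg)
```

The masked background term is what lets training concentrate capacity on the foreground face, as the abstract describes; the specific weighting between terms here is an illustrative guess.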
Funding: partially funded by the National Natural Science Foundation of China (No. 61972157), the National Social Science Foundation of China (No. 18ZD22), the Science and Technology Commission of Shanghai Municipality Program (No. 18D1205903), the Science and Technology Commission of Pudong Municipality Program (No. PKJ2018-Y46), and the Multidisciplinary Project of Shanghai Jiao Tong University (No. ZH2018ZDA25); partially supported by a joint project of SenseTime and Shanghai Jiao Tong University.