Denoised Internal Models:A Brain-inspired Autoencoder Against Adversarial Attacks
Authors: Kai-Yuan Liu, Xing-Yu Li, Yu-Rui Lai, Hang Su, Jia-Chen Wang, Chun-Xu Guo, Hong Xie, Ji-Song Guan, Yi Zhou. Machine Intelligence Research (EI, CSCD), 2022, Issue 5, pp. 456-471 (16 pages).
Despite its great success, deep learning severely suffers from a lack of robustness; i.e., deep neural networks are very vulnerable to adversarial attacks, even the simplest ones. Inspired by recent advances in brain science, we propose denoised internal models (DIM), a novel generative autoencoder-based model to tackle this challenge. Simulating the pipeline in the human brain for visual signal processing, DIM adopts a two-stage approach. In the first stage, DIM uses a denoiser to reduce the noise and the dimensions of inputs, reflecting the information pre-processing in the thalamus. Inspired by the sparse coding of memory-related traces in the primary visual cortex, the second stage produces a set of internal models, one for each category. We evaluate DIM over 42 adversarial attacks, showing that DIM effectively defends against all the attacks and outperforms the state of the art in overall robustness on the MNIST (Modified National Institute of Standards and Technology) dataset.
Keywords: brain-inspired learning, autoencoder, robustness, adversarial attack, generative model
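The two-stage pipeline described in the abstract can be sketched in code. The abstract does not give the actual network architecture, so everything below is a hypothetical stand-in: the denoiser is a simple thresholding step, and each per-category "internal model" is a linear autoencoder fit by SVD, with classification by minimum reconstruction error.

```python
import numpy as np

def denoise(x, threshold=0.1):
    """Stage 1 stand-in: suppress small-amplitude components, loosely
    mimicking the thalamic pre-processing described in the abstract."""
    return np.where(np.abs(x) < threshold, 0.0, x)

class InternalModel:
    """Stage 2 stand-in: one internal model per category, here a linear
    autoencoder whose encoder/decoder come from the top-k principal
    directions of that category's training data."""
    def __init__(self, k=2):
        self.k = k
        self.mean = None
        self.components = None

    def fit(self, X):
        self.mean = X.mean(axis=0)
        # Principal directions of the centered class data
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = vt[: self.k]
        return self

    def reconstruction_error(self, x):
        z = (x - self.mean) @ self.components.T    # encode
        x_hat = self.mean + z @ self.components    # decode
        return float(np.sum((x - x_hat) ** 2))

def classify(x, models):
    """Denoise the input, then assign the label whose internal model
    reconstructs it with the lowest error."""
    x = denoise(x)
    errors = {label: m.reconstruction_error(x) for label, m in models.items()}
    return min(errors, key=errors.get)
```

The design choice worth noting is that an adversarial perturbation must fool every per-category autoencoder at once, rather than a single decision boundary, which is one plausible reading of why such generative per-class models can be more robust.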