Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) such as surveillance and access control, adversarial examples may pose subtle threats to these systems. In this paper,...
Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He
Deep models are widely used and have been shown to carry hidden security risks. An adversarial attack can bypass traditional means of defense: by modifying the input data, it compromises the deep model while remaining imperceptible ...
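As a rough illustration of the kind of imperceptible input modification described in the abstract above, the following is a minimal gradient-sign (FGSM-style) perturbation sketch. It is not the method proposed in the paper; the model, loss, and epsilon budget are assumptions made only for this example.

```python
# Minimal FGSM-style perturbation sketch (illustrative only; not the paper's method).
# Assumes a pretrained classifier `model`, an input image tensor in [0, 1],
# and its true integer label.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` using the fast gradient sign method."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon per pixel.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```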