Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He
Deep models are widely used yet have been shown to carry hidden security risks. Adversarial attacks can bypass traditional defenses: by modifying the input data, an attack on the deep model is realized while remaining imperceptible ...
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li
Small perturbations can make deep models fail. Since deep models are widely deployed in face recognition systems (FRS) for tasks such as surveillance and access control, adversarial examples may pose subtle threats to these systems. In this paper,...
Zhen Li, Heng Yao, Ran Shi, Tong Qiao and Chuan Qin
In daily life, when photographing scenes containing glass, the image of the dominant transmission layer and that of the weak reflection layer are often blended and difficult to separate. Meanwhile, because the reflection layer contains sufficient ...
Min Chang, Shuai Han, Guo Chen and Xuedian Zhang
Both noise and structure matter in single image super-resolution (SISR). Recent research has benefited from generative adversarial networks (GANs), which have advanced SISR by recovering photo-realistic images. However, noise and structura...