ARTICLE
Future Internet, Vol. 13, Issue 11 (2021)

Deepfake-Image Anti-Forensics with Adversarial Examples Attacks

Li Fan, Wei Li and Xiaohui Cui

Abstract

Many deepfake-image forensic detectors have been proposed and improved in response to the development of image-synthesis techniques. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the impact of adversarial examples on their performance is an important step towards improving deepfake-image detectors. This study presents an anti-forensics case study of two popular general deepfake detectors, based on their accuracy and generalization. Herein, we propose Poisson noise DeepFool (PNDF), an improved iterative adversarial example generation method. This method can simply and effectively attack forensic detectors by adding perturbations to images in different directions. Our attacks can reduce a detector's AUC from 0.9999 to 0.0331, and its detection accuracy on deepfake images from 0.9997 to 0.0731. Compared with state-of-the-art studies, our work provides an important defense direction for future research on deepfake-image detectors, by focusing on the generalization performance of detectors and their resistance to adversarial example attacks.
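The abstract does not give the exact PNDF update rule, so the following Python (PyTorch) sketch is only a rough illustration of the general idea: a DeepFool-style iterative attack on a binary deepfake detector, with a Poisson-noise term added to vary the perturbation direction. The function name pndf_attack, the overshoot and noise_scale parameters, the Poisson rate, and the placement of the noise term are all assumptions, not the authors' method.

import torch

def pndf_attack(model, x, max_iter=50, overshoot=0.02, noise_scale=1e-3):
    # Sketch of a DeepFool-style iterative attack; PNDF specifics are assumed.
    orig_label = model(x).argmax(dim=1).item()
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(max_iter):
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != orig_label:
            break  # the detector's decision has flipped
        # Margin between the original class and the other class (binary detector).
        f = logits[0, orig_label] - logits[0, 1 - orig_label]
        grad = torch.autograd.grad(f, x_adv)[0]
        # Minimal DeepFool step toward the decision boundary.
        r = -(f.detach() / (grad.norm() ** 2 + 1e-12)) * grad
        # Zero-mean Poisson noise perturbs the step direction
        # (rate 1.0 and noise_scale are hypothetical choices).
        noise = torch.poisson(torch.ones_like(x_adv)) - 1.0
        x_adv = x_adv + (1 + overshoot) * r + noise_scale * noise
        x_adv = x_adv.clamp(0, 1).detach().requires_grad_(True)
    return x_adv.detach()

In the paper's reported results, an attack of this kind drove the detector's AUC from 0.9999 down to 0.0331, i.e., the perturbed deepfake images were almost always classified as real.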
