
An Adversarial Attack Method against Specified Objects Based on Instance Segmentation

Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He

Abstract

Deep models are widely deployed, yet they have been shown to carry hidden security risks. An adversarial attack can bypass traditional defenses: by modifying the input data, it compromises the deep model while remaining imperceptible to humans. Existing adversarial example generation methods mainly attack the whole image, so the direction of the optimization iterations is easy to predict and the attack lacks flexibility. For more complex scenarios, this paper proposes an edge-restricted adversarial example generation algorithm (Re-AEG) based on semantic segmentation. The algorithm can attack one or more specified objects in an image so that the detector cannot detect them. First, the algorithm automatically locates the objects to attack according to the application requirements. The attacked object is then separated by the semantic segmentation algorithm, and a mask matrix for that object is generated. The proposed algorithm attacks only the object within this region, converges quickly, and successfully deceives the deep detection model. Because it hides only selected sensitive objects in the image rather than invalidating the detection model entirely and causing reported errors, it offers higher concealment than previous adversarial example generation algorithms. Comparative experiments on the ImageNet and COCO 2017 datasets show an attack success rate above 92%.
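
To make the region-restricted idea in the abstract concrete, the sketch below applies an iterative sign-gradient perturbation only inside a segmentation mask, driving down a detector's confidence for the masked object. This is a minimal PyTorch illustration under stated assumptions, not the paper's Re-AEG implementation: the differentiable object_score callable, the step size alpha, the budget eps, and the number of steps are all hypothetical names introduced here for illustration.

import torch

def masked_adversarial_attack(object_score, image, mask, steps=40, eps=8 / 255, alpha=2 / 255):
    """Perturb only the masked pixels so the detector stops seeing the object.

    object_score : callable returning a differentiable scalar confidence for the
                   target object (hypothetical stand-in for a detector head).
    image        : (1, 3, H, W) float tensor with values in [0, 1].
    mask         : (1, 1, H, W) binary tensor from an instance-segmentation model.
    """
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        score = object_score(x_adv)          # confidence of the object we want to hide
        score.backward()
        with torch.no_grad():
            # descend on the score, touching only pixels inside the mask
            x_adv = x_adv - alpha * x_adv.grad.sign() * mask
            # keep the accumulated perturbation inside an L_inf budget and the valid pixel range
            delta = torch.clamp(x_adv - image, -eps, eps) * mask
            x_adv = torch.clamp(image + delta, 0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv

Running the same loop with several masks (or their union) would hide multiple objects at once, which matches the abstract's claim of attacking one or more specified objects while leaving the rest of the image untouched.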

Similar articles

Viacheslav Moskalenko, Vyacheslav Kharchenko, Alona Moskalenko and Borys Kuzikov    
Artificial intelligence systems are increasingly being used in industrial applications, security and military contexts, disaster response complexes, policing and justice practices, finance, and healthcare systems. However, disruptions to these systems ca... see more
Journal: Algorithms

 
Mehdi Sadi, Bashir Mohammad Sabquat Bahar Talukder, Kaniz Mishty and Md Tauhidur Rahman    
Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead the trained deep convolutional neural networks into the wrong prediction. Since these universal adversarial perturbations can se... see more (a brief sketch of this idea follows below)
Journal: Information
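
The entry above describes a universal adversarial perturbation as a single, image-agnostic noise pattern. The snippet below only illustrates that property by adding one precomputed perturbation tensor to an arbitrary batch of images; the delta tensor and the eps budget are assumptions for illustration, not details from the cited article.

import torch

def apply_universal_perturbation(images, delta, eps=10 / 255):
    """Add one precomputed, image-agnostic perturbation to every image in a batch.

    images : (N, 3, H, W) float tensor with values in [0, 1].
    delta  : (1, 3, H, W) universal perturbation shared by all images.
    """
    delta = torch.clamp(delta, -eps, eps)        # keep the shared noise within an L_inf budget
    return torch.clamp(images + delta, 0.0, 1.0)  # broadcasting applies the same noise to each image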

 
Yuwen Fu, E. Xia, Duan Huang and Yumei Jing    
Machine learning has been applied in continuous-variable quantum key distribution (CVQKD) systems to address the growing threat of quantum hacking attacks. However, the use of machine learning algorithms for detecting these attacks has uncovered a vulner... see more
Journal: Applied Sciences

 
Lei Chen, Zhihao Wang, Ru Huo and Tao Huang    
As an essential piece of infrastructure supporting cyberspace security technology verification, network weapons and equipment testing, attack defense confrontation drills, and network risk assessment, Cyber Range is exceptionally vulnerable to distribute... see more
Journal: Algorithms

 
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li    
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) such as surveillance and access control, adversarial examples may introduce more subtle threats to face recognition systems. In this paper,... see more
Journal: Algorithms