Article · Applied Sciences, Vol. 12, No. 18 (2022)

Evading Logits-Based Detections to Audio Adversarial Examples by Logits-Traction Attack

Songshen Han, Kaiyong Xu, Songhui Guo, Miao Yu and Bo Yang

Abstract

Automatic Speech Recognition (ASR) provides a new way of human-computer interaction. However, it is vulnerable to adversarial examples, which are obtained by deliberately adding perturbations to the original audio. Thorough study of the universal features of adversarial examples is essential to prevent potential attacks. Previous research has shown that classic adversarial examples have a logits distribution different from that of normal speech. This paper proposes a Logits-Traction attack to eliminate this difference at the statistical level. Experiments on the LibriSpeech dataset show that the proposed attack reduces the accuracy of the LOGITS NOISE detection to 52.1%. To further verify the effectiveness of this approach in attacking logits-based detection, three different features quantifying the dispersion of logits are constructed in this paper. Furthermore, a richer target sentence is adopted for the experiments. The results indicate that these features can detect baseline adversarial examples with an accuracy of about 90% but cannot effectively detect Logits-Traction adversarial examples, proving that the Logits-Traction attack can evade logits-based detection methods.
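
The abstract does not spell out the three dispersion features, so the sketch below is only a rough illustration of the idea, assuming a CTC-style acoustic model that emits a (frames × vocabulary) logits matrix; the specific statistics (per-frame variance, softmax entropy, and the gap between the top logit and the mean) and the function name logits_dispersion_features are hypothetical choices, not the paper's exact construction.

```python
import numpy as np

def logits_dispersion_features(logits):
    """Summarize how spread out the per-frame logits of an utterance are.

    logits: array of shape (T, V), one row of vocabulary scores per audio
    frame, as emitted by a CTC-style ASR acoustic model. Returns a small
    feature vector that a binary classifier could use to separate normal
    speech from adversarial examples.
    """
    logits = np.asarray(logits, dtype=np.float64)

    # Per-frame variance of the raw logits.
    frame_var = logits.var(axis=1)

    # Per-frame entropy of the softmax distribution over the vocabulary.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    frame_entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

    # Per-frame gap between the top logit and the mean logit.
    top_gap = logits.max(axis=1) - logits.mean(axis=1)

    # Aggregate each per-frame statistic over the whole utterance.
    return np.array([frame_var.mean(), frame_entropy.mean(), top_gap.mean()])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in logits for a 100-frame utterance over a 29-symbol vocabulary.
    fake_logits = rng.normal(size=(100, 29))
    print(logits_dispersion_features(fake_logits))
```

Under this reading, a Logits-Traction attack would add a penalty to the adversarial optimization that pulls such statistics toward values typical of benign speech, which is consistent with the detection accuracy of 52.1% reported above.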

Similar articles

       
 
Wei Liu, Junxing Cao, Jiachun You and Haibo Wang    
Vector decomposition of P- and S-wave modes from elastic seismic wavefields is a key step in elastic reverse-time migration (ERTM) to effectively improve multi-wave imaging accuracy. Most previously developed methods based on the apparent velocities ...
Journal: Applied Sciences

 
Jiaping Wu, Zhaoqiang Xia and Xiaoyi Feng    
In recent years, adversarial examples have aroused widespread research interest and raised concerns about the safety of CNNs. We study adversarial machine learning inspired by a support vector machine (SVM), where the decision boundary with maximum margin ...
Journal: Applied Sciences

 
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li    
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) for applications such as surveillance and access control, adversarial examples may introduce more subtle threats to such systems. In this paper, ...
Journal: Algorithms

 
Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He    
Deep models are widely used and have been shown to harbor hidden security risks. An adversarial attack can bypass traditional means of defense: by modifying the input data, an attack on the deep model is realized while remaining imperceptible ...
Journal: Information

 
Dejian Guan, Wentao Zhao and Xiao Liu    
Recent studies show that deep neural network (DNN)-based object recognition algorithms rely overly on object textures rather than global object shapes, and that DNNs are also vulnerable to adversarial perturbations that are imperceptible to humans. Based on these two ...
Journal: Applied Sciences