Applied Sciences, Vol. 13, Issue 17 (2023)

Adversarial Attack Defense Method for a Continuous-Variable Quantum Key Distribution System Based on Kernel Robust Manifold Non-Negative Matrix Factorization

Yuwen Fu, E. Xia, Duan Huang and Yumei Jing

Abstract

Machine learning has been applied in continuous-variable quantum key distribution (CVQKD) systems to address the growing threat of quantum hacking attacks. However, using machine learning algorithms to detect these attacks has revealed a vulnerability to adversarial perturbations that can compromise security: by subtly perturbing the detection networks used in CVQKD, an attacker can induce significant misclassifications. To address this issue, we employ an adversarial-sample defense method based on non-negative matrix factorization (NMF), chosen for the nonlinear, high-dimensional nature of CVQKD data. Specifically, we use the Kernel Robust Manifold Non-negative Matrix Factorization (KRMNMF) algorithm to reconstruct input samples, thereby reducing the impact of adversarial perturbations. First, we extract attack features against CVQKD by modeling the adversary, Eve. Then, we design an artificial neural network (ANN) detection model to identify these attacks. Next, we introduce adversarial perturbations into the data generated by Eve. Finally, we apply the KRMNMF decomposition to extract features from the CVQKD data and mitigate the influence of the adversarial perturbations through reconstruction. Experimental results demonstrate that KRMNMF can effectively defend against adversarial attacks to a certain extent: its classification accuracy surpasses the commonly used ComDefend method by 32.2% and the JPEG method by 30.8%, improves on plain NMF by 20.8%, and outperforms other NMF-related algorithms. Moreover, it can complement other defense strategies, enhancing the overall defensive capability of CVQKD systems.
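To make the reconstruction defense concrete, the sketch below shows the general factorize-then-reconstruct pipeline the abstract describes, using scikit-learn's standard NMF as a stand-in for KRMNMF (whose kernel and manifold-regularized update rules are not reproduced here). The detector architecture, feature dimensions, component count, and placeholder data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neural_network import MLPClassifier

# Illustrative stand-in for KRMNMF: plain NMF from scikit-learn.
# X_train: non-negative feature matrix (n_samples x n_features);
# y_train: attack labels for the detection model. Both are random
# placeholders here, not real CVQKD traces.
rng = np.random.default_rng(0)
X_train = rng.random((200, 64))
y_train = rng.integers(0, 2, size=200)  # attack / no attack

# 1) Learn a low-rank non-negative basis X ~ W H on clean training data.
nmf = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
W_train = nmf.fit_transform(X_train)

# 2) Train the detector on *reconstructed* features, so training and
#    test samples pass through the same denoising step.
detector = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                         random_state=0)
detector.fit(W_train @ nmf.components_, y_train)

def defend_and_classify(X_adv):
    """Project (possibly adversarial) samples onto the learned basis and
    classify the reconstruction; perturbation components lying outside
    the low-rank manifold are largely discarded."""
    W = nmf.transform(np.clip(X_adv, 0.0, None))  # NMF needs non-negative input
    X_hat = W @ nmf.components_                   # reconstruction
    return detector.predict(X_hat)
```

Note the design choice this illustrates: the factorization acts purely as a pre-processing filter in front of the detector, which is why the abstract can claim the method complements other defense strategies rather than replacing them.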

Similar Articles
 
Viacheslav Moskalenko, Vyacheslav Kharchenko, Alona Moskalenko and Borys Kuzikov    
Artificial intelligence systems are increasingly being used in industrial applications, security and military contexts, disaster response complexes, policing and justice practices, finance, and healthcare systems. However, disruptions to these systems ca...
Journal: Algorithms

 
Mehdi Sadi, Bashir Mohammad Sabquat Bahar Talukder, Kaniz Mishty and Md Tauhidur Rahman    
Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead trained deep convolutional neural networks into a wrong prediction. Since these universal adversarial perturbations can se...
Journal: Information

 
Lei Chen, Zhihao Wang, Ru Huo and Tao Huang    
As an essential piece of infrastructure supporting cyberspace security technology verification, network weapons and equipment testing, attack-defense confrontation drills, and network risk assessment, Cyber Range is exceptionally vulnerable to distribute...
Journal: Algorithms

 
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li    
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) such as surveillance and access control, adversarial examples may introduce more subtle threats to face recognition systems. In this paper,...
Journal: Algorithms

 
Weizhen Xu, Chenyi Zhang, Fangzhen Zhao and Liangda Fang    
Adversarial attacks hamper the functionality and accuracy of deep neural networks (DNNs) by meddling with subtle perturbations to their inputs. In this work, we propose a new mask-based adversarial defense scheme (MAD) for DNNs to mitigate the negative e...
Journal: Algorithms