Home / Future Internet / Vol. 14, No. 4 (2022) / Article

Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection

João Vitorino, Nuno Oliveira and Isabel Praça

Abstract

Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer perceptron (MLP) and random forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.
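The abstract describes A2PM as adapting perturbation patterns independently to each class so that the perturbed tabular data stays valid and coherent. A minimal sketch of that idea (not the authors' implementation; `fit_class_patterns`, `perturb`, and the min/max interval strategy are illustrative assumptions) could constrain each perturbed feature to the value ranges observed for the sample's class:

```python
import numpy as np

def fit_class_patterns(X, y):
    """Record per-class feature minima and maxima, a hypothetical
    stand-in for the adapted 'pattern sequences' in the abstract."""
    patterns = {}
    for c in np.unique(y):
        Xc = X[y == c]
        patterns[c] = (Xc.min(axis=0), Xc.max(axis=0))
    return patterns

def perturb(x, c, patterns, scale=0.1, rng=None):
    """Perturb sample x of class c, clipping to the class's observed
    feature ranges so the example remains valid for the domain."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = patterns[c]
    # Step size is proportional to each feature's class-specific range.
    step = rng.uniform(-scale, scale, size=x.shape) * (hi - lo)
    return np.clip(x + step, lo, hi)
```

Under this sketch, a gray-box attacker who knows only class-level feature statistics can still generate perturbations that respect domain constraints, which is the realism property the paper emphasizes.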

Similar articles

Ran Chen, Jing Zhao, Xueqi Yao, Sijia Jiang, Yingting He, Bei Bao, Xiaomin Luo, Shuhan Xu and Chenxi Wang
Generative Adversarial Networks (GANs) possess a significant ability to generate novel images that adhere to specific guidelines across multiple domains. GAN-assisted generative design is a design method that can automatically generate design schemes wit...
Journal: Buildings

 
James Msughter Adeke, Guangjie Liu, Junjie Zhao, Nannan Wu and Hafsat Muhammad Bashir
Machine learning (ML) models are essential to securing communication networks. However, these models are vulnerable to adversarial examples (AEs), in which malicious inputs are modified by adversaries to produce the desired output. Adversarial training i...
Journal: Future Internet

 
Ali Mirzaei, Hossein Bagheri and Iman Khosravi
Crop classification using remote sensing data has emerged as a prominent research area in recent decades. Studies have demonstrated that fusing synthetic aperture radar (SAR) and optical images can significantly enhance the accuracy of classification. Ho...

 
Haoxuan Qiu, Yanhui Du and Tianliang Lu
To protect images from the tampering of deepfake, adversarial examples can be made to replace the original images by distorting the output of the deepfake model and disrupting its work. Current studies lack generalizability in that they simply focus on t...
Journal: Future Internet

 
Sampada Tavse, Vijayakumar Varadarajan, Mrinal Bachute, Shilpa Gite and Ketan Kotecha
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction proc...
Journal: Future Internet