Abstract
The paper addresses the problem of adversarial attacks on machine learning systems. Such attacks are understood as deliberate manipulations of elements of the machine learning pipeline (the training data, the model itself, the test data) aimed either at inducing a desired behavior of the system or at preventing it from working correctly. At its core, this problem stems from a property fundamental to all machine learning systems: the data encountered at the testing (operation) stage differ from the data on which the system was trained. Consequently, a machine learning system can fail even without any targeted action, simply because the operational stage presents data for which the generalization achieved during training does not hold. An attack on a machine learning system is, in essence, a targeted push of the system into a region of the data space on which it was not trained. Today this problem, generally associated with the robustness of machine learning systems, is the main obstacle to the use of machine learning in critical applications.
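As an illustrative sketch only (not a method proposed in the paper), the classic Fast Gradient Sign Method of Goodfellow et al. shows how such a targeted push can be realized at test time: the input is perturbed in the direction of the sign of the loss gradient, driving the model onto data it does not generalize to. The names model, loss_fn, and the tensor shapes below are assumed placeholders.

    import torch

    def fgsm_attack(model, loss_fn, x, y, eps=0.03):
        # Craft an adversarial example with a single gradient-sign step (FGSM).
        # model, loss_fn, x, y are placeholders: any differentiable classifier,
        # its loss function, a batch of inputs, and the corresponding true labels.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)   # loss on the (initially clean) input
        loss.backward()                   # gradient of the loss w.r.t. the input
        # Step in the direction that increases the loss most,
        # staying within an L_inf ball of radius eps around the original input.
        return (x_adv + eps * x_adv.grad.sign()).detach()

A perturbation of this kind is typically imperceptible to a human observer, yet it moves the input outside the region where the trained model's decisions remain reliable.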