On the robustness and security of Artificial Intelligence systems

Dmitry Namiot    
Eugene Ilyushin    

Abstract

In the modern interpretation, artificial intelligence systems are machine learning systems; often this is narrowed down even further to artificial neural networks. The robustness of machine learning systems has traditionally been considered the main property that determines their applicability in critical areas (avionics, driverless vehicles, etc.). But is robustness alone sufficient for such applications? This is precisely the question this article is devoted to. Will robust systems always be reliable and safe to use in critical areas? The classical definition of robustness, for example, speaks of maintaining the performance of the system (the consistency of its outputs) under small perturbations of the input data. But this definition says nothing about the correctness of the results obtained. In the classical formulation, we are talking about small (imperceptible, in the case of images) changes to the data, and this "smallness" in fact has two very specific reasons. First, it corresponds to the human understanding of robustness: small (imperceptible) changes should not affect the result. Second, small changes allow data manipulations to be described formally. But if we are talking about M2M systems, then the size (degree) of the data change does not matter. Robustness alone is therefore not enough to conclude that a machine learning system is secure.
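The classical robustness definition discussed above can be made concrete. A minimal sketch (a hypothetical illustration, not code from the article): for a linear classifier, robustness at a point under an L2-bounded perturbation can be checked exactly, because the worst-case perturbation of norm at most eps shifts the decision score by exactly eps times the norm of the weight vector. The function names and numbers below are illustrative assumptions.

```python
# Illustration (assumed example): the classical robustness definition --
# the prediction must not change under any input perturbation of norm <= eps.
import numpy as np

def predict(w, b, x):
    """Simple linear classifier: returns class 0 or 1."""
    return int(w @ x + b > 0)

def is_robust(w, b, x, eps):
    """For a linear model the worst-case L2 perturbation of size eps
    moves the score w @ x + b by exactly eps * ||w||, so the point is
    robust iff its margin exceeds that amount."""
    margin = abs(w @ x + b)
    return margin > eps * np.linalg.norm(w)

w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.6, 0.5])              # score = 0.1, i.e. close to the boundary

print(predict(w, b, x))               # → 1
print(is_robust(w, b, x, eps=0.05))   # → True: small perturbations cannot flip it
print(is_robust(w, b, x, eps=0.2))    # → False: a perturbation of size 0.2 can
```

Note that the check says nothing about whether class 1 is the *correct* label for `x`, which is exactly the gap between robustness and correctness that the abstract points out.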