Abstract
This paper addresses the formal verification of machine learning systems. As machine learning is increasingly deployed in safety-critical systems (systems in which erroneous decisions and actions carry a very high cost), the demand for assurance of the robustness of such systems is growing. How will a trained machine learning system behave on data that differs from the set on which it was trained? Is it possible to verify, or even prove, that the behavior the system demonstrated on the initial dataset will always hold? Several approaches attempt to answer these questions. This article surveys existing approaches to formal verification. All of the approaches considered already have practical applications, but the main question that remains open is scalability: how applicable are these approaches to modern networks with millions or even billions of parameters?