Abstract
Artificial intelligence technologies have advanced rapidly in recent years, and a substantial body of research addresses the problem of explainable artificial intelligence (XAI). Various XAI methods are being developed to help users understand the logic behind machine learning models, and comparing these methods requires a means of evaluating them. This paper analyzes existing approaches to the evaluation of XAI methods, defines requirements for an evaluation system, and proposes metrics that capture the technical characteristics of the methods. A study using these metrics showed that the explanation quality of the SHAP and LIME methods degrades as correlation in the input data increases. Recommendations are also given for further research on the practical implementation of the metrics and on extending their scope of application.