Abstract
In this paper we address the problem of automatic emotion recognition and classification from video. Current methods achieve excellent results on lab-controlled datasets of posed facial expressions, but there is still considerable room for improvement on `in the wild' datasets, where factors such as lighting and the angle of the face to the camera come into play. In these conditions, training on a small dataset can be very detrimental, and there are currently no sufficiently large datasets of adequately labeled faces for the task.

We use Generative Adversarial Networks to train models in a semi-supervised fashion, generating realistic face images in the process and thereby exploiting a large pool of unlabeled face images.
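As a rough sketch of how unlabeled faces can enter training, consider one common semi-supervised GAN objective (the exact formulation used in this work may differ): the discriminator doubles as a $K$-class emotion classifier extended with a $(K{+}1)$-th ``generated'' class, where $p_{\text{model}}$ denotes the discriminator's class distribution and $G$ the generator (symbols introduced here for illustration only):
\begin{align*}
\mathcal{L} &= \mathcal{L}_{\text{supervised}} + \mathcal{L}_{\text{unsupervised}},\\
\mathcal{L}_{\text{supervised}} &= -\,\mathbb{E}_{(x,y)\sim p_{\text{data}}}\big[\log p_{\text{model}}(y \mid x,\, y < K{+}1)\big],\\
\mathcal{L}_{\text{unsupervised}} &= -\,\mathbb{E}_{x\sim p_{\text{data}}}\big[\log\big(1 - p_{\text{model}}(y = K{+}1 \mid x)\big)\big]
 - \mathbb{E}_{x\sim G}\big[\log p_{\text{model}}(y = K{+}1 \mid x)\big].
\end{align*}
The supervised term uses only the labeled faces, while the unsupervised term lets unlabeled and generated faces shape the discriminator's features.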