Abstract
Semi-supervised metric learning aims to learn a distance function from limited labeled data together with a large amount of unlabeled data, so as to gauge the similarity between any two instances better than a generic distance function. However, most existing semi-supervised metric learning methods rely on manifold assumptions to mine the rich discriminant information in the unlabeled data, which decouples the construction of the manifold regularizer from the subsequent metric learning. Moreover, these methods usually incur high computational or memory overhead. To address these issues, we develop a novel method entitled Information-Theoretic Large-Scale Semi-Supervised Metric Learning via Proxies (ISMLP). ISMLP jointly learns multiple proxy vectors and a Mahalanobis matrix, and formulates semi-supervised metric learning as the optimization of probability distributions parameterized by the Mahalanobis distances between each instance and the proxy vectors. Following the entropy-regularization principle, ISMLP maximizes the entropy of the labeled data and minimizes that of the unlabeled data, so that the labeled and unlabeled parts are integrated in a principled way. Furthermore, the time complexity of the proposed method is linear in the number of instances, so it scales to large datasets without incurring excessive runtime. Experiments on multiple datasets demonstrate the superiority of the proposed method over the competing methods used in the experiments.
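To make the proxy-based formulation concrete, the following is a minimal sketch under assumed notation; the symbols $x_i$, $c_k$, $M$, and $K$ are illustrative and the exact loss in the paper may differ. Each instance $x_i$ can be assigned a distribution over the $K$ learned proxies $\{c_k\}_{k=1}^{K}$ via a softmax over squared Mahalanobis distances,
\[
p(k \mid x_i) \;=\; \frac{\exp\!\big(-(x_i - c_k)^\top M \,(x_i - c_k)\big)}{\sum_{k'=1}^{K} \exp\!\big(-(x_i - c_{k'})^\top M \,(x_i - c_{k'})\big)},
\]
with the per-instance entropy $H(x_i) = -\sum_{k=1}^{K} p(k \mid x_i)\,\log p(k \mid x_i)$ maximized over labeled instances and minimized over unlabeled ones, as described above. Under this sketch, each instance interacts only with the $K$ proxies rather than with all other instances, so one pass over $n$ instances in $d$ dimensions costs roughly $O(nKd^2)$, consistent with the claimed linear dependency on the number of instances.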