Applied Sciences, Vol. 13, Issue 11 (2023)
ARTICLE
TITLE

A Multi-Input Fusion Model for Privacy and Semantic Preservation in Facial Image Datasets

Yuanzhe Yang    
Zhiyi Niu    
Yuying Qiu    
Biao Song    
Xinchang Zhang and Yuan Tian    

Abstract

The widespread application of multimedia technologies such as video surveillance, online meetings, and drones facilitates the acquisition of large amounts of data that may contain facial features, raising significant privacy concerns. Protecting privacy while preserving the semantic content of facial images is a challenging but crucial problem. Contemporary image privacy protection techniques do not incorporate the semantic attributes of faces and disregard the protection of dataset-level privacy. In this paper, we propose the Facial Privacy and Semantic Preservation (FPSP) model, which replaces facial features with those of similar faces to achieve identity concealment, while adding a semantic evaluation term to the loss function to preserve semantic features. The proposed model is versatile and efficient across different task scenarios, preserving image utility while protecting privacy. Our experiments on the CelebA dataset demonstrate that the model achieves a semantic preservation rate of 77% while concealing the identities in the dataset's facial images.
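The abstract does not give the exact loss formulation, only that a semantic evaluation term is added alongside the identity-concealment objective. The following is a minimal, hypothetical PyTorch sketch of such a combined loss; the function and parameter names (fpsp_style_loss, lambda_sem) are illustrative assumptions, not taken from the paper.

# Hypothetical sketch: identity concealment + semantic preservation loss.
# All names and the weighting scheme are assumptions for illustration only.
import torch
import torch.nn.functional as F

def fpsp_style_loss(anon_id_feat, donor_id_feat, orig_sem, anon_sem, lambda_sem=1.0):
    """Combined loss over an anonymized face image.

    anon_id_feat  - identity embedding of the anonymized image
    donor_id_feat - identity embedding of the similar replacement face
    orig_sem      - semantic attribute predictions for the original image
    anon_sem      - semantic attribute predictions for the anonymized image
    """
    # Identity-concealment term: push the anonymized identity embedding
    # toward the similar donor's identity rather than the original one.
    identity_term = 1.0 - F.cosine_similarity(anon_id_feat, donor_id_feat, dim=-1).mean()
    # Semantic-preservation term: keep attribute predictions (e.g. smiling,
    # eyeglasses) of the anonymized image close to those of the original.
    semantic_term = F.mse_loss(anon_sem, orig_sem)
    return identity_term + lambda_sem * semantic_term

In this sketch, lambda_sem trades off how strongly semantic attributes are preserved against how aggressively the identity is replaced; the paper's actual objective and weighting may differ.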
