Applied Sciences, Vol. 12, No. 21 (2022)
ARTICLE
TITLE

On the Privacy–Utility Trade-Off in Differentially Private Hierarchical Text Classification

Dominik Wunderlich, Daniel Bernau, Francesco Aldà, Javier Parra-Arnau and Thorsten Strufe

Abstract

Hierarchical text classification consists of classifying text documents into a hierarchy of classes and sub-classes. Although Artificial Neural Networks have proved useful for this task, they can unfortunately leak information about their training data to adversaries because of training data memorization. Using differential privacy during model training can mitigate leakage attacks against trained models, enabling the models to be shared safely at the cost of reduced model accuracy. This work investigates the privacy–utility trade-off in hierarchical text classification with differential privacy guarantees, and it identifies neural network architectures that offer superior trade-offs. To this end, we use a white-box membership inference attack to empirically assess the information leakage of three widely used neural network architectures. We show that large differential privacy parameters already suffice to completely mitigate membership inference attacks, resulting in only a moderate decrease in model utility. More specifically, for large datasets with long texts, we observe that Transformer-based models achieve an overall favorable privacy–utility trade-off, while for smaller datasets with shorter texts, convolutional neural networks are preferable.
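
For illustration, the following is a minimal sketch of differentially private model training of the kind described in the abstract, using DP-SGD via the Opacus library on a toy text classifier. The model, synthetic dataset, and privacy parameters (noise_multiplier, max_grad_norm, delta) are illustrative assumptions and not the architectures or configurations evaluated in the paper.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

VOCAB_SIZE, NUM_CLASSES, SEQ_LEN, N_DOCS = 1000, 5, 50, 512

# Toy corpus: random token ids standing in for tokenized documents.
X = torch.randint(0, VOCAB_SIZE, (N_DOCS, SEQ_LEN))
y = torch.randint(0, NUM_CLASSES, (N_DOCS,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32)

class TinyTextClassifier(nn.Module):
    """Mean-pooled embedding classifier, kept small only for the sketch."""
    def __init__(self, vocab_size, num_classes, dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, tokens):
        return self.classifier(self.embedding(tokens).mean(dim=1))

model = TinyTextClassifier(VOCAB_SIZE, NUM_CLASSES)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# DP-SGD: per-example gradients are clipped to max_grad_norm and Gaussian
# noise scaled by noise_multiplier is added before every parameter update.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,  # assumption: more noise -> smaller epsilon, lower utility
    max_grad_norm=1.0,     # per-example gradient clipping bound
)

for epoch in range(3):
    for tokens, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(tokens), labels)
        loss.backward()
        optimizer.step()

# Privacy budget spent so far, for a chosen delta (assumption: delta = 1e-5).
print("epsilon =", privacy_engine.get_epsilon(delta=1e-5))

Sweeping noise_multiplier (or targeting a fixed epsilon) and recording test accuracy alongside membership-inference attack success is, in spirit, the kind of privacy–utility evaluation the paper carries out across convolutional and Transformer-based architectures.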

Similar Articles

 
Fenfang Li, Zhengzhang Zhao, Li Wang and Han Deng    
Sentence Boundary Disambiguation (SBD) is crucial for building datasets for tasks such as machine translation, syntactic analysis, and semantic analysis. Currently, most automatic sentence segmentation in Tibetan adopts the methods of rule-based and stat...
Journal: Applied Sciences

 
Jiaming Li, Ning Xie and Tingting Zhao    
In recent years, with the rapid advancements in Natural Language Processing (NLP) technologies, large models have become widespread. Traditional reinforcement learning algorithms have also started experimenting with language models to optimize training. ...
Journal: Algorithms

 
Zhe Yang, Yi Huang, Yaqin Chen, Xiaoting Wu, Junlan Feng and Chao Deng    
Controllable Text Generation (CTG) aims to modify the output of a Language Model (LM) to meet specific constraints. For example, in a customer service conversation, responses from the agent should ideally be soothing and address the user's dissatisfactio...
Journal: Applied Sciences

 
Andrei Paraschiv, Teodora Andreea Ion and Mihai Dascalu    
The advent of online platforms and services has revolutionized communication, enabling users to share opinions and ideas seamlessly. However, this convenience has also brought about a surge in offensive and harmful language across various communication m...
Journal: Information

 
Jingwen Yang and Ruohua Zhou    
Whisper speaker recognition (WSR) has received extensive attention from researchers in recent years, and it plays an important role in medical, judicial, and other fields. In particular, the establishment of a whisper dataset is very important for the study...
Journal: Information