Ali Abbasi Tadi, Saroj Dayal, Dima Alhadidi and Noman Mohammed
This paper explores the vulnerability of machine learning models to membership inference attacks, which aim to determine whether a specific record was part of the training dataset. Federated learning allows multiple parties to independently train a...
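A membership inference attack of the kind described above can be illustrated with a minimal confidence-threshold sketch: an over-trained model tends to be more confident on records it has seen, so an attacker guesses "member" when the model's confidence in the true label exceeds a threshold. The toy data, the logistic target model, and the threshold `tau` below are illustrative assumptions, not details from the listed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Two Gaussian clusters with labels 0/1 (synthetic toy data)."""
    X0 = rng.normal(-1.0, 1.0, size=(n, 2))
    X1 = rng.normal(+1.0, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(50)          # members
X_out, y_out = make_data(50)              # non-members, same distribution

# Tiny logistic-regression "target model", deliberately over-trained.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    g = p - y_train
    w -= 0.5 * (X_train.T @ g) / len(y_train)
    b -= 0.5 * g.mean()

def confidence(X, y):
    """Model's confidence in each record's true label."""
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return np.where(y == 1, p, 1 - p)

# Threshold attack: guess "member" when confidence exceeds tau.
tau = 0.5  # hypothetical threshold; in practice calibrated via shadow models
guess_in = confidence(X_train, y_train) > tau
guess_out = confidence(X_out, y_out) > tau
accuracy = (guess_in.sum() + (~guess_out).sum()) / (len(guess_in) + len(guess_out))
```

Attack accuracy well above 0.5 indicates the model leaks membership information; in practice the threshold is calibrated on shadow models trained by the attacker.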
Zacharias Anastasakis, Terpsichori-Helen Velivassaki, Artemis Voulkidis, Stavroula Bourou, Konstantinos Psychogyios, Dimitrios Skias and Theodore Zahariadis
Federated Learning is identified as a reliable technique for the distributed training of ML models. Specifically, a set of dispersed nodes may collaborate through a federation to produce a jointly trained ML model without disclosing their data to each other...
Dominik Wunderlich, Daniel Bernau, Francesco Aldà, Javier Parra-Arnau and Thorsten Strufe
Hierarchical text classification consists of classifying text documents into a hierarchy of classes and sub-classes. Although Artificial Neural Networks have proved useful for this task, they can unfortunately leak training data information to adversaries...
Joseph Pedersen, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu and Isabelle Guyon
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks in both the black-box and white-box settings, when the trainer and the trained model are publicly released...
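One common black-box defense against confidence-based membership inference is output masking: releasing only a coarsened top-k prediction vector rather than full-precision probabilities. The function below is a generic illustrative sketch of this defense class, not the method of the listed paper; the parameters `k` and `decimals` are assumptions.

```python
import numpy as np

def defended_predict(probs, k=1, decimals=1):
    """Release only rounded top-k probabilities (output-masking defense).

    Coarsening the prediction vector removes the fine-grained confidence
    signal that threshold-based membership inference attacks exploit.
    """
    probs = np.asarray(probs, dtype=float)
    out = np.zeros_like(probs)
    top = np.argsort(probs)[::-1][:k]      # indices of the k largest scores
    out[top] = np.round(probs[top], decimals)
    total = out.sum()
    return out / total if total > 0 else out  # renormalize if possible
```

For example, with `k=1` the defender reveals only which class was predicted, discarding the calibrated confidence entirely.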