Abstract
The paper presents a solution for practical federated learning settings in which a dataset is partitioned among potentially malicious clients. One such case is training a model on edge medical devices, where a compromised device could not only degrade model accuracy but also pose public safety risks.