Abstract
Smart city applications that request sensitive user information require a comprehensive data privacy solution. Federated learning (FL), often described as privacy by design, is an emerging machine learning (ML) paradigm that can provide such a solution. However, like other AI models, FL models are susceptible to adversarial attacks. In this paper, we propose federated adversarial training (FAT) strategies to produce global models that are robust to adversarial attacks. We apply two adversarial attack methods, projected gradient descent (PGD) and the fast gradient sign method (FGSM), to our air pollution dataset to generate adversarial samples, and we evaluate how well our FAT strategies defend against these attacks. Our experiments show that FGSM-based attacks have a negligible impact on the accuracy of the global models, whereas PGD-based attacks are considerably more effective. Nevertheless, our FAT strategies make the global models robust enough to withstand even PGD-based attacks: the accuracy of our FAT-PGD and FL-mixed-PGD models is 81.13% and 82.60%, respectively, compared with 91.34% for the baseline FL model. This accuracy drop of roughly 10 percentage points could potentially be mitigated by using a larger and more complex model. Our results demonstrate that FAT can enhance the security and privacy of sustainable smart city applications. They also show that robust global models can be trained from modest per-client datasets, which challenges the conventional wisdom that adversarial training requires massive datasets.
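For illustration, the sketch below shows generic FGSM and PGD adversarial-sample generation of the kind referenced above. It is not the paper's implementation: the model, loss, and the hyperparameters `epsilon`, `alpha`, and `steps` are assumed placeholders, written here in PyTorch.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss). epsilon is an assumed value."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb the input in the direction of the sign of the loss gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def pgd_attack(model, x, y, epsilon=0.1, alpha=0.02, steps=10):
    """Iterative PGD: repeated signed-gradient steps projected back onto the epsilon-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project the perturbation back into the epsilon-ball around the clean input.
        x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)
    return x_adv.detach()
```

In a FAT setting, each client would train locally on such adversarial samples (or a mix of clean and adversarial data) before its model update is aggregated into the global model.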