Analysis of Model Poisoning Attacks During Training in Federated Learning
Federated learning is a distributed machine learning technique in which clients train models locally on their own data and a central server aggregates the client models into a global model. Various attacks can compromise the robustness of this learning scheme. In a model poisoning attack, a malicious client submits a manipulated model update before training is finished, corrupting the aggregated global model. In this project, we will use a recently developed method* to analyze the effect of model poisoning attacks that may occur during training.
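To illustrate why a single malicious update matters, the sketch below shows a toy server-side aggregation (plain federated averaging over weight vectors, not the detection method of the cited paper). The client weight values and the scaled malicious update are hypothetical, chosen only to show how one poisoned client can drag the global model away from the honest consensus.

```python
def fed_avg(updates):
    """Server-side aggregation: element-wise mean of client weight vectors."""
    n = len(updates)
    return [sum(w) / n for w in zip(*updates)]

# Honest clients send similar weight vectors (toy values).
honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]

# A malicious client sends a heavily scaled update (model poisoning).
poisoned = honest + [[-50.0, 80.0]]

print(fed_avg(honest))    # near the honest consensus [1.0, 2.0]
print(fed_avg(poisoned))  # pulled far off by the single malicious update
```

With plain averaging, the global model moves linearly with the attacker's update, which is why scoring or clustering mechanisms that flag outlier client models are needed on the server side.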
*: C. Çağlayan and A. Yurdakul, “A Clustering-Based Scoring Mechanism for Malicious Model Detection in Federated Learning,” 25th Euromicro Conference on Digital System Design (DSD’22), August 31 - September 2, 2022, Gran Canaria, Spain.