Analysis of Model Poisoning Attacks during Training in Federated Learning

Federated learning is a distributed machine learning technique in which a central server aggregates the models trained locally by the clients. This aggregation step exposes the learning system to various attacks on its robustness. In a model poisoning attack, malicious clients manipulate the model updates they send to the server during training, and the damage typically becomes apparent only after training has finished. In this project, we will use a recently developed method* for analyzing the effect of model poisoning attacks that might occur during training; a simple sketch of such an attack is given after the reference below.

*: C. Çağlayan and A. Yurdakul, “A Clustering-Based Scoring Mechanism for Malicious Model Detection in Federated Learning,” 25th Euromicro Conference on Digital System Design (DSD’22), August 31 - September 2, 2022, Gran Canaria, Spain.
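To make the setting concrete, here is a minimal Python sketch under illustrative assumptions: one FedAvg round with ten clients, one of which performs a model-replacement style poisoning attack, followed by a naive distance-based suspicion score. The client behavior, the attacker's target weights, and the scoring rule are all hypothetical choices for illustration; they are not the clustering-based scoring mechanism of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def honest_update(global_w):
    """An honest client: a small, benign local training step (simulated)."""
    return global_w + rng.normal(0.0, 0.01, size=global_w.shape)

def poisoned_update(global_w, target, n_clients):
    """A malicious client: scales its update so that, after averaging over
    n_clients, the global model lands near the attacker-chosen target
    (a model-replacement style poisoning attack)."""
    return global_w + n_clients * (target - global_w)

def fedavg(updates):
    """Server-side aggregation: unweighted mean of the client models."""
    return np.mean(updates, axis=0)

n_clients = 10
global_w = np.zeros(4)
attacker_target = np.full(4, 5.0)  # hypothetical weights the attacker wants

updates = [honest_update(global_w) for _ in range(n_clients - 1)]
updates.append(poisoned_update(global_w, attacker_target, n_clients))

# A naive distance-based suspicion score (loosely inspired by, but NOT,
# the cited paper's clustering-based scoring): updates far from the
# coordinate-wise median of all updates are flagged as suspicious.
median = np.median(updates, axis=0)
scores = np.linalg.norm(np.asarray(updates) - median, axis=1)
print("suspicion scores:", np.round(scores, 2))  # the attacker stands out

global_w = fedavg(updates)
print("global model after one round:", np.round(global_w, 2))
# Despite 9 honest clients, the average sits close to the attacker's target.
```

Scaling the poisoned update by the number of clients cancels the 1/n factor of the server's average, which is why a single attacker can steer the global model in this sketch.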

Project Advisor: Arda Yurdakul

Project Status:

Project Year: 2022 Fall
