Explainability and Interpretability in Generative Adversarial Networks

Although GANs can generate realistic images, GAN models often lack fine-grained control over the images they produce. Recently, several methods have been proposed to provide fine-grained control over the latent space. Most of these works find domain-independent, general directions such as rotation, zoom, or color change, while others propose techniques to explore domain-specific directions such as changing age, gender, or expression in facial images. In this project, we will develop new techniques to find interpretable directions in the latent space and thereby improve the interpretability of GAN models.
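
The core operation behind this line of work can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example in PyTorch: the generator is a stand-in (not a model from this project), and the direction d is random here, whereas the methods discussed above learn or discover such directions. It shows how an image is edited by moving a latent code z along a candidate direction d with strength alpha.

import torch
import torch.nn as nn

latent_dim = 128

# Hypothetical stand-in generator mapping a latent code to a 3x64x64 "image";
# in practice this would be a pretrained GAN generator (e.g., StyleGAN).
G = nn.Sequential(
    nn.Linear(latent_dim, 3 * 64 * 64),
    nn.Tanh(),
)

# A candidate interpretable direction in latent space (random for illustration;
# interpretable-direction methods learn or discover such vectors).
d = torch.randn(latent_dim)
d = d / d.norm()  # unit-normalize the direction

z = torch.randn(1, latent_dim)  # a sampled latent code

# Walk along the direction: generate G(z + alpha * d) for several strengths.
for alpha in (-3.0, 0.0, 3.0):
    img = G(z + alpha * d).view(1, 3, 64, 64)
    print(f"alpha={alpha:+.1f} -> image tensor of shape {tuple(img.shape)}")

If d corresponds to an interpretable attribute (say, zoom), varying alpha while holding z fixed should change only that attribute in the generated image.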

 

Project Advisor: Pınar Yanardağ

Project Status: 

Project Year: 2022 (Spring)
