Explainability and Interpretability in Generative Adversarial Networks
Although GANs can generate realistic images, they often lack fine-grained control over their outputs. Recently, several methods have been proposed to provide such control via the latent space. Most of these works find domain-agnostic, general directions such as rotation, zoom, or color change, while others propose techniques to explore domain-specific directions such as age, gender, or expression in facial images. In this work, we will develop new techniques to find interpretable directions in the latent space, improving the interpretability of GAN models.
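The latent-direction editing described above can be sketched as follows. This is a minimal illustration, not any specific method from the literature: the generator below is a hypothetical placeholder standing in for a trained GAN, and the direction `d` stands in for a discovered semantic direction (e.g. "zoom" or "age"); the edit itself is the usual traversal z' = z + alpha * d.

```python
import numpy as np

def generator(z):
    # Hypothetical stand-in for a trained GAN generator: a real model
    # maps a latent code z to an image; here we only mimic the mapping
    # with a fixed random projection so the sketch is runnable.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((z.shape[-1], 64))
    return np.tanh(z @ W).reshape(8, 8)

def edit_latent(z, direction, alpha):
    # Traverse the latent space along a unit-normalized direction:
    # z' = z + alpha * d. Varying alpha sweeps the semantic attribute
    # that the direction encodes (rotation, zoom, age, ...).
    d = direction / np.linalg.norm(direction)
    return z + alpha * d

rng = np.random.default_rng(42)
z = rng.standard_normal(128)      # latent code of one sample
d = rng.standard_normal(128)      # assumed: a discovered semantic direction
for alpha in (-3.0, 0.0, 3.0):    # edit strength; 0 reproduces the original
    img = generator(edit_latent(z, d, alpha))
    print(alpha, img.shape)
```

Interpretability methods in this setting differ mainly in how the direction `d` is obtained, e.g. by supervised attribute classifiers, by unsupervised decomposition of the generator's weights, or by optimization in the latent space; once found, the edit is applied exactly as above.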