Adversarial Robustness and Its Applications in Healthcare

This project addresses a recent challenge in deep learning and its potential effects on healthcare systems that rely on deep models. Deep networks have demonstrated strong capabilities in detecting, classifying, and recognizing patterns across various domains. However, convolutional neural networks have been shown to be vulnerable to specifically crafted perturbations, known as adversarial perturbations. One of the state-of-the-art defense approaches is adversarial training, which, however, suffers from overfitting. The objectives of this project are twofold. First, an alternative adversarial defense approach will be designed to address the poor generalization performance of adversarial training. Second, the potential effects of adversarial attacks in the healthcare domain will be investigated, and the proposed defense technique will be extended to healthcare applications. It has been shown that adversarial attacks on medical images can alter clinical decision-making. In addition to medical images, Electronic Health Records (EHRs) are commonly used to train deep models in healthcare. EHRs have characteristics that differ from images, such as heterogeneity and temporal dependence, which can be modeled with Recurrent Neural Networks (RNNs). Adversarial training for RNN models has not been studied thoroughly in the literature. In summary, the main outcome targeted by this project is an adversarial training approach that generalizes better and can be adopted by different deep architectures.
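To make the min-max idea behind adversarial training concrete, here is a minimal plain-Python sketch: a toy logistic-regression classifier whose inner maximization is approximated by a single Fast Gradient Sign Method (FGSM) step on the input. The toy data, hyperparameters, and single-step attack are illustrative assumptions for exposition, not the defense proposed in this project.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def input_gradient(w, b, x, y):
    """Gradient of the logistic loss w.r.t. the input x, for label y in {-1, +1}."""
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    coeff = -y * (1.0 - sigmoid(margin))  # dL/d(margin) chained with d(margin)/dx
    return [coeff * wi for wi in w]

def fgsm(w, b, x, y, eps):
    """One-step L-infinity attack: move x by eps in the sign of the loss gradient."""
    g = input_gradient(w, b, x, y)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

def adversarial_train(data, eps=0.1, lr=0.5, epochs=200):
    """Adversarial training: minimize the loss on FGSM-perturbed inputs."""
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, y in data:
            x_adv = fgsm(w, b, x, y, eps)         # inner maximization (one step)
            margin = y * (sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
            coeff = -y * (1.0 - sigmoid(margin))  # shared factor in dL/dw and dL/db
            w = [wi - lr * coeff * xi for wi, xi in zip(w, x_adv)]
            b = b - lr * coeff
    return w, b
```

In practice the inner maximization is run for several projected gradient steps and the model is a deep network, but the alternating attack-then-update structure is the same.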

Funding Institution: 


Principal Investigator / Project Partner: 

İnci Meliha Baytaş


2020 to 2023

Project Code: 

