Award conferral confirmation letter

I got my “Award Conferral Confirmation Letter” and officially graduated :sparkles: :smile:



Keywords
  • Trustworthy machine learning
  • AI safety
  • Machine learning robustness
  • Adversarial attacks and defences

Advisors
  • Ranasinghe, Damith Chinthana
  • Abbasnejad, Ehsan

Abstract

Deep neural networks (DNNs) achieve state-of-the-art performance across numerous machine learning tasks. However, DNN models are susceptible to attacks in the deployment phase, where adversarial examples (AEs) pose significant threats. In the computer vision domain, adversarial examples are typically maliciously modified inputs that look similar to the original input and are constructed under white-box settings by adversaries with full knowledge of, and access to, a victim model. However, recent studies have shown that extracting information solely from the outputs of a machine learning model is enough to craft adversarial perturbations against black-box models, making this a practical threat to real-world systems. This is significant given the growing number of Machine Learning as a Service (MLaaS) providers, including Google, Microsoft and IBM, and of applications incorporating these models. Therefore, this dissertation studies the weaknesses of DNNs to attacks in black-box settings and seeks to develop mechanisms that can defend DNNs against these attacks.
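To make the black-box threat model concrete, here is a minimal sketch (not code from the thesis) of a score-based attack in the spirit of SimBA (Guo et al., 2019): the attacker never sees weights or gradients, only queries output probabilities, and greedily keeps single-coordinate perturbations that lower the true-class score. The `query_model` classifier below is a hypothetical toy linear stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deployed classifier: the attacker can only
# query it for output probabilities, never inspect weights or gradients.
W = rng.normal(size=(10, 32 * 32))  # toy linear "model" with 10 classes

def query_model(x):
    """Return softmax probabilities for a flattened input x (query-only access)."""
    logits = W @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def simba_attack(x, true_label, eps=0.2, max_queries=2000):
    """SimBA-style attack: try one random pixel direction at a time and keep
    the step only if the true-class probability drops."""
    x_adv = x.copy()
    p_true = query_model(x_adv)[true_label]
    dims = rng.permutation(x_adv.size)
    for i, d in enumerate(dims[:max_queries]):
        for sign in (+1.0, -1.0):
            candidate = x_adv.copy()
            candidate.flat[d] = np.clip(candidate.flat[d] + sign * eps, 0.0, 1.0)
            p_new = query_model(candidate)[true_label]
            if p_new < p_true:  # keep the step if it hurt the true class
                x_adv, p_true = candidate, p_new
                break
        if query_model(x_adv).argmax() != true_label:
            return x_adv, i  # prediction flipped: attack succeeded
    return x_adv, max_queries

x = rng.uniform(0.0, 1.0, size=32 * 32)   # a toy "image"
label = int(query_model(x).argmax())      # its current predicted class
x_adv, queries = simba_attack(x, label)
print("new prediction:", query_model(x_adv).argmax(),
      "after", queries, "queries; L_inf distance:", np.abs(x_adv - x).max())
```

Against a real MLaaS endpoint, `query_model` would be an API call, so the number of queries needed becomes the attacker's main constraint, and limiting or monitoring query access is one natural defence angle.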

Project pages of published papers:

If you can change your mind, you can change your life. —William James