Live Online training

Hands-on adversarial machine learning

Build practical attacks and defenses

Yacin Nadji

Machine learning has become commonplace in software engineering and will continue to grow in importance. Currently, most work focuses on improving classifier accuracy. However, as more and more models interact with the real world, practitioners must consider how resilient their models are against adversarial manipulation. Successful attacks can have serious implications, like crashing a car, misclassifying malicious code, or enabling fraud.

Join Yacin Nadji to learn how to think like an adversary so that you can build more resilient machine learning systems. You'll discover how to use free and open source tools to construct attacks against and defenses for machine learning models, as well as how to holistically identify potential points of attack an adversary could exploit. You'll leave able to critically examine a machine learning system for weaknesses, mount attacks to surface problems, and implement and evaluate practical defenses.

What you'll learn and how you can apply it

By the end of this live online course, you’ll be able to:

  • Construct a threat model for an arbitrary machine learning system, including enumerating potential adversaries, identifying relevant attack types, mapping the attack surface, and highlighting likely attacks
  • Run open source adversarial attacks and defenses
  • Design and implement simple adversarial attacks and defenses

This training course is for you because...

  • You are a machine learning practitioner or tech lead (data scientist, ML engineer, etc.) who wants to add adversarial resiliency to your models.

Prerequisites

  • Familiarity with ML applications in the real world and Jupyter notebooks
  • Intermediate experience with designing and building ML systems
  • A working knowledge of Python or a similar scripting/object-oriented language

About your instructor

  • Yacin Nadji is an engineer at Security Scorecard, where he applies machine learning to identify companies’ infrastructure and understand their security risk. He received his Ph.D. from the School of Computer Science at the Georgia Institute of Technology, with a focus on computer security. He has published 20 academic papers with hundreds of citations, many focused on applying ML to solve security problems.

Schedule

The timeframes are only estimates and may vary according to how the class is progressing.

Monotonic classifiers (10 minutes)

  • Lecture: The relationship between monotonicity and classifiers derived from sequences of data (e.g., execution logs)
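
To make the monotonicity idea concrete, here is a minimal sketch in Python. The log format and count-style features are illustrative stand-ins, not the course's dataset: appending events to a trace can only increase each count, which is the property monotonic classifiers exploit.

```python
# Illustrative only: count features over a hypothetical execution log.
# Appending events can never decrease a count, so an adversary who can
# only add behavior cannot push these feature values down.
from collections import Counter

def count_features(events):
    """Map a sequence of log events to per-event-type counts."""
    return Counter(events)

trace = ["open", "read", "read", "connect"]
extended = trace + ["write", "connect"]   # adding behavior is easy; removing it is not

print(count_features(trace))      # Counter({'read': 2, 'open': 1, 'connect': 1})
print(count_features(extended))   # every count is >= its value on the shorter trace
```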

Direct attack (10 minutes)

  • Hands-on exercises: Defeat a malicious-binary classifier by directly altering its feature vectors (see the sketch after this list)
  • Group discussion
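
A minimal sketch of this direct attack, using a toy linear model in scikit-learn rather than the course's malware dataset; the feature values, perturbation size, and class setup are all illustrative assumptions.

```python
# Toy direct attack: edit the feature vector itself until the label flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(200, 5))
X_malicious = rng.normal(2.0, 1.0, size=(200, 5))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 200 + [1] * 200)           # 0 = benign, 1 = malicious

clf = LogisticRegression().fit(X, y)

x = X_malicious[0].copy()
print("before:", clf.predict([x])[0])          # 1: flagged as malicious

# Push the most malicious-leaning feature down until the decision flips.
worst = np.argmax(clf.coef_[0])
x[worst] -= 10.0
print("after: ", clf.predict([x])[0])          # typically 0: the sample now evades the classifier
```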

Indirect attack (20 minutes)

  • Hands-on exercises: Identify features that are abusable or not robust; choose one feature and explain why it's a strong candidate; describe how you could alter program behavior to change this feature’s value; defeat a malicious-binary classifier by altering program behavior rather than the feature vector (a sketch of the feature audit follows this list)
  • Group discussion
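
A sketch of the feature-audit step, rebuilt on the same kind of toy linear model as before; the feature names are hypothetical stand-ins for behaviors a program could actually change.

```python
# Toy feature audit: rank model weights to spot features an adversary could abuse.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(200, 5))
X_benign[:, 4] += 2.0                    # benign programs import many signed libraries (toy assumption)
X_malicious = rng.normal(2.0, 1.0, size=(200, 5))
X_malicious[:, 4] -= 2.0                 # malware in this toy dataset rarely does
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

names = ["n_registry_writes", "n_network_conns", "n_dropped_files",
         "section_entropy", "n_signed_imports"]   # hypothetical feature names

# Features with strongly negative weights pull the score toward "benign".
# If the behavior behind one is cheap for an attacker to add (e.g., importing
# more signed libraries), the feature is abusable: change the program, and
# the feature value follows.
for name, w in sorted(zip(names, clf.coef_[0]), key=lambda t: t[1]):
    print(f"{name:18s} {w:+.3f}")
```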

Defense (20 minutes)

  • Hands-on exercises: Identify negative weights in an example model; explain what this means in the context of gradient descent; eliminate the negative weights you identified from the model; demonstrate that this nullifies the previous attacks (see the sketch after this list)
  • Group discussion
  • Break (10 minutes)
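
A sketch of that defense on the same toy model; clipping the negative weights to zero is a stand-in for retraining under a non-negativity (monotonicity) constraint, and it neutralizes the attack that inflated a benign-leaning feature.

```python
# Toy defense: drop benign-leaning (negative) weights so inflating those
# features no longer helps the attacker.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(200, 5))
X_benign[:, 4] += 2.0
X_malicious = rng.normal(2.0, 1.0, size=(200, 5))
X_malicious[:, 4] -= 2.0
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

x = X_malicious[0].copy()
x[4] += 50.0                                    # indirect attack: inflate the benign-leaning feature
print("attacked, original model:", clf.predict([x])[0])    # 0: the sample evades detection

# Stand-in for retraining with a non-negativity constraint on the weights.
clf.coef_[0] = np.clip(clf.coef_[0], 0.0, None)
print("attacked, defended model:", clf.predict([x])[0])     # 1: the attack no longer works
```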

Retraining and breaking a DGA (domain generation algorithm) classifier (20 minutes)

  • Lecture: Why simply retraining with adversarial samples is effective; drawbacks, such as loss of accuracy
  • Hands-on exercises: Change the character distribution of malicious DGA domains so they are classified as benign; retrain with the adversarial samples you generated in the last exercise (see the sketch after this list)
  • Group discussion
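
A toy sketch of the character-distribution attack; the "detector" is reduced to a single vowel-ratio feature, a deliberate simplification of the character-distribution features real DGA classifiers use, and the domain strings are made up.

```python
# Toy attack: shift a DGA domain's character distribution toward English-looking
# text so a distribution-based feature moves toward the benign range.
def vowel_ratio(label):
    """Fraction of characters that are vowels (a crude character-distribution feature)."""
    return sum(ch in "aeiou" for ch in label) / len(label)

dga = "xkqzjvwpynf"                  # random-looking algorithmically generated label
padded = dga + "mailloginsecure"     # append dictionary-like text to reshape the distribution

print(f"{vowel_ratio(dga):.2f} -> {vowel_ratio(padded):.2f}")   # 0.00 -> ~0.27, closer to English text

# Adversarial retraining then amounts to appending such samples (still labeled
# malicious) to the training set and refitting, e.g.:
#   clf.fit(np.vstack([X, X_adv]), np.concatenate([y, y_adv]))   # names are placeholders
```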

CleverHans and sophisticated whitebox attacks (40 minutes)

  • Lecture: Open source tools and adversarial sample generation techniques
  • Hands-on exercises: Run all of the attacks against an MNIST classifier (see the sketch after this list)
  • Group discussion
  • Q&A
  • Break (10 minutes)
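
As a taste of the tooling, here is a minimal sketch that assumes CleverHans 4.x's PyTorch interface; the model is an untrained stand-in and the input is random noise shaped like an MNIST digit, whereas the exercise uses a trained classifier and the full set of attacks.

```python
# Sketch only: FGSM via CleverHans against an MNIST-shaped input.
import numpy as np
import torch
import torch.nn as nn
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # untrained stand-in classifier
x = torch.rand(1, 1, 28, 28)                                  # stand-in for an MNIST digit

# Generate an adversarial example within an L-infinity ball of radius eps.
x_adv = fast_gradient_method(model, x, eps=0.3, norm=np.inf,
                             clip_min=0.0, clip_max=1.0)

print("max perturbation:", (x_adv - x).abs().max().item())    # bounded by eps
```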

JPEG defense (30 minutes)

  • Lecture: The JPEG compression defense and why it works (i.e., by preventing the model from learning from imperceptible portions of the image)
  • Hands-on exercises: Construct the JPEG defense for adversarial example images (see the sketch after this list)
  • Group discussion
  • Q&A
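
A minimal sketch of the preprocessing side of that defense, assuming Pillow is available; the "adversarial" image is a random stand-in and the downstream classifier call is hypothetical.

```python
# Sketch: round-trip a (possibly adversarial) image through lossy JPEG
# compression before classification; the compression discards the imperceptible
# high-frequency detail that adversarial perturbations tend to live in.
import io
import numpy as np
from PIL import Image

def jpeg_defense(image_uint8, quality=75):
    """Re-encode an HxW or HxWx3 uint8 image as JPEG and decode it again."""
    buf = io.BytesIO()
    Image.fromarray(image_uint8).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

adversarial = (np.random.rand(28, 28) * 255).astype(np.uint8)   # stand-in for an adversarial digit
cleaned = jpeg_defense(adversarial)
print(cleaned.shape, cleaned.dtype)            # (28, 28) uint8, ready for the classifier

# model.predict(cleaned[None, ..., None] / 255.0)   # hypothetical downstream model call
```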

Wrap-up and Q&A (20 minutes)