Adversarial machine learning tutorial

Overview

Machine learning has seen a remarkable rate of adoption in recent years across a broad spectrum of industries and applications. Many applications of machine learning techniques are adversarial in nature, insofar as the goal is to distinguish instances which are "bad" from those which are "good". Indeed, adversarial use goes well beyond this simple classification example: forensic analysis of malware which incorporates clustering, anomaly detection, and even vision systems in autonomous vehicles could all potentially be subject to attacks. In response to these concerns, there is an emerging literature on adversarial machine learning, which spans both the analysis of vulnerabilities in machine learning algorithms and algorithmic techniques that yield more robust learning. This tutorial surveys a broad array of these issues and techniques from both the cybersecurity and machine learning research communities. In particular, we consider the problems of adversarial classifier evasion, where the attacker changes behavior to escape detection, and poisoning, where the training data itself is corrupted. We discuss both evasion and poisoning attacks, first on classifiers and then on other learning paradigms, together with the associated defensive techniques. We then consider specialized techniques for attacking and defending neural networks, focusing in particular on deep learning methods and their vulnerability to adversarially crafted instances.

Syllabus


9:00 am - 9:15 am
Introduction to adversarial machine learning

9:15 am - 10:00 am
Understanding evasion attacks
- Adversarial Learning
- Feature manipulation for evasion attacks
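
As a concrete illustration of the evasion and feature-manipulation topics above, the following minimal sketch (not taken from the tutorial materials; the synthetic data, feature dimension, and perturbation budget are illustrative assumptions) evades a trained linear classifier by pushing a detected instance across its decision boundary with a minimum-norm feature change.

    # Minimal sketch: feature-manipulation evasion of a linear classifier.
    # The synthetic data and all constants are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Synthetic "benign" (label 0) and "malicious" (label 1) feature vectors.
    X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 5)),
                   rng.normal(+1.0, 1.0, size=(200, 5))])
    y = np.array([0] * 200 + [1] * 200)
    clf = LogisticRegression().fit(X, y)

    # Pick one instance the classifier currently flags as malicious.
    x = X[np.where(clf.predict(X) == 1)[0][0]].copy()

    # For a linear model, the minimum-L2 evasion moves against the weight
    # vector just far enough to cross the boundary w.x + b = 0.
    w, b = clf.coef_[0], clf.intercept_[0]
    margin = w @ x + b                      # > 0 means "malicious"
    x_adv = x - (margin / (w @ w) + 1e-3) * w

    print("original prediction:", clf.predict([x])[0])       # detected
    print("evaded prediction  :", clf.predict([x_adv])[0])   # missed
    print("L2 perturbation    :", np.linalg.norm(x_adv - x))

Against non-linear models the same idea becomes a constrained optimization over allowable feature changes, which is where the attack models discussed in this session come in.
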
Defending against evasion attacks
- Robustness and regularization
- Convex learning with invariances
- Stackelberg Game based Analysis
- Randomized Classification
- Validating Evasion Attack Models
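
The robustness-and-regularization connection listed above can be made concrete for linear models: against an L-infinity-bounded attacker, the worst-case margin loss effectively adds an L1-style penalty on the weights. The sketch below trains a logistic model directly on that worst-case loss; the data, attacker budget eps, learning rate, and iteration count are illustrative assumptions, not the tutorial's reference implementation.

    # Sketch: robust training of a linear classifier against L-inf evasion.
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 5)),
                   rng.normal(+1.0, 1.0, size=(200, 5))])
    y = np.array([-1] * 200 + [+1] * 200)   # +/-1 labels for the margin form
    eps = 0.3                               # assumed attacker budget per feature

    w, b, lr = np.zeros(X.shape[1]), 0.0, 0.1
    for _ in range(500):
        # Worst-case margin under an L-inf attack of size eps: the attacker
        # can reduce y*(w.x + b) by at most eps * ||w||_1.
        margins = y * (X @ w + b) - eps * np.abs(w).sum()
        # Logistic loss on the worst-case margins and its (sub)gradient.
        p = 1.0 / (1.0 + np.exp(margins))   # = sigmoid(-margin)
        grad_w = -(X * (y * p)[:, None]).mean(axis=0) + eps * np.sign(w) * p.mean()
        grad_b = -(y * p).mean()
        w -= lr * grad_w
        b -= lr * grad_b

    print("robustly trained weights:", np.round(w, 3))
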

10:00 am - 10:15 am
Coffee break

10:15 am - 11:15 am
Attacks on deep neural networks
- Adversarial Examples
- Optimization Method for Generating Adversarial Examples
- Delving into Transferable Adversarial Examples and Black-box attacks
- Generating Adversarial Examples with Adversarial Networks
- Spatial Transformation based Adversarial Examples
- Exploring the Space of Black-box Attacks on Deep Neural Networks
- Physical Adversarial Examples
- Backdoor Poisoning Attack
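
The fast gradient sign method (FGSM) is the simplest instance of the optimization-based attacks listed above. Below is a minimal PyTorch sketch; the toy untrained network, random input, and epsilon are illustrative assumptions, so the predicted label is not guaranteed to flip the way it typically does against a trained image classifier.

    # Minimal FGSM sketch: one signed gradient step that increases the loss.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                          nn.Linear(64, 10))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28)   # stand-in for a real image in [0, 1]
    y = torch.tensor([3])          # stand-in label
    eps = 0.1                      # L-inf perturbation budget

    x.requires_grad_(True)
    loss_fn(model(x), y).backward()

    # Step in the sign of the input gradient, then clip to the valid range.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    print("clean prediction      :", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
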
Defenses on deep neural networks
- Pre-process input: Exploring the Space of Adversarial Images
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
- Iterative Adversarial Retraining
- Characterizing adversarial subspaces using local intrinsic dimensionality
- Defenses still have a long way to go: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
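
Iterative adversarial retraining, listed above, can be sketched as follows: generate adversarial examples for each batch with the current model and train on clean and adversarial inputs together. The toy model, random placeholder batches, and epsilon are illustrative assumptions rather than the tutorial's reference setup.

    # Sketch: adversarial training with FGSM-generated examples per batch.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                          nn.Linear(64, 10))
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    eps = 0.1

    def fgsm(x, y):
        # One-step L-inf attack used to generate training-time examples.
        x = x.clone().detach().requires_grad_(True)
        loss_fn(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    for step in range(100):                 # stand-in for a real data loader
        x = torch.rand(32, 1, 28, 28)       # placeholder batch
        y = torch.randint(0, 10, (32,))
        x_adv = fgsm(x, y)
        # Train on clean and adversarial inputs together.
        loss = loss_fn(model(torch.cat([x, x_adv])), torch.cat([y, y]))
        opt.zero_grad()
        loss.backward()
        opt.step()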

11:15 am - 11:30 am
Coffee break

11:30 am - 12:30 pm
Understanding poisoning attacks
- Optimization-based poisoning attack methods against collaborative filtering, SVMs, and general supervised learning tasks
- Data Poisoning Attacks on Multi-Task Relationship Learning
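
To make the poisoning setting concrete, the sketch below injects mislabeled copies of "malicious" training points into a linear SVM's training set. This simple label-flipping poison is a weaker stand-in for the optimization-based attacks listed above, and the synthetic data and poison fraction are illustrative assumptions.

    # Sketch: label-flip poisoning of a linear SVM's training set.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(2)

    def make_data(n_per_class):
        X = np.vstack([rng.normal(-1.0, 1.0, size=(n_per_class, 5)),
                       rng.normal(+1.0, 1.0, size=(n_per_class, 5))])
        y = np.array([0] * n_per_class + [1] * n_per_class)
        return X, y

    X_train, y_train = make_data(200)
    X_test, y_test = make_data(200)
    clean = LinearSVC(max_iter=5000).fit(X_train, y_train)

    # Inject copies of "malicious" training points relabeled as "benign".
    mal = np.where(y_train == 1)[0]
    idx = rng.choice(mal, size=int(0.3 * len(mal)), replace=False)
    X_poisoned = np.vstack([X_train, X_train[idx]])
    y_poisoned = np.concatenate([y_train, np.zeros(len(idx), dtype=int)])
    poisoned = LinearSVC(max_iter=5000).fit(X_poisoned, y_poisoned)

    print("clean-model test accuracy   :", clean.score(X_test, y_test))
    print("poisoned-model test accuracy:", poisoned.score(X_test, y_test))
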
Defenses against poisoning attacks
- Robust Logistic Regression
- Robust Sparse and PCA Regression
- Defense Analysis against Poisoning Attacks
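
One simple flavor of the defenses analyzed above is loss-based data sanitization: fit a model, drop the training points it finds hardest to explain, and refit. The sketch below applies this idea to logistic regression; the trimming fraction and round count are illustrative assumptions, and the papers in this session study more principled robust estimators.

    # Sketch: sanitize a possibly poisoned training set by iteratively
    # trimming the highest-loss points and refitting.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def trimmed_fit(X, y, trim_frac=0.2, rounds=3):
        keep = np.arange(len(y))
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            clf.fit(X[keep], y[keep])
            # Per-point negative log-likelihood under the current model.
            p = clf.predict_proba(X[keep])[np.arange(len(keep)), y[keep]]
            losses = -np.log(np.clip(p, 1e-12, None))
            # Retain the (1 - trim_frac) fraction with the smallest loss.
            keep = keep[np.argsort(losses)[: int((1 - trim_frac) * len(keep))]]
        return clf

    # Usage (e.g., on the poisoned data from the previous sketch):
    #   robust = trimmed_fit(X_poisoned, y_poisoned)
    #   print("sanitized test accuracy:", robust.score(X_test, y_test))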

For any questions, please contact the tutorial organizers at: lxbosky@gmail.com