Machine Duping 101: Pwning Deep Learning Systems

https://github.com/cchio/deep-pwning

Researchers have found that it is surprisingly trivial to trick a machine learning model (classifier, clusterer, regressor, etc.) into making objectively wrong decisions. This field of research is called Adversarial Machine Learning. It is not hyperbole to claim that any motivated attacker can bypass any machine learning system, given enough information and time. However, this issue is often overlooked when architects and engineers design and build machine learning systems. The consequences are worrying when these systems are deployed in critical scenarios, such as in the medical, transportation, financial, or security-related fields.
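
To give a feel for how little it can take, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) from the adversarial examples literature (Goodfellow et al.), applied to a toy logistic-regression model in plain NumPy. This is not code from the deep-pwning tool; the weights, input, and epsilon below are illustrative assumptions only.

```python
import numpy as np

# Toy binary classifier: logistic regression with fixed, assumed weights.
w = np.array([1.0, -1.0, 2.0])
b = 0.0

def predict_proba(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# A legitimate input that the model correctly assigns to class 1.
x = np.array([0.4, -0.3, 0.2])
y = 1  # true label

# FGSM: step each feature by epsilon in the direction that increases the loss.
# For logistic regression the gradient of the cross-entropy loss w.r.t. x is
# (p - y) * w, so its sign tells us which way to nudge each feature.
epsilon = 0.4  # attack budget, chosen here purely for illustration
p = predict_proba(x)
x_adv = x + epsilon * np.sign((p - y) * w)

print(f"clean input       -> P(class 1) = {predict_proba(x):.3f}")      # ~0.75
print(f"adversarial input -> P(class 1) = {predict_proba(x_adv):.3f}")  # ~0.38, decision flips
```

A small, bounded nudge to each feature is enough to push the prediction across the 0.5 decision boundary, even though the input still looks almost unchanged.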

Hence, when evaluating the efficacy of applications that use machine learning, their malleability in an adversarial setting should be measured alongside metrics such as precision and recall.
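
As a rough illustration of what such a measurement could look like (again using the toy model and synthetic data below, not anything from the tool itself), one can report accuracy on FGSM-perturbed inputs next to clean accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -1.0, 2.0])  # same toy model as in the sketch above

def predict(x):
    """Hard decision: class 1 iff the logit is non-negative."""
    return int(np.dot(w, x) >= 0)

def fgsm(x, y, epsilon=0.4):
    """One-step FGSM perturbation against the toy logistic model."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return x + epsilon * np.sign((p - y) * w)

# Synthetic evaluation set whose labels come from the model itself,
# so clean accuracy is 1.0 by construction; the adversarial number is the point.
X = rng.normal(size=(200, 3))
y = (X @ w >= 0).astype(int)

clean_acc = np.mean([predict(x) == t for x, t in zip(X, y)])
adv_acc = np.mean([predict(fgsm(x, t)) == t for x, t in zip(X, y)])
print(f"clean accuracy:       {clean_acc:.2f}")  # 1.00
print(f"adversarial accuracy: {adv_acc:.2f}")    # roughly 0.5 with this budget
```

Reporting the adversarial number alongside the clean one makes the gap between benchmark performance and worst-case behavior explicit.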

This tool was released at DEF CON 24 in Las Vegas, August 2016, during a talk titled "Machine Duping 101: Pwning Deep Learning Systems."

https://www.defcon.org/html/defcon-24/dc-24-speakers.html#Chio

===

Update (2016.08.09): meetup with Ian Goodfellow on Adversarial Examples and Adversarial Training: https://www.meetup.com/superintelligencemeetup/events/233220099/
