SafetyNet: Detecting and Rejecting Adversarial Examples Robustly

Authors: Jiajun Lu, Theerasit Issaranon, David Forsyth

ICCV 2017


Abstract

We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasoned analysis of why our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. We use this SafetyNet architecture in an important and novel application, SceneProof, which can reliably detect whether or not an image is a picture of a real scene. SceneProof applies to images captured with depth maps (RGBD images) and checks whether an image and its depth map are consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attack approaches.
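To make the detect-and-reject idea in the abstract concrete, here is a minimal sketch of a rejection pipeline: a separate detector inspects a classifier's internal activation pattern and rejects inputs it flags as adversarial. The binarized-activation features, the random stand-in weights W, the toy data, and the RBF-SVM detector are all illustrative assumptions, not the paper's exact recipe.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder for a trained late layer's weights (assumption for illustration).
W = rng.standard_normal((32, 64))

def activation_code(x):
    """Binarize a stand-in late-layer ReLU activation pattern."""
    return (np.maximum(x @ W, 0.0) > 0).astype(float)

# Toy data: "natural" inputs vs. perturbed copies standing in for adversarial ones.
natural = rng.standard_normal((200, 32))
adversarial = natural + 0.5 * rng.standard_normal((200, 32))

X = np.vstack([activation_code(natural), activation_code(adversarial)])
y = np.hstack([np.zeros(200), np.ones(200)])

# Detector trained on activation codes (RBF-SVM chosen here as an assumption).
detector = SVC(kernel="rbf", gamma="scale").fit(X, y)

def classify_or_reject(x, classify):
    """Reject the input when the detector flags it; otherwise classify it."""
    if detector.predict(activation_code(x[None]))[0] == 1:
        return None  # rejected as a suspected adversarial example
    return classify(x)

# Example usage with a trivial stand-in classifier:
label = classify_or_reject(natural[0], classify=lambda x: int(x.sum() > 0))

The key design point this sketch illustrates is that the detector operates on the network's internal activations rather than on raw pixels, so an attacker must fool the classifier and the detector simultaneously.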


Downloads

Paper: Download (3.84 MB)

Code & Data: Coming Soon.


Feedback

Please email us if you have any questions.