Banana Or Toaster? — Deep Learning Edition

TL;DR: This one simple sticker will turn your banana into a toaster (to a #DeepLearning system)
(OK, that’s a bit simplistic, but it’s not that far off.)
Thing is, we’ve known for a while now that image classification can be hacked (cue scare stories about stop signs and school buses, etc.). In fact, there is some evidence suggesting that this “hackability” is intrinsic to the process, a kind of uncertainty principle if you will.
Most of the work, thus far, has focused on small perturbations, the kind of stuff that would be invisible to the human eye, but would fool image classifiers. Stuff like manipulating pixel shades, quasi-steganographic changes, etc. The large perturbations that people have looked at have also been (quasi)invisible-to-the-human-eye changes, things like glasses that break the classifier (•)
There is new work out by Brown et al. that does things very differently — they create stickers that are pretty huge (10% of the size of the actual object, or larger), which pretty much cause the classifier to barf out the wrong answer.
How?
Well, it looks like, to the classifier, the sticker is more important than the object being classified! To put this differently, your Deep Learning system looks at the image, and goes, “yeah, there might be a banana there HOLY F**K THAT IS A TOASTER” because the sticker screams toaster-ness way more than the banana screams banana-ness.
The part that makes this truly fascinating is that this works under a wide variety of lighting conditions, angles, scales, etc. — because the sticker is explicitly trained to survive exactly those transformations. Well, that’s how machine learning rolls 😇
Fun times ahead!!!
(•) I said “quasi”, because we’re not surprised that people wear glasses 😀
