Image (mis)-Classification, Deep Learning, and an Uncertainty Principle?

You’ve seen the articles about the horror of image mis-classification, right? The ones where stop signs aren’t recognized because of a few stickers, school buses become panda bears, and faces aren’t recognizable because of glasses (°)?
The reason for this has been hard to pin down, but Gilmer et al. (https://arxiv.org/abs/1801.02774) may have found something in play, a kind of Uncertainty Principle if you will. They found a tradeoff between a model's test error and the distance from correctly classified inputs to the nearest mis-classified one (the "nearest error"), measured in the high-dimensional space where the classification is actually happening.
To put this differently, it has been assumed that the mis-classification occurs because the model is incorrectly picking up on data from outside the space it was trained on ("outside the data manifold"). What this result seemingly shows is that the errors can sit on the manifold itself: the model genuinely thinks the school bus is a panda bear, so to speak.
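To make that quantity a little more concrete, here is a minimal sketch (not from the paper) of how one might measure it: a toy concentric-spheres dataset, a small scikit-learn MLP, and the average distance from correctly classified test points to the nearest mis-classified one. The dimensions, radii, sample sizes, and model here are all hypothetical stand-ins, chosen only to illustrate the idea.

```python
# Minimal sketch: measure test error alongside the average distance from
# correctly classified test points to the nearest mis-classified one.
# All dataset/model choices below are assumptions, not the paper's setup.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
d, n = 50, 1000  # ambient dimension and points per class (both assumed)

def sphere_points(radius, n, d):
    """Uniform points on a d-dimensional sphere of the given radius."""
    x = rng.normal(size=(n, d))
    return radius * x / np.linalg.norm(x, axis=1, keepdims=True)

# Two concentric spheres: class 0 (inner) and class 1 (outer).
X = np.vstack([sphere_points(1.0, n, d), sphere_points(1.3, n, d)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Shuffle, split in half, and fit a small (deliberately imperfect) classifier.
idx = rng.permutation(len(X))
train, test = idx[: len(X) // 2], idx[len(X) // 2 :]
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
clf.fit(X[train], y[train])

pred = clf.predict(X[test])
correct = X[test][pred == y[test]]  # correctly classified test points
errors = X[test][pred != y[test]]   # mis-classified test points
test_error = np.mean(pred != y[test])

if len(errors) and len(correct):
    # Distance from each correctly classified point to its nearest error.
    nearest = cdist(correct, errors).min(axis=1)
    print(f"test error: {test_error:.3f}")
    print(f"mean distance to nearest error: {nearest.mean():.3f}")
else:
    print("no errors (or no correct points) in the test set; nothing to measure")
```

The two printed numbers are the quantities in the tradeoff: on their spheres dataset, the paper's result says that driving the test error down is what buys you a larger distance to the nearest error, and any model that still makes some errors will have them close by.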
If true, this may point out that, for insufficiently large datasets (and so for any model with non-zero test error), there will always be mis-classifications lurking close to correctly classified inputs.
Interesting times…
(°) Superman/Clark Kent. I made this up. Whatever.
