The *Real* Threat of “Bad AI”
via http://www.commitstrip.com/en/2018/09/24/trolling-the-ai/
This particular episode of CommitStrip pretty much represents the popular take on AI. Mind you, the “popular take” is, as usual, not the same as the “realistic” one. After all, when you’re designing robust systems, one of the very first things you do is correct for signs that are missing, misplaced, or misidentified, and if/when self-driving cars actually make it into the real world, they will have to be robust against errors at multiple levels.
Heck, that’s pretty much the way humans work — our vision is pretty seriously fallible, but we correlate a lot of other information to make sure that we don’t f**k up!
Think about how you process visual feedback when driving. You are, unconsciously, running rules like these in your head all the time:
• A sign saying “60 mph” in a residential neighborhood is clearly wrong
• If you are wondering if it is a Street Sign or a Banana, go with Street Sign
• It doesn’t matter what it is; if it is in the middle of the freeway, slow down, and then stop
And yes, that’s exactly what any kind of robust AI will be doing too.
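To make that concrete, here is a minimal sketch of what that kind of sanity-checking could look like, layered on top of a sign classifier. Everything in it (the Detection/Context types, the labels, the thresholds) is made up for illustration; it is not any real perception stack:

```python
# A toy plausibility filter layered on top of a hypothetical sign classifier.
# The point is the pattern: never let one noisy detection drive the car alone;
# cross-check it against context you already trust (maps, lane geometry, priors).

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "speed_limit_60", "banana", "stop_sign"
    confidence: float  # classifier confidence, 0.0 - 1.0

@dataclass
class Context:
    zone: str              # "residential", "freeway", ...
    map_speed_limit: int   # what the map / nav data says, in mph
    object_in_lane: bool   # is the detected object physically in our lane?

def sanity_check(det: Detection, ctx: Context) -> str:
    """Return a driving decision that never trusts a single noisy signal."""
    # Rule 3: whatever it is, if it's in the middle of the road, slow down and stop.
    if ctx.object_in_lane:
        return "slow_down_and_stop"

    # Rule 2: implausible labels (a banana-shaped blob on a signpost) give way
    # to the prior: roadside objects at sign height are almost always signs.
    if det.label == "banana" and det.confidence < 0.99:
        det = Detection(label="street_sign_unknown", confidence=det.confidence)

    # Rule 1: a "60 mph" reading in a residential zone contradicts the map,
    # so fall back to the mapped limit instead of a single camera frame.
    if det.label == "speed_limit_60" and ctx.zone == "residential":
        return f"keep_limit_{ctx.map_speed_limit}"

    return "proceed_normally"

print(sanity_check(Detection("speed_limit_60", 0.92),
                   Context(zone="residential", map_speed_limit=25, object_in_lane=False)))
# -> keep_limit_25
```

The specific rules don’t matter; what matters is that no single mis-read sign gets to make the decision by itself.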
The reality, however, is that there is a deeper and far more insidious problem at play: biases in the models and training data used to actually create the AIs.
You’ve no doubt seen the recent articles about Amazon’s ill-fated attempt at building an AI-powered recruiting tool, right? They trained it on their own recruiting and hiring history, and (surprise!) they ended up with a system that skewed overwhelmingly towards hiring men. Which wasn’t particularly surprising, because that’s kinda the way Tech culture used to be.¹
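If you want to see the “bad data in, bad data out” mechanism in miniature, here is a deliberately synthetic sketch. The features, coefficients, and numbers are all invented and have nothing to do with Amazon’s actual system; it just shows how a model trained to reproduce skewed historical decisions inherits the skew:

```python
# Toy illustration: train a model on historically biased hiring decisions,
# then score two equally qualified candidates. Entirely synthetic data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A genuine "qualification" score, identically distributed for everyone.
qualification = rng.normal(size=n)
# A proxy feature correlated with gender (say, "women's" appearing on the CV).
is_woman_proxy = rng.integers(0, 2, size=n)

# Historical labels: hiring depended on qualification *and* (unfairly) on the proxy.
logits = 1.5 * qualification - 2.0 * is_woman_proxy
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([qualification, is_woman_proxy])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical qualification scores, differing only in the proxy:
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The second candidate gets a noticeably lower score -- the model has
# faithfully learned the historical bias it was asked to reproduce.
```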
Or, far worse, consider facial recognition: a recent study tested commercial products from IBM, Microsoft, and Face++, and found huge disparities when analyzing darker-skinned faces. Dark-skinned women fared the worst, with error rates of almost 35%, compared to less than 1% for light-skinned men!
Worst of all, it turns out that when it comes to risk-scoring, it is mathematically impossible to satisfy every common definition of “fair” across groups at once when those groups have different base rates. The point being that the context around how the problem space is defined will, irretrievably, affect the outcome of your system. Context really is key to how these systems get designed.
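If that impossibility claim sounds abstract, a little made-up arithmetic shows it. Hold precision and recall equal for two groups whose base rates differ, and the false-positive rates are forced apart; the numbers below are invented purely for the illustration:

```python
# Back-of-the-envelope arithmetic behind the "you can't have every kind of
# fairness at once" result for risk scores. All numbers are made up.

def false_positive_rate(n, prevalence, ppv, recall):
    """Given group size, base rate, precision (PPV) and recall, derive the FPR."""
    positives = n * prevalence
    negatives = n - positives
    true_positives = recall * positives
    predicted_positives = true_positives / ppv
    false_positives = predicted_positives - true_positives
    return false_positives / negatives

# Same precision (0.8) and same recall (0.7) for both groups -- "equally
# accurate" -- but different underlying base rates:
print(false_positive_rate(n=1000, prevalence=0.5, ppv=0.8, recall=0.7))  # ~0.175
print(false_positive_rate(n=1000, prevalence=0.2, ppv=0.8, recall=0.7))  # ~0.044

# The higher-base-rate group gets wrongly flagged roughly 4x as often, even
# though the score is identically "accurate" for both groups.
```

Equalising the false-positive rates instead would force the precision or recall apart, which is the whole point: you end up choosing which notion of “fair” to give up, and that choice is baked into how the problem is framed.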
So yeah, I’m not particularly concerned about cars driving into traffic signs. What I do worry about is self-driving taxis that won’t pick up dark-skinned people in the US (because…crime), or women in Saudi Arabia (because…culture), etc. Because, if there is one thing that we humans are good at, it’s weaponizing the shortcomings of technology. The really bad security on IoT devices (`admin/admin`, anyone?) has powered any number of botnets; who knows what kind of badness is being caused by electronic voting machines; oh, the list goes on.
So yeah, before we freak out about self-driving cars driving into road signs, we might want to think about the choices and contexts around the way the AIs in these cars have been designed…
1. Re: Amazon’s recruiting/AI debacle, the issue was a bit more nuanced, but the gist pretty much is “bad data in, bad data out”.