Will Mistakes Ruin the AI Revolution?

Intelligent Systems make mistakes. There is no way around it. Some of the mistakes will be inconvenient; some will be quite bad. Left unmitigated, mistakes can make an Intelligent System seem stupid, and they can even render it useless or dangerous.
 
Here are some example situations that might result from mistakes in an Intelligent System:
  • You are talking to your wife, but your personal assistant thinks you said ‘Tell Bob…’ and sends everything you said to her on to Bob.
  • Your self-driving car starts following a lane that doesn’t exist and you end up in an accident.
  • Your social network thinks your posts are offensive…but they aren’t.
These types of mistakes, and many others, are just part of the cost of using machine learning and artificial intelligence to build systems.
 
And these mistakes are not the fault of the people doing the machine learning. I mean, I guess the mistakes could be their fault (it’s always possible for people to be bad at their jobs), but even people who are excellent, world class, at applied machine learning will produce intelligence that makes mistakes.
 
Mistakes in intelligent systems can occur when:
  • A part of your Intelligent System has an outage.
  • Your model is created, deployed, or interpreted incorrectly.
  • Your intelligence isn’t a perfect match for the problem (and it isn’t).
  • The problem evolves, so yesterday’s answer is wrong for today.
  • Your user base changes, and new users act in ways you did not expect.

Why mistakes in Intelligent Systems are so damaging

Intelligent experiences succeed by meshing with their users in positive ways: making them happier and more efficient, and helping them act in more productive ways (or in ways that better align with positive business outcomes).
 
But dealing with Intelligent Systems can be stressful for some users, because these systems challenge users’ expectations.
 
One way to think about it is this: humans deal with tools, like saws, books, and cars. These objects behave in predictable ways. We’ve evolved over a long time to understand them, to count on them, to know what to expect from them. Sometimes they break, but that’s rare. Mostly they are what they are; we learn to use them, and then stop thinking so much about them.
 
Tools become, in some ways, parts of ourselves, allowing us powers we wouldn’t have without them.
They can make us feel good, safe, comfortable.
 
Intelligent Systems aren’t like this, exactly.
 
Intelligent Systems make mistakes. They change their ‘minds’. They take very subtle factors into consideration in deciding to act. Sometimes they won’t do the same thing twice in a row, even though a user can’t tell that anything has changed. Sometimes they even have their own motivations that aren’t quite aligned with their user’s motivations.
 
Interacting with intelligent systems can seem more like a human relationship than like using a tool.
 
Here are some ways this can affect users:
 
Confusion — When the intelligent system acts in strange ways or makes mistakes, users will be confused. They might want to (or have to) invest thought and energy in understanding what is going on.
 
Distrust — When the intelligent system influences user actions, will the user like it or not? For example, a system might magically make the user’s life better, or it might nag them to do things, particularly things the user feels put others’ interests above their own (e.g. by showing them ads).
 
Lack of Confidence — Does the user trust the system enough to let it do its thing, or does the user come to believe the system is ineffective: always trying to be helpful, but always doing it wrong?
 
Fatigue — When the system demands user attention, is it using that attention well, or is it asking too much of the user? Users are good at ignoring things they don’t like.
 
Creep-o-ville — Will the interactions make the user feel uncomfortable? Maybe the system knows them too well. Maybe it makes them do things they don’t want to do, or post information they feel is private to public forums. If a smart TV sees a couple getting familiar on the couch it could lower the lights and play some romantic music — but should it?
 
If these emotions begin to dominate how users think about systems built with AI, we have a problem.
 

Getting Ready for Mistakes in Your Own Intelligent System

So is it time to give up?
 
No way!
 
You can take control of the mistakes in your intelligent systems, embrace them, and design systems that protect users from them.
 
But in order to solve a problem, you have to understand it, so ask yourself: what is the worst thing my Intelligent System could do?
 
Maybe your Intelligent System will make minor mistakes, like flashing a light the user doesn’t care about or playing a song they don’t love.
 
Maybe it could waste time and effort by automating something the user has to undo, or by pulling the user’s attention away from the thing they actually care about and onto the thing the intelligence got wrong.
 
Maybe it could cost your business money by deciding to spend a lot of CPU or bandwidth, or by accidentally hiding your best (and most profitable) content.
 
Maybe it could put you at legal risk by taking an action that is against the law somewhere, or by shutting down a customer’s or a competitor’s ability to do business, causing them damages you might end up being liable for.
 
Maybe it could do irreparable harm by deleting things that are important, melting a furnace, or sending an offensive communication from one user to another.
 
Maybe it could hurt someone — even get someone killed.
 
Most of the time when you think about your system you are going to think about how amazing it will be, all the good it will cause, all the people who will love it. You’ll want to dismiss its problems; you’ll even try to ignore them.
 
Don’t.
 
Find the worst thing your system can do.
 
Then find the second worst.
 
Then the third worst.
 
Then get five other people to do the same thing. Embrace their ideas, accept them.
 
And then when you have fifteen really bad things your Intelligent System might do, ask yourself: is that okay?
 
Because these types of mistakes are going to happen, and they will be hard to find, and they will be hard to correct.
 

Making Your Mistakes Less Costly

Random, low-cost mistakes are to be expected. But when mistakes spike, when they become systematic, or when they become risky or expensive, you should consider mitigation. Common approaches include:
  • Find mistakes fast — by building lots of great feedback systems into your product, including ways for users to report problems and telemetry systems to capture examples of problems occurring. This type of investment will help you solve problems before they cause serious trouble, and it will also give you data to make the system better.
 
  • Build better intelligence management — tooling that allows you to deploy new intelligence cheaply and reliably, expose it to users in a controlled fashion, and roll it back if something goes wrong. The faster you can react to a problem, the more you can control its cost.
 
  • Rebalance the experience — so that mistakes are less costly to the user, easier for the user to notice, and easier for the user to correct. For example, asking the user whether they want to send a message to their friend instead of sending it automatically; moving a suspicious email to a junk folder instead of deleting it; or simply reducing the frequency of interaction between the user and the intelligent system (see the first sketch below).
 
  • Solve a different problem — if the mistakes your system can make are too bad to contemplate, you might consider doing something else. This could be a simpler version of what you are trying to do (e.g. lane following as opposed to full driving automation), and working on the simpler problem can give you time to build toward solving the problem you really want to solve.
 
  • Implement guardrails — such as simple heuristic rules that prevent the system from making obvious mistakes, or from making the same mistake over and over and over. Sure, your machine learning should be able to learn these things, but sometimes you need to take control for a while and help keep users safe and happy. Used sparingly, guardrails can be an effective addition to any intelligent system (see the second sketch below).
 
  • Invest more in intelligence — by building better models. You can do this by investing in machine learning and in the data that fuels it (including collecting more telemetry from the live service), by allowing more CPU at training time or at run time, or by automating parts of the intelligence-creation process.

An active mistake mitigation plan can allow the rest of your Intelligent System to be more aggressive — and achieve more impact. Embracing mistakes, and being wise and efficient at mitigating them, is an important part of creating systems that work in practice.
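To make the ‘rebalance the experience’ idea concrete, here is a minimal sketch (in Python) of mapping model confidence to the forcefulness of the interaction: act automatically only when the model is very sure, ask the user when it is less sure, and stay quiet otherwise. The threshold values and names are illustrative assumptions, not from any particular system; in practice you would tune them to how costly each kind of mistake is for your users.

```python
# A minimal sketch of rebalancing the experience: the less certain the model
# is, the less forceful the action it is allowed to take. The thresholds and
# names here are hypothetical; tune them to the real cost of your mistakes.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str  # "automate", "prompt", or "stay_quiet"
    reason: str


# Hypothetical thresholds: automate only when very sure, prompt when less
# sure, and do nothing when a mistake is more likely than not to annoy.
AUTOMATE_THRESHOLD = 0.95
PROMPT_THRESHOLD = 0.60


def choose_experience(confidence: float) -> Decision:
    """Map model confidence to the forcefulness of the user experience."""
    if confidence >= AUTOMATE_THRESHOLD:
        return Decision("automate", "high confidence: act on the user's behalf")
    if confidence >= PROMPT_THRESHOLD:
        return Decision("prompt", "medium confidence: ask the user first")
    return Decision("stay_quiet", "low confidence: silence is cheaper than a bad action")


if __name__ == "__main__":
    for confidence in (0.99, 0.75, 0.30):
        decision = choose_experience(confidence)
        print(f"{confidence:.2f} -> {decision.action} ({decision.reason})")
```

The junk-folder example from the list is the same pattern: a suspicious-but-uncertain email gets the softer ‘move to junk’ treatment rather than the forceful ‘delete’.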
 
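And here is a minimal sketch of guardrails: a thin heuristic layer that can veto the model’s chosen action. The specific rules (never automate destructive actions; stop repeating an action the user keeps rejecting) are illustrative assumptions, not a prescription; real guardrails should come out of the worst-case analysis above.

```python
# A minimal sketch of guardrails: simple heuristic rules that sit between the
# model and the user and veto obviously bad or repeatedly rejected actions.
# The rule set here is hypothetical.

from collections import Counter

# Rule 1: actions too destructive to ever automate, no matter how confident
# the model is.
NEVER_AUTOMATE = {"delete_forever", "send_to_all_contacts"}


class Guardrails:
    def __init__(self, max_rejections: int = 3):
        self.rejections = Counter()  # how often the user has undone each action
        self.max_rejections = max_rejections

    def record_rejection(self, action: str) -> None:
        """Call this when the user undoes or dismisses an automated action."""
        self.rejections[action] += 1

    def allow(self, action: str) -> bool:
        if action in NEVER_AUTOMATE:
            return False  # Rule 1: destructive actions always need a human
        if self.rejections[action] >= self.max_rejections:
            return False  # Rule 2: stop making the same mistake over and over
        return True


if __name__ == "__main__":
    rails = Guardrails()
    print(rails.allow("move_to_junk"))        # True: harmless and recoverable
    print(rails.allow("delete_forever"))      # False: vetoed by rule 1
    for _ in range(3):
        rails.record_rejection("auto_reply")  # user keeps undoing auto-replies
    print(rails.allow("auto_reply"))          # False: vetoed by rule 2
```

Guardrails like these are deliberately simple; their job is to cap the cost of the model’s worst mistakes while the intelligence improves.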
You can learn much more in the book Building Intelligent Systems. You can even get the audiobook version for free by creating a trial account at Audible.
 
Also, check out my friend’s small business, which is currently being seriously affected by mistakes in a big company’s AI systems: https://togethermade.com/