
You’re Machine Learning the Wrong Thing!

Well, you might be machine learning the wrong thing…

Because it’s easy to get complacent. You find something familiar, set it as a goal, work hard to achieve it, and get distracted from true success.

This is particularly true for machine learning people, because we have so many incredible tools for measuring the quality of the models we produce. It’s great. A classification task? Precision and recall. Move those numbers in the right direction and you’re achieving success. Move them further, and you’re doing better. You have the game set up. You have the tools. You can win!

Sometimes that works. But if you get tunnel vision on optimizing your models, before long you’ll be machine-learning the wrong thing.

For example, consider a system to stop phishing attacks.

Phishing involves web sites that look like legitimate banking sites but are actually fake sites, controlled by abusers. Users are lured to these phishing sites and tricked into giving their banking passwords to criminals. Not good.

But machine learning can help!

Talk to a machine-learning person and it won’t take long to get them excited. ML people will quickly see how to build models that examine web pages and predict whether they are phishing pages or not. These models will consider things like the text, the links, the forms, and the images on the web pages. If the model thinks a page is a phish, block it. If a page is blocked, a user won’t browse to it, won’t type their banking password into it. Perfect.
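To make that concrete, here is a minimal sketch of the kind of prototype an ML person might reach for. Everything in it (the features, the synthetic data, the model choice) is an assumption for illustration, not a real phishing detector:

```python
# A minimal sketch of a phishing-page classifier (illustrative only).
# The feature names and the synthetic data are assumptions for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Pretend features extracted from each page:
# [has_password_form, num_external_links, url_looks_like_bank, page_age_days]
X = np.column_stack([
    rng.integers(0, 2, n),      # has_password_form
    rng.poisson(5, n),          # num_external_links
    rng.integers(0, 2, n),      # url_looks_like_bank
    rng.exponential(100, n),    # page_age_days
])
# Synthetic labels: in this toy world, new pages with password forms and
# bank-like URLs are more likely to be phish.
logit = 2 * X[:, 0] + 2 * X[:, 2] - 0.01 * X[:, 3] - 2
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# The numbers an ML person would naturally optimize...
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
# ...but note: nothing here measures how many users actually got phished.
```

Everything this script measures is about the model; nothing is about the users.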

So the number of phishing pages you block seems like a great thing to optimize: block more phishing sites, and the system is doing a better job.

Or is it?

What if your model is so effective at blocking sites that phishers quit? Every single phisher in the world gives up and finds something better to do with their time? Perfect! But then there wouldn’t be any more phishing sites and the number of blocks would drop to zero. The system has achieved total success, but the metric indicates total failure. Not great.

Or what if the system blocks one million phishing sites per day, every day, but the phishers just don’t care? Every time the system blocks a site, the phishers simply make another site. Your machine learning is blocking millions of things, everyone on the team is happy, and everyone feels like they are helping people—but the number of users losing their credentials to abusers is the same as before your system was built. Not great.

These are toy examples, but they illustrate two important points: things change, and your metrics aren’t right.

Things change

Your problem will change, your users will change, the business environment will change. If you don’t change your machine learning goals along with them, you’ll be machine-learning the wrong thing in no time.

Some common sources of change include:

  • Users – new users come, old users leave, users change their behavior, users learn to use the system better, users get bored.
  • Problems – your problem changes, new news stories are published, fashion trends change, natural disasters occur, elections happen.
  • Costs – the cost of running your system might change, which puts new constraints on model execution and data and telemetry collection.
  • Objectives – the business environment might change, maybe a feature that attracted users last year is ho-hum this year.
  • Abuse – if people can make a buck by abusing your system, you can bet they will…

If you aren’t thinking about how these types of change are affecting your system on a regular basis, you’re machine-learning the wrong thing.

Your Metrics Aren’t Right

The true objective of your system isn’t to have high-quality intelligence. The true objective is something else, like keeping users from losing their passwords to abusers (or maybe even making your business some money).

A system’s true objective tends to be very abstract (like making money next quarter), but the things a system can directly affect tend to be very concrete (like deciding whether to block a web site or not). Finding a clear connection between the abstract and concrete is a key source of tension in setting goals for machine learning and Intelligent Systems. And it is really hard.

One reason it is hard is that different participants will care about different types of goals (and have their own tools for measuring them). For example:

  • Some participants will care about making money and attracting and engaging customers.
  • Some participants will care about helping users get good outcomes.
  • Some participants will care that the intelligence of the system is accurate.

These are all important goals, and they are related, but the connection between them is indirect: you won’t make much money if the system is always doing the wrong thing; but making the intelligence 1% better will not translate into 1% more profit.

If you don’t understand how your metrics relate to true success, you’re machine learning the wrong thing. (OK, OK… I promise, I’ll only say it one more time…)

Machine learning the right thing…

So you’ll need to invest in keeping your goals healthy.

Start by defining success on different levels of abstraction and coming up with some story about how success at one layer contributes to the others. This doesn’t have to be a precise technical endeavor, like a mathematical equation, but it should be an honest attempt at telling a story that all participants can get behind.

Then meet with team members on a regular basis to talk about the various goals and their relationships. Look at some data to see if your stories about how your goals relate might be right – or how you can improve them. Don’t get too upset that things don’t line up perfectly, because they won’t.

For example:

  • On an hourly or daily basis: optimize model properties, like the false positive rate or the false negative rate of the model. For example: how many phishing sites are getting blocked?
  • On a weekly basis: review the user outcomes and make sure changes in model properties are affecting user outcomes as expected. For example: you blocked more phishing sites, but did fewer users end up getting phished? (A toy sketch of this check follows the list.)
  • On a monthly basis: review the leading indicators – like customer sentiment and engagement – and make sure nothing has gone off the rails. For example: How many users say they feel safer using your browser because of the phishing protection? How many are irritated by it?
  • On a quarterly basis: look at the organizational objectives and make sure your work is moving in the right direction to affect them. For example: is your market share growing, particularly for visits to banking sites?
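As a toy sketch of the weekly check (the metric names and numbers are invented for illustration):

```python
# A toy weekly check that a model-level win shows up as a user-level win.
# Metric names and numbers are invented for illustration.

last_week = {"sites_blocked": 900_000, "users_phished": 1_200}
this_week = {"sites_blocked": 1_100_000, "users_phished": 1_150}

blocked_more = this_week["sites_blocked"] > last_week["sites_blocked"]
phished_fewer = this_week["users_phished"] < last_week["users_phished"]

if blocked_more and not phished_fewer:
    print("Warning: the model metric improved but user outcomes did not.")
    print("You may be machine-learning the wrong thing.")
else:
    print("Model metric and user outcome moved together this week.")
```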

Your team members will make better decisions when they have some understanding of these different measures of success, and some intuition about how they relate.

And remember: you’ll need to revisit the goals of your Intelligent System often. Because things change, and if you don’t invest the time to keep your goals healthy – you’re machine learning the wrong thing!

You can learn much more in the book: Building Intelligent Systems. You can even get the audio book version for free by creating a trial account at Audible.

Design Patterns for Machine Learning

There are many skills that go into making working Intelligent Systems. As an analogy, in software you have base skills like:
  • Programming languages
  • Algorithms and data structures
  • Networking and other specialized skills
But then you have to take these skills and combine them to make a working system. And the ability to do this combination is a skill in its own right, sometimes called Software Engineering. To be good at software engineering you need to know about architecture, software lifecycles, management and program management — all different ways to organize the parts of the system and the people building the system to achieve success.
 
Software engineering skills are critical to moving beyond small systems built by a couple of people and starting to have big impact.
 
When working with AI and machine learning you have to add a bunch of things to the base skills, including:
  • Statistics
  • Data science
  • Machine learning algorithms
  • And then maybe some specialized things like computer vision or natural language understanding
But then you also need to integrate these skills into your broader software engineering process, so that you can turn data into value at large scale.

And the ability to do this combination is a skill in its own right too. It isn’t software engineering exactly; call it Machine Learning Engineering.

And here are two very important concepts in setting up an Intelligent System for success in practice:
  • The first is Closing the Loop between users and intelligence so that they support each other.
  • The second is Balancing the key components of your system, and maintaining that balance as your problem and your users evolve over time.
Taken together these form the basis of what I call the closed loop intelligent system pattern for applying machine learning.

Closing the Loop

Closing the loop is about creating a virtuous cycle between the intelligence of a system and the usage of the system. As the intelligence gets better, users get more benefit from the system (and presumably use it more), and as more users use the system, they generate more data to make the intelligence better.
 
So, for example in a search engine, you type your query and get some answers. If you find a useful web page, you click it and are happy. Maybe you come back and use the search engine again. Maybe you tell your friends and they start using the search engine. As a user, you are getting value from the interaction. Great.
 
But the search engine is getting value from the interaction too. Because when you click your answers, the search engine gets to see which pages get clicked in response to which queries. Maybe the most popular answer to a particular query is 5th on the list. The search engine will see that users prefer the 5th answer to the answer it thought was best. The search engine can use this to adapt and improve. And the more users use the system, the more opportunities there are to improve.
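Here is a minimal sketch of what harvesting that feedback might look like. The log format and the "a clicked result beats the skipped results above it" heuristic are assumptions for illustration, not how any particular search engine does it:

```python
# A minimal sketch of closing the loop: turning click logs into
# preference labels a ranker can learn from. The log format and the
# skip-above heuristic are assumptions for this example.

click_log = [
    # (query, position_shown, clicked)
    ("cheap flights", 1, False),
    ("cheap flights", 2, False),
    ("cheap flights", 5, True),   # users prefer the 5th result
    ("cheap flights", 5, True),
]

def preference_pairs(log):
    """A clicked result was preferred over unclicked results shown above it."""
    by_query = {}
    for query, pos, clicked in log:
        by_query.setdefault(query, []).append((pos, clicked))
    pairs = []
    for query, results in by_query.items():
        for pos, clicked in results:
            if not clicked:
                continue
            for other_pos, other_clicked in results:
                if other_pos < pos and not other_clicked:
                    pairs.append((query, pos, other_pos))  # pos beat other_pos
    return pairs

# Each pair is a training label generated for free by users using the system.
print(preference_pairs(click_log))
```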
 
This is a virtuous cycle between the intelligence of the system and the usage of the system. Closing the loop between users and intelligence is key to being efficient and scalable with Intelligent Systems.
 
Doing the extra work to close the loop, letting your users help your Intelligent System grow, can be very efficient, and it can enable all sorts of systems that would be prohibitively expensive to build any other way.

Balancing Intelligent Systems

There are five things you need to keep in balance to have a successful Intelligent System.
 
The Objective. An Intelligent System must have a reason for being, one that is meaningful to users and accomplishes your goals. The objective should be one that requires an intelligent system (and that you can’t solve easier and cheaper some other way), and it must also be achievable by the Intelligent System you will be able to build and run. Your objective might be relatively easy, or it might be hard; either way, getting the objective right is critical for achieving success, and it is hard to do.

The Experience. An Intelligent System needs a user experience that takes the output of the intelligence (such as the predictions its machine learning makes) and presents it to users to achieve objectives. To do this the experience must put the intelligence in a position to shine when it is right—while minimizing the cost of mistakes it makes when it is wrong. The experience must not irritate users, and it must leave them feeling they are getting a good deal. And it must also elicit both implicit and explicit feedback from users to close the loop and help the system improve its intelligence over time.
 
The Implementation. The Intelligent System implementation includes everything it takes to execute intelligence. This involves things like deciding where the intelligence lives: in a client, a service or a backend. It involves building the pipes to move new intelligence to where it needs to be safely and cheaply. It involves controls on how and when the intelligence is exposed to users. And controlling what and how much to collect in telemetry to balance costs while improving over time.
 
The Intelligence. Most Intelligent Systems will have complex intelligences made up of many, many models and hand-crafted rules. The process of creating these can be quite complex too, involving many people working over many years. Intelligence creation must be organized so that the right types of intelligence address the right parts of the problem, and so it can be effectively created by a team of people over an extended time.
 
The Orchestration. Things change, and all the elements of an Intelligent System must be kept in balance to achieve its objectives. This orchestration includes keeping the experience in sync with the quality of the intelligence as it evolves, deciding what telemetry to gather to track down and eliminate problems, and how much money to spend building and deploying new intelligence. It also involves dealing with mistakes, controlling risk, and defusing abuse.
 
If you want to learn more, you can watch the free webinar.
 
And if you really want to learn how to create Closed Loop Intelligent Systems check out the book or the audio book, which you can get for free if you start a trial account with Audible.

Will Mistakes Ruin the AI Revolution?

Intelligent Systems make mistakes. There is no way around it. Some of the mistakes will be inconvenient; some will be quite bad. Left unmitigated, mistakes can make an Intelligent System seem stupid; they could even render it useless or dangerous.
 
Here are some example situations that might result from mistakes in an Intelligent System:
  • You are talking to your wife, but your personal assistant thinks you said ‘Tell Bob…all the stuff you said to your wife…’
  • Your self-driving car starts following a lane that doesn’t exist and you end up in an accident.
  • Your social network thinks your posts are offensive…but they aren’t.
These types of mistakes, and many others, are just part of the cost of using machine learning and artificial intelligence to build systems.
 
And these mistakes are not the fault of the people doing the machine learning. I mean, I guess the mistakes could be their fault — it’s always possible for people to be bad at their jobs — but even people who are excellent — world class — at applied machine learning will produce intelligence that makes mistakes.
 
Mistakes in intelligent systems can occur when:
  • A part of your Intelligent System has an outage.
  • Your model is created, deployed, or interpreted incorrectly.
  • Your intelligence isn’t a perfect match for the problem (and it isn’t).
  • The problem evolves, so yesterday’s answer is wrong for today.
  • Your user base changes, and new users act in ways you did not expect.

Why mistakes in Intelligent Systems are so damaging

Intelligent experiences succeed by meshing with their users in positive ways: making users happier, making them more efficient, helping them act in more productive ways (or ways that better align with positive business outcomes).
 
But dealing with Intelligent Systems can be stressful for some users, because these systems challenge expectations.
 
One way to think about it is this: Humans deal with tools, like saws, books, cars, objects. These things behave in predictable ways. We’ve evolved over a long time to understand them, to count on them, to know what to expect out of them. Sometimes they break, but that’s rare. Mostly they are what they are, we learn to use them, and then stop thinking so much about them.
 
Tools become, in some ways, parts of ourselves, allowing us powers we wouldn’t have without them.
They can make us feel good, safe, comfortable.
 
Intelligent Systems aren’t like this, exactly.
 
Intelligent Systems make mistakes. They change their ‘minds’. They take very subtle factors into consideration in deciding to act. Sometimes they won’t do the same thing twice in a row, even though a user can’t tell that anything has changed. Sometimes they even have their own motivations that aren’t quite aligned with their user’s motivations.
 
Interacting with intelligent systems can seem more like a human relationship than like using a tool.
 
Here are some ways this can affect users:
 
Confusion — When the intelligent system acts in strange ways or makes mistakes, users will be confused. They might want to (or have to) invest some thought and energy to understanding what is going on.
 
Distrust — When the intelligent system influences user actions, will the user like it or not? For example, a system might magically make the user’s life better, or it might nag them to do things, particularly things the user feels are putting others’ interests above theirs (e.g. by showing them ads).
 
Lack of Confidence — Does the user trust the system enough to let it do its thing or does the user come to believe the system is ineffective, always trying to be helpful, but always doing it wrong?
 
Fatigue — When the system demands user attention, is it using it well, or is it asking too much of the user? Users are good at ignoring things they don’t like.
 
Creep-o-ville — Will the interactions make the user feel uncomfortable? Maybe the system knows them too well. Maybe it makes them do things they don’t want to do, or post information they feel is private to public forums. If a smart TV sees a couple getting familiar on the couch it could lower the lights and play some romantic music — but should it?
 
If these emotions begin to dominate users’ thoughts when they think about systems built with AI — we have a problem.
 

Getting Ready for Mistakes in your own Intelligent System

So is it time to give up?
 
No way!
 
You can take control of the mistakes in your intelligent systems, embrace them, and design systems that protect users from them.
 
But in order to solve a problem, you have to understand it, so ask yourself: what is the worst thing my Intelligent System could do?
 
Maybe your Intelligent System will make minor mistakes, like flashing a light the user doesn’t care about or playing a song they don’t love.
 
Maybe it could waste time and effort, automating something that a user has to undo, or causing your user to take their attention off of the thing they actually care about and look at the thing the intelligence is making a mistake about.
 
Maybe it could cost your business money by deciding to spend a lot of CPU or bandwidth, or by accidentally hiding your best (and most profitable) content.
 
Maybe it could put you at legal risk by taking an action that is against the law somewhere, or by shutting down a customer or a competitor’s ability to do business, causing them damages you might end up being liable for.
 
Maybe it could do irreparable harm by deleting things that are important, melting a furnace, or sending an offensive communication from one user to another.
 
Maybe it could hurt someone — even get someone killed.
 
Most of the time when you think about your system you are going to think about how amazing it will be, all the good it will cause, all the people who will love it. You’ll want to dismiss its problems; you’ll even try to ignore them.
 
Don’t.
 
Find the worst thing your system can do.
 
Then find the second worst.
 
Then the third worst.
 
Then get five other people to do the same thing. Embrace their ideas, accept them.
 
And then when you have fifteen really bad things your Intelligent System might do, ask yourself: is that okay?
 
Because these types of mistakes are going to happen, and they will be hard to find, and they will be hard to correct.
 

Making Your Mistakes Less Costly

Random, low-cost mistakes are to be expected. But when mistakes spike, when they become systematic, or when they become risky or expensive, you should consider mitigation. Common approaches include:
  • Find mistakes fast — by building lots of great feedback systems into your product, including ways for users to report problems and telemetry systems to capture examples of problems occurring. This type of investment will help you solve problems before they cause serious trouble, but it will also help you get data to make the system better.
 
  • Build better intelligence management — so that you can deploy new intelligence cheaply and reliably, expose it to users in a controlled fashion, and roll it back if something goes wrong. The faster you can react to a problem, the more you can control the cost of the problem.
 
  • Rebalance the experience — so that mistakes are less costly to the user, are easier for the user to notice, and are easier for the user to correct. For example, prompting the user to ask if they want to send a message to their friend, instead of automatically sending it. Or moving a suspicious email to a junk folder instead of deleting it. Or simply reducing the frequency of interaction between the user and the intelligent system.
 
  • Solve a different problem — if the mistakes your system can make are too bad to contemplate, you might consider doing something else. This could be a simpler version of what you are trying to do (e.g. lane following as opposed to full driving automation), and working on the simpler problem can give you time to build toward solving the problem you really want to solve.
 
  • Implement guardrails — such as simple heuristic rules that prevent the system from making obvious mistakes, or from making the same mistake over and over and over. Sure, your machine learning should be able to learn these things. But sometimes you need to take control for a while and help keep users safe and happy. Used sparingly, guardrails can be an effective addition to any intelligent system (a minimal sketch follows this list).
 
  • Invest more in intelligence — by building better models. You can do this by investing in machine learning, in the data that fuels the machine learning (including collecting more telemetry from the live service), by allowing more CPU at training time or at run time, and even by automating parts of the intelligence-creation process.
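Here is the promised minimal sketch of guardrails around a learned model. The model stub, the threshold, and the specific rules are assumptions for illustration:

```python
# A minimal sketch of guardrails wrapped around a learned model.
# The model stub, threshold, and rules are assumptions for illustration.

ALLOWLIST = {"mybank.com", "irs.gov"}          # manually verified good domains
user_reported_mistakes = {"smallbiz.example"}  # blocks users said were wrong

def model_score(page):
    """Stand-in for the learned model's phish probability."""
    return page.get("score", 0.0)

def should_block(page):
    # Guardrail 1: never block a domain a human has verified as good.
    if page["domain"] in ALLOWLIST:
        return False
    # Guardrail 2: don't repeat a mistake a user has already reported.
    if page["domain"] in user_reported_mistakes:
        return False
    # Otherwise defer to the model, with a conservative threshold.
    return model_score(page) > 0.9

print(should_block({"domain": "mybank.com", "score": 0.99}))      # False
print(should_block({"domain": "phishy.example", "score": 0.95}))  # True
```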
An active mistake mitigation plan can allow the rest of your Intelligent System to be more aggressive — and achieve more impact. Embracing mistakes, and being wise and efficient at mitigating them, is an important part of creating systems that work in practice.
 
You can learn much more in the book: Building Intelligent Systems. You can even get the audio book version for free by creating a trial account at Audible.
 
Also, check out my friend’s small business, which is currently being seriously affected by mistakes in a big company’s AI systems: https://togethermade.com/.

Acing the Machine Learning Interview

In my decade of managing applied machine learning teams I’ve interviewed maybe a hundred people. Over that time, I’ve come to rely on two main questions. I’m going to tell you what they are.
 
First, a bit of philosophy. There are lots of things we could talk about in an interview:
  • What do you like?
  • What did you do in your last project?
  • Can you tell a good story about yourself?
  • Have you read lots of papers about machine learning?
  • Can you program?
  • Do you know statistics?
All of that is great, and of course candidates must know those things to get a job, but what I also want to know is: what can you do when you have a blank screen in front of you and an open-ended machine learning task to complete?
 
That isn’t easy to figure out in an interview, but I try. The approach I take is to talk through an end-to-end problem. For example:
 
Let’s walk through an example of intelligence creation: a blink detector. Maybe your application is authenticating users by recognizing their irises, so you need to wait until their eyes are open to identify them. Or maybe you are building a new dating app where users wink at the profiles of the users they’d like to meet. How would you build it?
 
There are so many interesting things to discuss, so many ways to approach this question, and I still learn from the conversations I have. A good answer has discussion on the following topics:
  • Understanding the Environment
  • Defining Success
  • Getting Data
  • Getting Ready to Evaluate
  • Simple Features and Heuristics
  • Machine Learning
  • Understanding the Tradeoffs
  • Assessing and Iterating

Understanding the environment

The first step in every applied intelligence-creation project is to understand what you are trying to do. Detect a blink, right? I mean, what part of “detect a blink” is confusing? Well, nothing. But there are some additional things you’ll need to know to succeed. Candidates might ask things like:
  • What kind of sensor will the eye images come from? Will the image source be standardized or will different users have different cameras?
  • What form will the input take? A single image? A short video clip? An ongoing live feed of video?
  • Where will the product be used? On desktop computers? Laptops? Indoors? Outdoors?
  • How will the system use the blink output? Should the output of the intelligence be a classification (that is, a flag that is true if the eye is closed and false if it is open)? Should the output be a probability (1.0 if the eye is closed, and 0.0 if it is open)? Or should the output be something else?
  • What type of resources can the blink detector use? How much RAM and CPU are available for the model? What are the latency requirements?
That’s a lot of questions before even getting started, and the answers are important to making good decisions about how to proceed.
 

Defining Success

To succeed, the blink detector will need to be accurate. But how accurate? This depends on what it will be used for. I want to know if a candidate can consider the experience that their model will drive and discuss how various levels of accuracy will change the way users perceive the overall system.
 
Questions include:
  • How many mistakes will a user see per day?
  • How many successful interactions will they have per unsuccessful interaction?
  • What will the mistakes cost the user?
I look for a discussion of options for how accuracy and experience will interact, how users will perceive the mistakes, and how they will be able to work around them.
 

Getting Data

Data is critical to creating intelligence. If you want to do machine learning right out of the gate, you’ll need lots of training data. I hope a candidate can discuss two distinct ways to think about getting data:
 
Getting data to bootstrap the intelligence:
  • Search the web and download images of people’s faces that are a good match for the sensor the blink detector will be using (resolution, distance to the eye, and so on). Then pay people to separate the images into ones where the eye is open and ones where it is closed.
  • Take a camera (that is a good match to the one the system will need to run on) to a few hundred people, have them look into the camera and close and open their eyes according to some script that gets you the data you need.
  • Something else?
How to get data from users as they use the system:
A well-functioning Intelligent System will produce its own training data as users use it. But this isn’t always easy to get right. In the blink-detector case some options include:
  • Tie data collection to the performance task: For example, in the iris-login system, when the user successfully logs in with the iris system, that is an example of a frame that works well for iris login. When the user is unable to log in with their iris (and has to type their password instead), that is a good example of a frame that should be weeded out by the intelligence. (A minimal sketch of this follows the list.)
  • Creating a data collection experience: For example, maybe a setup experience that has users open and close their eyes so the system can calibrate (and capture training data in the process). Or maybe there is a tutorial in the game that makes users open and close their eyes at specific times and verify their eyes are in the right state with a mouse-click (and capture training data).
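As promised, a minimal sketch of tying data collection to the performance task. The function and field names are assumptions for illustration:

```python
# A minimal sketch of implicit labeling: label camera frames by whether
# the iris login that used them succeeded. All names are assumptions.

training_log = []

def record_login_attempt(frame, login_succeeded):
    """Called by the (hypothetical) login flow after each attempt."""
    training_log.append({
        "frame": frame,
        # Implicit label: frames behind a successful iris login were good
        # inputs; frames behind a fallback-to-password probably were not.
        "label": "usable" if login_succeeded else "weed_out",
    })

record_login_attempt(frame=b"<jpeg bytes>", login_succeeded=True)
record_login_attempt(frame=b"<jpeg bytes>", login_succeeded=False)
print(len(training_log), "implicitly labeled examples collected")
```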

Getting Ready to Evaluate

A candidate should have a very good understanding of evaluating models, including:
 
1. Setting aside data for evaluation:
Make sure there is enough set aside, and that the data you set aside is reasonably independent of the data you’ll use to create the intelligence. In the blink-detector case you might like to partition by user (all the images from the same person are either used to create intelligence or to evaluate it), and you might like to create sub-population evaluation sets, for example by eyewear, ethnicity, gender, and age. (A minimal sketch of this setup follows the list.)
 
2. Creating a framework to run the evaluation:
That is, a framework to take an “intelligence” and execute it on the test data exactly as it will be executed at runtime. Exactly. The. Same.
 
3. Generating reports on intelligence quality that can be used to know:
  • How accurate the intelligence is.
  • If it is making the right types of mistakes or the wrong ones.
  • If there is any sub-population where the accuracy is significantly worse.
  • Some of the worst mistakes it is making.
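Here is the minimal sketch of that setup: partition by user, then report per sub-population. The data is random stand-in data, and GroupShuffleSplit is just one reasonable way to do the partition:

```python
# A minimal sketch of evaluation setup for the blink detector:
# partition by user, then report accuracy per sub-population.
# The data and sub-population fields are stand-ins for illustration.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n = 1000
rng = np.random.default_rng(0)
X = rng.random((n, 8))               # stand-in image features
y = rng.integers(0, 2, n)            # 1 = eye closed, 0 = eye open
user_ids = rng.integers(0, 100, n)   # which person each image came from
wears_glasses = rng.integers(0, 2, n)

# All images from one person land entirely in train or entirely in test.
split = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(split.split(X, y, groups=user_ids))

y_pred = rng.integers(0, 2, n)       # stand-in predictions from some model

def report(y_true, y_hat, idx, name):
    acc = (y_true[idx] == y_hat[idx]).mean()
    print(f"accuracy ({name}): {acc:.3f} on {len(idx)} examples")

report(y, y_pred, test_idx, "all test users")
report(y, y_pred, test_idx[wears_glasses[test_idx] == 1], "with glasses")
report(y, y_pred, test_idx[wears_glasses[test_idx] == 0], "without glasses")
```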
 

Simple Features and Heuristics

I like to have some discussion about simple heuristics that can solve the problem, because:
  1. Making some heuristics can help you make sure the problem is actually hard (if your heuristic intelligence solves the problem you can stop right away, saving time and money).
  2. It can create a baseline to compare with more advanced techniques—if your intelligence is complex, expensive, and barely improves over a simple heuristic, you might not be on the right track.
In the case of blink-detection you might try:
  • Measuring gradients in the image in horizontal and vertical directions, because the shape of the eye changes when eyes are opened and closed.
  • Measuring the color of the pixels and comparing them to common “eye” and “skin” colors, because if you see a lot of “eye” color the eye is probably open, and if you see a lot of “skin” color the eye is probably closed.
Then you might set thresholds on these measurements and make a simple combination of these detectors, like letting each of them vote “open” or “closed” and going with the majority decision.
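Here is a minimal sketch of those heuristics, with made-up thresholds (and a third brightness vote added so the majority is well-defined). It assumes the input is an eye-region crop as an HxWx3 numpy array:

```python
# A minimal sketch of the heuristic blink detector described above.
# All thresholds are made up; assumes `image` is an HxWx3 eye crop.
import numpy as np

def gradient_vote(image):
    gray = image.mean(axis=2)
    # Open eyes tend to have stronger vertical structure (lids, iris edge).
    vertical = np.abs(np.diff(gray, axis=0)).mean()
    horizontal = np.abs(np.diff(gray, axis=1)).mean()
    return "open" if vertical > horizontal else "closed"

def color_vote(image):
    # Crude "eye-ish" pixels: very dark (pupil/iris) or very bright (sclera).
    gray = image.mean(axis=2)
    eyeish = ((gray < 60) | (gray > 200)).mean()
    return "open" if eyeish > 0.15 else "closed"

def brightness_vote(image):
    # A skin-covered (closed) eye tends toward mid-range brightness.
    return "closed" if 90 < image.mean() < 180 else "open"

def detect_blink(image):
    votes = [gradient_vote(image), color_vote(image), brightness_vote(image)]
    return max(set(votes), key=votes.count)  # majority decision

eye = np.random.default_rng(0).integers(0, 256, (32, 32, 3)).astype(float)
print(detect_blink(eye))
```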
 
If a candidate has computer vision experience their heuristics will be more sophisticated. If they don’t have computer vision experience their heuristics might be as bad as mine. It doesn’t matter as long as they come up with some reasonable ideas and have a good discussion about them.
 

Machine Learning

I look for candidates who can articulate a simple “standard” approach for the type of problem we’re discussing. And I am aware that standards change. It doesn’t matter what machine learning technique the candidate suggests, as long as they can defend their decisions and exchange ideas about the pros and cons.
 
And here is where I bring in the second question. I let the candidate pick their favorite machine learning algorithm and then ask them to teach me something about it.
 
This can mean different things for different people. They might go to the board and explain the math of how to train the model. Maybe they explain the model representation and how inference works. They could discuss what types of feature engineering work well with the approach. Maybe they explain what types of problems the approach works well on — and which it works poorly on. Or maybe they explain the parameters the training algorithm has, what the parameters do, and how they know which to change based on the results of a training run.
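For example, here is the kind of thing a candidate might whiteboard if their pick were logistic regression (one illustrative choice, not the “right” answer): training by batch gradient descent on the log loss.

```python
# One example of what a candidate might whiteboard: logistic regression
# trained by batch gradient descent (illustrative, not prescriptive).
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, steps=1000):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)            # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)   # gradient of log loss w.r.t. w
        grad_b = (p - y).mean()
        w -= lr * grad_w                  # step against the gradient
        b -= lr * grad_b
    return w, b

# Tiny sanity check on linearly separable toy data.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w, b = train_logistic_regression(X, y)
print(sigmoid(X @ w + b).round(2))  # low for first two, high for last two
```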
 
What’s important is that they understand the tool and make me believe they can use it effectively in practice.
 

Understanding the Tradeoffs

I want a candidate to be able to discuss some of the realities of shipping a model to customers. This is a process of exploring constraints and trade-offs. Discussing questions like these:
  • How does the intelligence quality scale with computation in the run-time?
  • How many times per week should we plan to update the intelligence?
  • What is the end-to-end latency of executing the intelligence on a specific hardware setup?
  • What are the categories of worst customer-impacting mistakes the intelligence will probably make?
The answers to these questions will help decide where the intelligence should live, what support systems to build, how to tune the experiences, and more. The candidate should be able to talk about these.
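For instance, answering the latency question usually starts with a simple measurement harness like this sketch (the model call is a stand-in):

```python
# A minimal sketch of measuring end-to-end inference latency.
# run_intelligence is a stand-in for feature extraction plus model inference.
import statistics
import time

def run_intelligence(input_frame):
    time.sleep(0.002)  # pretend work
    return "open"

latencies_ms = []
for _ in range(200):
    start = time.perf_counter()
    run_intelligence(input_frame=None)
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
print(f"median: {statistics.median(latencies_ms):.1f} ms")
print(f"p99:    {latencies_ms[int(0.99 * len(latencies_ms))]:.1f} ms")
```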
 

Assess and Iterate

And of course, machine learning is iterative. The candidate must be able to talk about the process of iterating, saying things like:
  • You could look at lots of false positives and false negatives.
  • You could try more or different data.
  • You could try more sophisticated features.
  • You could try more complex machine learning.
  • You could try to change people’s minds about the viability of the system’s objectives.
  • You could try influencing the experience to work better with the types of mistakes you are making.
  • And then you iterate and iterate and iterate.
A junior candidate might start in the middle of this list and might only be able to talk about one or two of these topics. A senior candidate should have a good sense of all of them and be able to discuss options as I probe and add constraints. There is no right answer — good discussion is key.
And if you really want to learn how to ace the machine learning interview, you can check out the book or the audio book, which you can get for free if you start a trial account with Audible.

Do you Need an Intelligent System?

Intelligent systems connect users to artificial intelligence (machine learning) to achieve meaningful objectives. An intelligent system is one in which the intelligence evolves and improves over time, particularly when the intelligence improves by watching how users interact with the system.
How do you know an intelligent system is right for you?
 
One key factor in knowing whether you’ll need an intelligent system is how often you think you’ll need to update the system before you have it right. If the number is small—for example, five or ten times—then an intelligent system is probably not right. But if the number is large—for example, every hour for as long as the system exists—then you might need an intelligent system.
 
There are four situations that clearly require this level of iteration:
  • Big problems, that require a lot of work to solve.
  • Open-ended problems, which continue to grow over time.
  • Time-changing problems, where the right answer changes over time.
  • Intrinsically hard problems, which push the boundaries of what we think is possible.

Big Problems

Some problems are big. They have so many variables and conditions to address that they can’t really be completed in a single shot. For example, there are more web pages than a single person could read in their lifetime—more than a hundred people could read in their lifetimes. There are so many books, television programs, songs, video games, live event streams, tweets, news stories, and e-commerce products that it would take thousands of person-years just to experience them all.
 
These problems and others like them require massive scale. If you wanted to build a system to reason about one of these, and wanted to completely finish it before deploying a first version… well, you’d probably go broke trying. When you have a big problem that you don’t think you can finish in one go, an Intelligent System might be a great way to get started, and an efficient way to make progress on achieving your vision, by giving users something they find valuable and something they are willing to help you improve.

Open-Ended Problems

Some problems are more than big. Some problems are open-ended. That is, they don’t have a single fixed solution at all. They go on and on, requiring more work, without end. Web pages, books, television programs, songs, video games, live event streams—more and more of them are being created every day.
 
Trying to build a system to reason about and organize things that haven’t even been created yet is hard. In these cases, a static solution—one where you build it, deploy it, and walk away—is unlikely to work. Instead, these situations require services that live over long periods of time and grow throughout their lifetimes. If your problem has a bounded solution, an Intelligent System might not be right. But if your problem is big and on-going, an Intelligent System might be the right solution.

Time-Changing Problems

Things change. Sometimes the right answer today is wrong tomorrow. For example:
  • Imagine a system for identifying human faces—and then facial tattoos become super popular.
  • Imagine a system for moving spam email to a junk folder—and then a new genius-savant decides to get in the spam business and changes the game.
  • Or imagine a UX that users struggle to use—and then they begin to learn how to work with it.
One thing’s for certain—things are going to change. Change means that the intelligence you implemented yesterday—which was totally right for what was happening, which was making a lot of users happy, maybe even making your business a lot of money—might be totally wrong for what is going to happen tomorrow. Addressing problems that change over time requires the ability to detect that something has changed and to adapt quickly enough to be meaningful. If your domain changes slowly or in predictable ways, an Intelligent System might not be needed. On the other hand, if change in your domain is unpredictable, drastic, or frequent, an Intelligent System might be the right solution for you.

Intrinsically Hard Problems

Some problems are just hard. So hard that humans can’t quite figure out how to solve them. At least not all at once, not perfectly. Here are some examples of hard problems:
  • Understanding human speech.
  • Identifying objects in pictures.
  • Predicting the weather more than a few minutes in the future (apparently).
  • Competing with humans in complex, open-ended games.
  • Understanding human expressions of emotion in text and video.
In these situations, machine learning has had great success, but this success has come on the back of years (or decades) of effort, gathering training data, understanding the problems, and developing intelligence. These types of systems are still improving and will continue to improve for the foreseeable future. There are many ways to make progress on such hard problems. One way is to close the loop between users and intelligence creation in a meaningful application using an Intelligent System.
You can learn more from the book or the audio book, which you can get for free if you start a trial account with Audible.