
Thinking like a Machine Learning Architect


I had a chance to give an overview of machine learning to a “leader” – think executive at a big company. This person is world class at what they do, but has no understanding of machine learning, and fears they are getting left behind.

And it was an eye-opening experience for me. Think about it. Machine learning is hot. The news is full of amazing stories – beyond human level – successes with ML. What if a competitor gets there first? There’s a lot of pressure for a leader to make some good decisions.

And it’s not easy to know what to do. People throw around buzzwords, deep-boosted this, reinforcement BERT-ing of that, Bayesianized sigmoidization of neural activizationing, and blah-bla-die blah-da day. Ask a researcher, they’ll tell you how their latest technique is the linchpin to success and everyone else has been getting it wrong all along; ask someone just out of school and they’ll ask you where the training data is at; ask someone with a lot of experience, and, well, you can’t, because one of the big tech companies already hired them.

Where is the bridge between this potentially amazing tool, and a good decision about if and where to invest in it?

What’s needed is the ability to move beyond asking ‘can I model that?’ and start asking ‘should my organization model that?’ I call this type of thinking machine learning architecture.

And you don’t have to be super technical to be a great machine learning architect. Just think of it like any other investment a business could make. Should we buy a second delivery van? Well, is a van the right tool to solve the problems we’re having? Can we adapt our business to properly leverage it? What will it cost to run it month over month?

Basic questions, but they require understanding the strengths and weaknesses of machine learning fairly deeply, and they require popping up and understanding the context. I’m going to go through three questions you can ask to start thinking like a machine learning architect:

  1. Is machine learning the right way to solve the problem?
  2. How do machine learning systems integrate with existing systems?
  3. What is the cost to build and run the ML system over time?

Is machine learning the right way to solve the problem?

If you’re writing software for a bank to deal with withdrawals, you could use machine learning. You’ll have tons of training data, endless logs of transactions with info on: balance before, withdrawal amount, new balance. A simple regression problem…You could probably even get to like 99.5% accuracy if you worked at it hard enough…

Or you could write one line of code: newBalance = oldBalance - withdrawalAmount;

A bit of a silly example. But the point is that machine learning isn’t right for every problem.
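To see just how silly, here is a toy sketch (purely illustrative, in Python) of the machine-learning route next to the one-liner. It "learns" withdrawal arithmetic from synthetic transaction logs with gradient descent:

```python
import random

# Toy illustration, not a recommendation: "learn" withdrawal arithmetic
# from synthetic transaction logs, then compare with the one-liner.
random.seed(0)
examples = []
for _ in range(100):
    balance = random.uniform(100, 1000)
    amount = random.uniform(1, 100)
    examples.append((balance, amount, balance - amount))  # before, withdrawal, after

# Fit newBalance ~ w1 * oldBalance + w2 * withdrawalAmount by gradient descent.
w1 = w2 = 0.0
lr = 2e-6
for _ in range(20000):
    g1 = g2 = 0.0
    for balance, amount, target in examples:
        err = (w1 * balance + w2 * amount) - target
        g1 += err * balance
        g2 += err * amount
    w1 -= lr * g1 / len(examples)
    w2 -= lr * g2 / len(examples)

print(round(w1, 2), round(w2, 2))  # converges toward 1.0 and -1.0

# The non-ML solution is exact, free, and one line:
new_balance = lambda old, amount: old - amount
```

Twenty thousand training iterations to approximate what subtraction computes exactly.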

A machine learning architect will have a good understanding of the properties that make machine learning an efficient approach to solving a problem. Here are a few to get you started:

  1. The problem is very large. Like if you have to organize tens of millions of web pages or pictures or social network posts and it’s just too much to do manually. Think about it: there are more web pages than 100 people could examine in their lifetimes – more than 1,000 people could. When a problem is huge, machine learning might be the right answer.
  2. The problem is open ended. Even if you could handle everything that exists today, there are more books, buildings, products, people, and, well – stuff – every day. If you need to constantly make decisions about new things and it’s just not practical to keep up, machine learning might be the right answer.
  3. The problem changes. What’s worse than building an expensive system once? Building it every week, over and over, forever. We’re living through a huge change right now. Every business projection and decision process designed in 2019 is out the window for 2020. Machine learning isn’t a magic bullet for dealing with change, but it can make it faster and cheaper to adapt.
  4. The problem is hard. Things like human level perception, or where humans need some serious expertise to succeed. Think about a game like tic-tac-toe. Anyone can become ‘world class’ at that game by learning a few simple rules. Tic-tac-toe is not hard enough to need ML. Contrast this to chess, where experts are much, much better than beginners – that’s a hard problem and ML might help.

A machine learning architect will identify one of these four properties in a problem before suggesting machine learning. In fact, they’ll probably see several of them. If not, they’ll probably find a cheaper and more reliable solution without machine learning.

How do machine learning systems integrate with your existing systems?

One of the great Program Managers I had the pleasure to work with used to say: machine learning is an approach; it isn’t a solution. A machine learning architect will understand how the machine learning approach they select complements and is supported by the approach taken by their existing systems.

And there are several important approaches to machine learning. I call them Machine learning design patterns.

One important ml design pattern is called corpus based, where you invest in creating a data asset and leveraging it across a series of hard (but not-time-changing) problems. When I worked on the Kinect, we took a corpus based approach, collecting tons of data and carefully annotating it for many different uses.

Another important ml design pattern is called closed loop, where you carefully shape the interactions your users will have with your system so that they automatically create training data as they go. I used a closed loop approach when working on anti-abuse systems, where an adversary changed the problem every day, so there was much less value in building up a long-lived corpus.

I’ll provide links to videos about these two important ml design patterns (corpus based, closed loop), including a breakdown of their properties, and a walk-through of case studies.

A machine learning architect will be familiar with the pros and cons of the common machine learning design patterns. They’ll know how each design pattern could interact with their current systems & processes, which match well, and which would require major rework.

What is the cost to build and run the ML system over time?

Building an ML system is easy! Just install Python, maybe PyTorch, a bit of feature engineering, a few days of tuning, then compile the model into your current app and add ‘proven machine learning expert’ to your resume…right?

Well, that’s one way to do it, but if you’re doing it that way, you’re not thinking like a machine learning architect.

To be most valuable, machine learning needs a lot of support, and if you’re not building that support, you probably didn’t need machine learning to begin with.

These include things like telemetry systems, automated retraining, model deployment and management systems, orchestration systems, and client integrations.

You probably won’t have to invest in all of these to make efficient use of machine learning, but a machine learning architect would understand how important they are (given the ml design pattern they’d selected) and how much work it would be to add them to their existing systems.

For example, you’ve got to be realistic about mistakes, because any machine learning based system is going to make mistakes. Wild, and crazy mistakes. Like you know how when a human expert makes a mistake, they are usually at least kind of right? In the ballpark? Because they pretty much know what’s going on? Well, machine learning isn’t like that. Machine learning makes bat-zo-bizarro mistakes.

So ask yourself, how does your existing system interact with the mistakes you expect? Are the mistakes easy to detect and mitigate? Or will you have to change your existing workflows to identify and mitigate the problems that ML will create?
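One cheap mitigation is to put a sanity check between the model and the user, so the wildest mistakes never get through. Here is an illustrative sketch (all names are hypothetical):

```python
# Illustrative sketch (hypothetical names): wrap a model so wild predictions
# are caught by cheap sanity checks instead of reaching users.
def guarded_predict(model_predict, features, fallback, low, high):
    """Return the model's prediction only if it passes basic sanity bounds."""
    prediction = model_predict(features)
    if prediction != prediction or not (low <= prediction <= high):  # NaN or out of range
        return fallback  # e.g., a heuristic estimate, or "ask a human"
    return prediction

# Usage: a price model should never suggest a negative or absurd price.
wild_model = lambda features: -3_000_000.0  # a bat-zo-bizarro mistake
print(guarded_predict(wild_model, {}, fallback=19.99, low=0.0, high=10_000.0))  # 19.99
```

A guardrail like this doesn’t make the model better, but it bounds the damage while you track down why the mistake happened.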

So there are three steps to thinking like a machine learning architect: Do you have the right problem; how does machine learning integrate with existing systems; and what does it cost to build and run over time.

And developing the base skills to think this way can be valuable to anyone involved in machine learning systems, not just the machine learning scientist. So if you’re a machine learning professional, an engineering manager, a technical program manager, or even that leader I got a chance to talk to, who is trying to figure out if and how to invest in machine learning…

You can learn a lot more by reading the book, Building Intelligent Systems. Or by subscribing to my YouTube channel.

Good luck, and stay safe!

Design Patterns for Machine Learning

There are many skills that go into making working Intelligent Systems. As an analogy, in software you have base skills like:
  • Programming languages
  • Algorithms and data structures
  • Networking and other specialized skills
But then you have to take these skills and combine them to make a working system. And the ability to do this combination is a skill in its own right, sometimes called Software Engineering. To be good at software engineering you need to know about architecture, software lifecycles, management and program management — all different ways to organize the parts of the system and the people building the system to achieve success.
 
Software engineering skills are critical to moving beyond building small systems with a couple of people and starting to have big impact.
 
When working with AI and machine learning you have to add a bunch of things to the base skills, including:
  • Statistics
  • Data science
  • Machine learning algorithms
  • And then maybe some specialized things like computer vision or natural language understanding
But then you also need to integrate these skills into your broader software engineering process, so that you can turn data into value at large scale.
And the ability to do this combination is a skill in its own right too. Not Software Engineering exactly, call it Machine Learning Engineering.
And here are two very important concepts in setting up an Intelligent System for success in practice:
  • The first is Closing the Loop between users and intelligence so that they support each other.
  • The second is Balancing the key components of your system, and maintaining that balance as your problem and your users evolve over time.
Taken together these form the basis of what I call the closed loop intelligent system pattern for applying machine learning.

Closing the Loop

Closing the loop is about creating a virtuous cycle between the intelligence of a system and the usage of the system. As the intelligence gets better, users get more benefit from the system (and presumably use it more), and as more users use the system, they generate more data to make the intelligence better.
 
So, for example in a search engine, you type your query and get some answers. If you find a useful web page, you click it and are happy. Maybe you come back and use the search engine again. Maybe you tell your friends and they start using the search engine. As a user, you are getting value from the interaction. Great.
 
But the search engine is getting value from the interaction too. Because when you click your answers, the search engine gets to see which pages get clicked in response to which queries. Maybe the most popular answer to a particular query is 5th on the list. The search engine will see that users prefer the 5th answer to the answer it thought was best. The search engine can use this to adapt and improve. And the more users use the system, the more opportunities there are to improve.
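A minimal sketch of that feedback loop (with a made-up log format) might look like this: clicked results become positive training signal, and the ranker adapts toward what users actually prefer.

```python
from collections import defaultdict

# Sketch (hypothetical log format): turn search click logs into training
# signal, then rerank so the answer users actually prefer moves up.
logs = [
    {"query": "python gil", "shown": ["a", "b", "c", "d", "e"], "clicked": "e"},
    {"query": "python gil", "shown": ["a", "b", "c", "d", "e"], "clicked": "e"},
    {"query": "python gil", "shown": ["a", "b", "c", "d", "e"], "clicked": "a"},
]

clicks = defaultdict(int)
for entry in logs:
    for doc in entry["shown"]:
        # Clicked documents are positive examples; shown-but-skipped are
        # (weak) negatives. Real systems also correct for position bias here.
        clicks[(entry["query"], doc)] += 1 if doc == entry["clicked"] else 0

def rerank(query, shown):
    return sorted(shown, key=lambda doc: clicks[(query, doc)], reverse=True)

print(rerank("python gil", ["a", "b", "c", "d", "e"]))  # "e" now ranks first
```

Real search ranking is far more sophisticated, but the shape of the loop is the same: usage produces data, data improves intelligence, better intelligence attracts usage.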
 
This is a virtuous cycle between the intelligence of the system and the usage of the system. Closing the loop between users and intelligence is key to being efficient and scalable with Intelligent Systems.
 
Doing extra work to close the loop, and let your users help your Intelligent System grow, can be very efficient, and enable all sorts of systems that would be prohibitively expensive to build any other way.

Balancing Intelligent Systems

There are five things you need to keep in balance to have a successful Intelligent System.
 
The Objective. An Intelligent System must have a reason for being, one that is meaningful to users and accomplishes your goals. The objective should be one that requires an intelligent system (and that you can’t solve more easily and cheaply some other way), and it must also be achievable by the Intelligent System you will be able to build and run. Your objective might be relatively easy, or it might be hard. Getting the objective right is critical for achieving success, and it is hard to do.
 
The Experience. An Intelligent System needs a user experience that takes the output of the intelligence (such as the predictions its machine learning makes) and presents it to users to achieve objectives. To do this the experience must put the intelligence in a position to shine when it is right—while minimizing the cost of mistakes it makes when it is wrong. The experience must not irritate users, and it must leave them feeling they are getting a good deal. And it must also elicit both implicit and explicit feedback from users to close the loop and help the system improve its intelligence over time.
 
The Implementation. The Intelligent System implementation includes everything it takes to execute intelligence. This involves things like deciding where the intelligence lives: in a client, a service or a backend. It involves building the pipes to move new intelligence to where it needs to be safely and cheaply. It involves controls on how and when the intelligence is exposed to users. And controlling what and how much to collect in telemetry to balance costs while improving over time.
 
The Intelligence. Most Intelligent Systems will have complex intelligences made up of many, many models and hand-crafted rules. The process of creating these can be quite complex too, involving many people working over many years. Intelligence creation must be organized so that the right types of intelligence address the right parts of the problem, and so it can be effectively created by a team of people over an extended time.
 
The Orchestration. Things change, and all the elements of an Intelligent System must be kept in balance to achieve its objectives. This orchestration includes keeping the experience in sync with the quality of the intelligence as it evolves, deciding what telemetry to gather to track down and eliminate problems, and how much money to spend building and deploying new intelligence. It also involves dealing with mistakes, controlling risk, and defusing abuse.
 
If you want to learn more you can watch the free webinar.
 
And if you really want to learn how to create Closed Loop Intelligent Systems check out the book or the audio book, which you can get for free if you start a trial account with Audible.

Acing the Machine Learning Interview

In my decade of managing applied machine learning teams I’ve interviewed maybe a hundred people. Over that time, I’ve come to rely on two main questions. I’m going to tell you what they are.
 
First, a bit of philosophy. There are lots of things we could talk about in an interview:
  • What do you like?
  • What did you do in your last project?
  • Can you tell a good story about yourself?
  • Have you read lots of papers about machine learning?
  • Can you program?
  • Do you know statistics?
All of that is great, and of course candidates must know those things to get a job, but what I also want to know is: what can you do when you have a blank screen in front of you and an open-ended machine learning task to complete?
 
That isn’t easy to figure out in an interview, but I try. The approach I take is to talk through an end-to-end problem. For example:
 
Let’s walk through an example of intelligence creation: a blink detector. Maybe your application is authenticating users by recognizing their irises, so you need to wait until their eyes are open to identify them. Or maybe you are building a new dating app where users wink at the profiles of the users they’d like to meet. How would you build it?
 
There are so many interesting things to discuss, so many ways to approach this question, and I still learn from the conversations I have. A good answer has discussion on the following topics:
  • Understanding the Environment
  • Defining Success
  • Getting Data
  • Getting Ready to Evaluate
  • Simple Features and Heuristics
  • Machine Learning
  • Understanding the Tradeoffs
  • Assessing and Iterating

Understanding the environment

The first step in every applied intelligence-creation project is to understand what you are trying to do. Detect a blink, right? I mean, what part of “detect a blink” is confusing? Well, nothing. But there are some additional things you’ll need to know to succeed. Candidates might ask things like:
  • What kind of sensor will the eye images come from? Will the image source be standardized or will different users have different cameras?
  • What form will the input take? A single image? A short video clip? An ongoing live feed of video?
  • Where will the product be used? On desktop computers? Laptops? Indoors? Outdoors?
  • How will the system use the blink output? Should the output of the intelligence be a classification (that is, a flag that is true if the eye is closed and false if it is opened)? Should the output be a probability (1.0 if the eye is closed, and 0.0 if the eye is opened)? Or should the output be something else?
  • What type of resources can the blink detector use? How much RAM and CPU are available for the model? What are the latency requirements?
That’s a lot of questions before even getting started, and the answers are important to making good decisions about how to proceed.
 

Defining Success

To succeed, the blink detector will need to be accurate. But how accurate? This depends on what it will be used for. I want to know if a candidate can consider the experience that their model will drive and discuss how various levels of accuracy will change the way users perceive the overall system.
 
Questions include:
  • How many mistakes will a user see per day?
  • How many successful interactions will they have per unsuccessful interaction?
  • What will the mistakes cost the user?
I look for a discussion of options for how accuracy and experience will interact, how users will perceive the mistakes, and how they will be able to work around them.
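The arithmetic behind these questions is simple, and I like candidates who actually do it. A back-of-the-envelope sketch (the numbers here are made up for illustration):

```python
# Back-of-the-envelope sketch: translate model accuracy into what a user
# actually experiences. All numbers are made up for illustration.
interactions_per_day = 40      # e.g., login attempts across a team
accuracy = 0.95                # model is right 95% of the time

mistakes_per_day = interactions_per_day * (1 - accuracy)
successes_per_failure = accuracy / (1 - accuracy)

print(round(mistakes_per_day, 1))       # 2.0 visible mistakes per day
print(round(successes_per_failure, 1))  # 19.0 good interactions per bad one
```

“95% accurate” sounds great in a slide deck; “two failed logins every day” is how the user will describe it.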
 

Getting Data

Data is critical to creating intelligence. If you want to do machine learning right out of the gate, you’ll need lots of training data. I hope a candidate can discuss two distinct ways to think about getting data:
 
Getting data to bootstrap the intelligence:
  • Search the web and download images of people’s faces that are a good match for the sensor the blink-detector will be using (resolution, distance to the eye, and so on). Then pay people to separate the images into ones where the eye is opened and ones where it is closed.
  • Take a camera (that is a good match to the one the system will need to run on) to a few hundred people, have them look into the camera and close and open their eyes according to some script that gets you the data you need.
  • Something else?
How to get data from users as they use the system:
A well-functioning Intelligent System will produce its own training data as users use it. But this isn’t always easy to get right. In the blink-detector case some options include:
  • Tie data collection to the performance task: For example, in the iris-login system, when the user successfully logs in with the iris system, that is an example of a frame that works well for iris login. When the user is unable to log in with their iris (and has to type their password instead), that is a good example of a frame that should be weeded out by the intelligence.
  • Creating a data collection experience: For example, maybe a setup experience that has users open and close their eyes so the system can calibrate (and capture training data in the process). Or maybe there is a tutorial in the game that makes users open and close their eyes at specific times and verify their eyes are in the right state with a mouse-click (and capture training data).

Getting Ready to Evaluate

A candidate should have a very good understanding of evaluating models, including:
 
1. Setting aside data for evaluation:
Make sure there is enough set aside, and the data you set aside is reasonably independent of the data you’ll use to create the intelligence. In the blink-detector case you might like to partition by user (all the images from the same person are either used to create intelligence or to evaluate it), and you might like to create sub-population evaluation sets for: users with glasses, ethnicity, gender, and age.
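One way to sketch that per-user partition (an illustrative approach; the function names are mine) is to hash the user id, so the split is deterministic across reruns and machines:

```python
import hashlib

# Sketch: partition evaluation data *by user*, so no person's images appear
# in both the training and evaluation sets. Hashing the user id makes the
# split deterministic across reruns and machines.
def user_bucket(user_id: str, eval_fraction: float = 0.2) -> str:
    digest = hashlib.sha256(user_id.encode()).digest()
    return "eval" if digest[0] / 256 < eval_fraction else "train"

images = [("alice", "img1"), ("alice", "img2"), ("bob", "img3"), ("carol", "img4")]
split = {"train": [], "eval": []}
for user, image in images:
    split[user_bucket(user)].append(image)

# All of a user's images land in exactly one set.
assert user_bucket("alice") == user_bucket("alice")
print({name: len(members) for name, members in split.items()})
```

Sub-population sets (glasses, ethnicity, gender, age) can be built the same way, by filtering the evaluation bucket on per-user attributes.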
 
2. Creating a framework to run the evaluation:
That is, a framework to take an “intelligence” and execute it on the test data exactly as it will be executed at runtime. Exactly. The. Same.
 
3. Generating reports on intelligence quality that can be used to know:
  • How accurate the intelligence is.
  • If it is making the right types of mistakes or the wrong ones.
  • If there is any sub-population where the accuracy is significantly worse.
  • Some of the worst mistakes it is making.
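A minimal version of such a report might look like this (a sketch with tiny hand-made data; real reports would slice many more dimensions):

```python
from collections import defaultdict

# Sketch of a minimal quality report: overall accuracy, mistake types, and
# accuracy per sub-population (to catch groups the model fails for).
results = [  # (true_label, predicted_label, subpopulation)
    (1, 1, "glasses"), (1, 0, "glasses"), (0, 0, "glasses"),
    (1, 1, "no_glasses"), (0, 0, "no_glasses"), (0, 1, "no_glasses"),
]

fp = sum(1 for t, p, _ in results if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p, _ in results if t == 1 and p == 0)  # false negatives
accuracy = sum(1 for t, p, _ in results if t == p) / len(results)

by_group = defaultdict(list)
for t, p, group in results:
    by_group[group].append(t == p)

print(f"accuracy={accuracy:.2f} false_positives={fp} false_negatives={fn}")
for group, correct in by_group.items():
    print(f"  {group}: {sum(correct) / len(correct):.2f}")
```

The per-group breakdown is the part candidates most often forget, and it is exactly where problems like “works badly for users with glasses” hide.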
 

Simple Features and Heuristics

I like to have some discussion about simple heuristics that can solve the problem, because:
  1. Making some heuristics can help you make sure the problem is actually hard (if your heuristic intelligence solves the problem you can stop right away, saving time and money).
  2. It can create a baseline to compare with more advanced techniques—if your intelligence is complex, expensive, and barely improves over a simple heuristic, you might not be on the right track.
In the case of blink-detection you might try:
  • Measuring gradients in the image in horizontal and vertical directions, because the shape of the eye changes when eyes are opened and closed.
  • Measuring the color of the pixels and comparing them to common “eye” and “skin” colors, because if you see a lot of “eye” color the eye is probably open, and if you see a lot of “skin” color the eye is probably closed.
Then you might set thresholds on these measurements and make a simple combination of these detectors, like letting each of them vote “open” or “closed” and going with the majority decision.
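Here is roughly what those heuristics might look like in code (toy numbers and thresholds, not real computer vision):

```python
# Sketch of the heuristic baseline (toy thresholds, not real vision code):
# open eyes show strong local structure (iris edges) and dark pixels;
# closed lids are smooth skin. Each cheap detector votes.
def gradient_energy(image):
    """Sum of absolute horizontal differences in a grayscale image (2D list)."""
    return sum(abs(row[i + 1] - row[i]) for row in image for i in range(len(row) - 1))

def dark_pixel_fraction(image, threshold=80):
    """Fraction of pixels dark enough to be pupil/iris rather than skin."""
    pixels = [p for row in image for p in row]
    return sum(1 for p in pixels if p < threshold) / len(pixels)

def is_open(image):
    votes = [
        gradient_energy(image) > 400,      # edges present -> probably open
        dark_pixel_fraction(image) > 0.2,  # iris visible -> probably open
    ]
    # Unanimous here; with more detectors you would take a majority.
    return sum(votes) >= 2

open_eye = [[200, 30, 200], [200, 30, 200], [200, 30, 200]]       # dark iris stripe
closed_eye = [[180, 175, 180], [182, 178, 181], [179, 176, 180]]  # smooth skin
print(is_open(open_eye), is_open(closed_eye))  # True False
```

The thresholds here are pulled out of thin air; in practice you would tune them on the evaluation data you set aside earlier.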
 
If a candidate has computer vision experience their heuristics will be more sophisticated. If they don’t have computer vision experience their heuristics might be as bad as mine. It doesn’t matter as long as they come up with some reasonable ideas and have a good discussion about them.
 

Machine Learning

I look for candidates who can articulate a simple “standard” approach for the type of problem we’re discussing. And I am aware that standards change. It doesn’t matter what machine learning technique the candidate suggests, as long as they can defend their decisions and exchange ideas about the pros and cons.
 
And here is where I bring in the second question. I let the candidate pick their favorite machine learning algorithm and then ask them to teach me something about it.
 
This can mean different things for different people. They might go to the board and explain the math about how to train the model. Maybe they explain the model representation and how inference works. They could discuss what types of feature engineering works well with the approach. Maybe they explain what types of problems the approach works well on — and which it works poorly on. Or maybe they explain the parameters the training algorithm has and what the parameters do and how they know which to change based on the results of a training run.
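For example, a candidate who picks logistic regression might sketch the whole thing in a few lines: a sigmoid over a weighted sum, trained by nudging the weights along the gradient of the log loss. Something like this (a toy, fully separable dataset):

```python
import math

# The kind of walkthrough I mean, in code form: logistic regression from
# scratch on a toy 1-D dataset where the positive class sits at higher
# feature values.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [(-2.0, 0), (-1.5, 0), (-0.5, 0), (0.5, 1), (1.5, 1), (2.0, 1)]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    for x, y in data:
        p = sigmoid(w * x + b)
        # Gradient of the log loss w.r.t. w and b is (p - y) * x and (p - y).
        w -= lr * (p - y) * x
        b -= lr * (p - y)

predictions = [1 if sigmoid(w * x + b) > 0.5 else 0 for x, _ in data]
print(predictions)  # matches the labels on this separable toy set
```

Whether they reach for the math, the geometry, or the code doesn’t matter; what matters is that the explanation hangs together.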
 
What’s important is that they understand the tool and make me believe they can use it effectively in practice.
 

Understanding the Tradeoffs

I want a candidate to be able to discuss some of the realities of shipping a model to customers. This is a process of exploring constraints and trade-offs. Discussing questions like these:
  • How does the intelligence quality scale with computation in the run-time?
  • How many times will we need to plan to update the intelligence per week?
  • What is the end-to-end latency of executing the intelligence on a specific hardware setup?
  • What are the categories of worst customer-impacting mistakes the intelligence will probably make?
The answers to these questions will help decide where the intelligence should live, what support systems to build, how to tune the experiences, and more. The candidate should be able to talk about these.
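For the latency question in particular, I like answers grounded in measurement rather than guesses. A sketch (the `predict` stand-in is hypothetical):

```python
import time

# Sketch: answer "what is the end-to-end latency?" with measurements.
# Tail latency (p99) usually matters more to users than the average.
def predict(features):
    # Stand-in for real model inference (hypothetical).
    return sum(features) > 0

samples = []
for _ in range(1000):
    start = time.perf_counter()
    predict([0.1, -0.2, 0.3])
    samples.append(time.perf_counter() - start)

samples.sort()
p50 = samples[len(samples) // 2]
p99 = samples[int(len(samples) * 0.99)]
print(f"median={p50 * 1e6:.1f}us p99={p99 * 1e6:.1f}us")
```

Run on the actual target hardware, with the actual model, numbers like these settle the client-versus-service debate faster than any whiteboard argument.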
 

Assessing and Iterating

And of course, machine learning is iterative. The candidate must be able to talk about the process of iterating, saying things like:
  • You could look at lots of false positives and false negatives.
  • You could try more or different data.
  • You could try more sophisticated features.
  • You could try more complex machine learning.
  • You could try to change people’s minds about the viability of the system’s objectives.
  • You could try influencing the experience to work better with the types of mistakes you are making.
  • And then you iterate and iterate and iterate.
A junior candidate might start in the middle of this list and might only be able to talk about one or two of these topics. A senior candidate should have a good sense of all of them and be able to discuss options as I probe and add constraints. There is no right answer — good discussion is key.
And if you really want to learn how to ace the machine learning interview, you can check out the book or the audio book, which you can get for free if you start a trial account with Audible.