
More AI Basics for Healthcare Leaders

by Andrew Rosen

Thomasina: If you could stop every atom in its position and direction, and if your mind could comprehend all the actions thus suspended, then if you were really, really good at algebra you could write the formula for all the future; and although nobody can be so clever as to do it, the formula must exist just as if one could.

Tom Stoppard, Arcadia

Artificial Intelligence (AI) is suddenly everywhere – hyped, discussed, worried about. And it all stems from a very recent breakthrough, involving cat photos.

Computers are much better than humans at many things. Yet they struggle at some tasks any 5-year-old can do – like identifying a dog in a garden. Identifying images is critical for applications such as self-driving cars and facial recognition. But progress had been very slow. In 2009, scientists created the ImageNet Challenge to improve the situation. The Challenge pitted teams against each other to design software that could identify digital images. The images came from a hand-labeled data set of 3.2 million pictures. Each team “trained” its software on the data, and then the software tried to identify unseen images – “this is a table, this is a schnauzer.” The computers did not do well. Then, out of the blue, in 2012 researchers from the University of Toronto used a neural network to smash the previous record. Their 16% error rate approached that of a human (today, in the same competition, it is less than 5%). The dramatic win drew attention to AI and created the boom we are experiencing now. It’s also why so many things have improved so much recently – facial recognition, Google Translate, fraud detection, navigation, and more.

In part one we looked at AI in the simplest possible terms. In this part, we’ll go a little bit deeper to understand what is happening under the hood.

What is AI?

In the same way you probably don’t understand exactly how your phone works, you don’t need to understand exactly how AI works. You can create strategies around it just by understanding what it can do. But knowing the basics can be helpful.

Much artificial intelligence terminology is fluid. AI, cognitive computing, machine learning, and big data are imprecise terms that come more from marketing than from IT. The simplest way to think about AI is as computers and analytics that are a step more powerful than they used to be: able to do more things, faster, and to predict outcomes. This is not consciousness. A computer is still just a calculator that can do arithmetic really fast. We are hearing the more accurate, hype-free term “applied analytics” more and more.

Finding the Formula

Computers think in math and behave according to formulas. To get a computer to do something, we need to know the mathematical formula for that action and program it in. Computers have been limited to doing tasks and solving problems based on formulas that we know and understand. We can tell a computer how to understand a keystroke, or a filled circle in a Scantron, or how to play tic-tac-toe, because there are simple formulas for doing so. We can even get a computer to tell a chihuahua from an elephant if we know its height. But we don’t know the mathematical formula for how to recognize a tree, or a face, or a word. The other way computers solve problems is through raw power – literally trying every possibility and picking the best one. This only works when the number of possibilities is small. A computer can’t use raw power alone to win at chess. Even though there are only 32 pieces on a 64-square board, there are more possible games of chess than atoms in the universe. Even the most powerful computer can’t try every possible chess combination to work out how to win in a reasonable time frame. So getting a computer to drive on a busy highway by using raw computing power – no chance.
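To make the “known formula” idea concrete, here is the chihuahua-versus-elephant rule as a few lines of Python. This is only an illustration, not anything from a real system, and the 100 cm cutoff is an arbitrary assumption:

```python
def classify_by_height(height_cm: float) -> str:
    # We already know the "formula", so we can simply program it in.
    # The 100 cm threshold is an illustrative assumption, not a real cutoff.
    return "chihuahua" if height_cm < 100 else "elephant"

print(classify_by_height(23))   # chihuahua
print(classify_by_height(300))  # elephant
```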

So without the formula, or with an imperfect formula, computers are not very good at certain complex tasks. Until now.

Most of what we call AI is a set of techniques that allow computers to work out (and adapt and improve) the mathematical formula for complex things. What is the mathematical formula (algorithm) for a cat in a picture? For the word “fromage” in speech? For a tumor in a mammogram? For the behavior of bees in a hive? For how a human would respond in traffic? To work out the formula, the computer uses informed trial and error. This requires enormous amounts of data in digital format, and tons of power to run trillions of calculations over and over very fast. Only recently have we had the computing power and the gigantic data sets to fuel this work.

The trial and error happens in what is called a neural network. A neural network is the computing version of a rudimentary human brain. Interconnected “neurons” gather information from surrounding neurons and then pass that information on if it meets a certain threshold. Think of it as a machine with lots of different, interconnected settings and dials arranged in vertical columns (or layers). The computer is given some data at one end; the data travels through all the various nodes, and an answer appears at the other end. If the answer is wrong, the network looks at the error, tweaks all the dials, and runs another trial. It keeps doing this until it gets the right answer. Once it gets the right output it has “learned” – but what it has really done is create an algorithm that works for that specific task. That algorithm can now be put to work as is, and/or it can be improved over time with new data. There are so many possible tweaks to the settings that a human could not possibly try each combination fast enough.
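Here is a minimal sketch of that trial-and-error loop in Python (using NumPy). It trains a tiny two-layer network on a toy problem; the layer sizes, learning rate, and task are illustrative assumptions, not anything a real system would use:

```python
import numpy as np

# Inputs and desired answers for a toy problem (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # the "dials" of layer 1
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # the "dials" of layer 2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: data goes in one end, an answer comes out the other.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Look at the error, then nudge every dial a little (backpropagation).
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]]
```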

Two Main Types of Neural Network

There are all sorts of ways to set up these neural networks, and different designs are better at solving different problems.

A Convolutional Neural Network (CNN) consists of lots of neuron-like layers that each look for small patterns. This is particularly good for detecting what is going on in photos. For example, in a dog picture, one layer may look for certain edges in parts of the image, the next layer may assemble those edges into eyes, and so on.
A Recurrent Neural Network (RNN) can feed back on itself and is good at detecting things that occur across time or in sequence. It’s used for understanding language, speech recognition, and translation.
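For readers who want to see what these layers look like in code, here is a minimal sketch of both network types using the PyTorch library (an assumed choice; the layer sizes, image dimensions, and class counts are arbitrary):

```python
import torch
import torch.nn as nn

# A tiny CNN: stacked convolutional layers, each looking for small local patterns.
tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # layer 1: simple edges and colors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # layer 2: combinations of edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),                    # final guess: "dog" vs "not dog"
)
print(tiny_cnn(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 2])

# A tiny RNN: it feeds its own state back in at each step of a sequence.
tiny_rnn = nn.RNN(input_size=10, hidden_size=32, batch_first=True)
outputs, last_state = tiny_rnn(torch.randn(1, 5, 10))  # a 5-step sequence
print(outputs.shape)  # torch.Size([1, 5, 32]) -- one output per time step
```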

Different Ways to “Train” the Network

You can “train” these networks in different ways, too. You can train the computer to work out how to do something that is already known, like how to identify a lung tumor. Or you can ask it to work out the underlying formula of something, like whether there are indicators in human breath that could diagnose lung cancer. You can also use a mixture of both methods.

Supervised learning happens when a programmer gives the neural network a set of inputs along with an answer key. For example, a set of mammography images in which a certain type of abnormality has been detected and labeled. The computer then keeps adjusting its algorithm so it can match each image with the right label more accurately.
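Here is a minimal supervised-learning sketch using the scikit-learn library. Its built-in breast-cancer dataset (tabular measurements, not mammography images) stands in for the labeled data, and logistic regression stands in for the neural network; both are assumptions made for brevity:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The "answer key": each case comes labeled as benign or malignant.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)         # learn from the labeled examples
print(model.score(X_test, y_test))  # accuracy on unseen, labeled cases
```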

In unsupervised learning a neural network is given an unlabeled dataset and will keep adjusting its algorithms until it finds patterns in the data. This is a much more mysterious process. In one famous example, an unsupervised neural network was tasked with finding patterns in millions of YouTube videos – and came up unassisted with the idea of “cat” as being an important visual pattern.
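And a minimal unsupervised-learning sketch, again with scikit-learn. K-means clustering is used here as a simple stand-in for the far more elaborate neural-network example above; the data is synthetic and carries no labels:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic, unlabeled data: the true group labels are thrown away.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# No answer key is given; the algorithm groups the points by whatever
# structure it finds on its own.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])      # the groups it invented, one per data point
print(kmeans.cluster_centers_)  # the "patterns" it found in the unlabeled data
```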

Decision Opacity

One of the issues here is that what goes on inside the neural network is often impossible for humans to understand. We could task a computer with predicting whether a given pathology specimen suggests disease, and train it with a big set of pathology specimens and known diagnoses. But it’s possible that when the computer flags a specimen as reflective of a disease, it is seeing something, a pattern, that humans have never identified as meaningful. It’s difficult, and often impossible, to know this, or, if we know it, to understand it. This becomes a big problem when decisions are made based on AI. Already, in Europe, there are regulations that give consumers a right to an explanation if, say, they are rejected for a loan by an AI.

AI in Healthcare

In healthcare, there will be technologies that enhance existing tools and pathways, such as speech recognition, automation, and robotics. This will be a more straightforward and familiar area to manage. There will also be areas where computers will predict or make decisions that humans won’t be able to understand – diagnosis, bed management, staffing, hiring, treatment recommendations. This will be a more complex area of development.

In the near term, healthcare leaders will see more powerful clinical and management tools. Most of these should be treated like any other investment and considered on their utility and merits. In the longer term, where AI displaces staff, changes workflows, or directly affects patient care, much more careful oversight and management will be needed. We will report on these developments as they occur. AI has developed rapidly since 2012 and will continue to move very fast.

I like this video series for explaining what a neural network is.

McKinsey’s executive guide is also a good overview, especially for enumerating the different types of AI and use cases.

