Deep learning vs. machine learning: Explained

Both are powerful forms of AI, but one’s more mysterious than the other.

Well, you clicked this, so obviously you’re interested in some of the finer nuances of artificial intelligence. Little wonder; it’s popping up everywhere, taking on applications as far-ranging as trying to catch asymptomatic COVID infections via cough, creating maps of wildfires faster, and beating up on esports pros.

It also listens when you ask Alexa or summon Siri, and unlocks your phone with a glance.

But artificial intelligence is an umbrella term, and when we start moving down the specificity chain, things can get confusing — especially when the names are so similar, e.g., deep learning vs. machine learning.

Deep Learning vs. Machine Learning

Let’s make the distinction between deep learning and machine learning clear; the two are closely related. Machine learning is the broader category here, so let’s define that first.

Machine learning is a field of AI wherein a program “learns” from data. It existed on paper in the 1950s and in rudimentary forms by the 1990s, but only recently has the computing power it needs to really shine been available.

That learning data can come from a large set labeled by humans, called a ground truth, or it can be generated by the AI itself.

For example, to train a machine learning algorithm to recognize what is a cat (you knew the cat was coming) and what is not, you could feed it an immense collection of images, labeled by humans as cat or not-cat, to act as the ground truth. By churning through it all, the AI learns what makes something a cat, and can then identify cats on its own.
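For the curious, here’s roughly what that looks like in code. This is a minimal sketch in Python using scikit-learn, with random feature vectors standing in for real, human-labeled cat photos; the dataset, sizes, and labeling rule below are illustrative assumptions, not a real cat detector.

```python
# A minimal sketch of supervised machine learning, assuming scikit-learn.
# The "images" here are random feature vectors standing in for real,
# human-labeled cat/not-cat photos (the ground truth).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical ground truth: 1,000 "images" as 64-dimensional feature
# vectors, each labeled 1 (cat) or 0 (not a cat) by a human.
X = rng.normal(size=(1000, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a stand-in labeling rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model "learns" from the labeled examples...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...and can then identify "cats" it has never seen before.
print("accuracy on unseen examples:", model.score(X_test, y_test))
```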

The key difference between deep learning and machine learning is that deep learning is a specific form of machine learning powered by what are called neural networks.

As their name suggests, neural networks are inspired by the human brain. Between your ears, neurons work in concert; a deep learning algorithm does essentially the same thing. It uses multiple layers of artificial neurons to process the information, delivering, from deep within this complicated system, the output we ask it for.
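Again for the curious, here’s a minimal sketch of that layered structure, this time assuming PyTorch. The layer sizes and the cat-vs.-not-cat output are illustrative assumptions; a real deep network would be far larger.

```python
# A minimal sketch of a deep neural network in PyTorch: several layers of
# artificial "neurons" stacked between input and output.
import torch
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(64, 128),   # layer 1: 64 inputs feed 128 neurons
    nn.ReLU(),
    nn.Linear(128, 128),  # layer 2: the "deep" hidden processing
    nn.ReLU(),
    nn.Linear(128, 2),    # output layer: cat vs. not-cat scores
)

# One "image" (a 64-dimensional feature vector) flows through every layer
# in turn; the answer emerges from deep within the stack.
x = torch.randn(1, 64)
scores = deep_net(x)
print(scores)
```

Stack enough of those layers and you get the “deep” in deep learning.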

Take the computer program AlphaGo. By playing the strategy board game Go against itself countless times, AlphaGo developed its own unique playing style. Its technique was so unsettling and alien that during a game against Lee Sedol, one of the best Go players in the world, it made a move so discombobulating that Sedol had to leave the room. When he returned, he took another 15 minutes to think of his next move.

He has since announced his retirement. “Even if I become the number one, there is an entity that cannot be defeated,” Sedol told Yonhap News Agency.

Notice how Sedol called AlphaGo an “entity”? That’s because it didn’t play like a run-of-the-mill Go program, or even a typical AI. It made itself into something … else.

Deep learning systems like AlphaGo are, well, deep. And complex. They become programs we really do call entities, because they take on a “thinking” pattern so intricate that we don’t know how they arrive at their output. In fact, deep learning is often referred to as a “black box.”

The Black Box Problem

Since deep learning neural nets are so complex, they can become too complex to comprehend; we know what we put into the AI, and we know what it gave us, but in between, we don’t know how it arrived at that output. That’s the black box.

This may not seem too concerning when the AI in question is recognizing your face to open your iPhone, but the stakes are considerably higher when it’s recognizing your face for the police. Or when it’s trying to determine a medical diagnosis. Or when it’s keeping autonomous vehicles safely on the road. While not necessarily dangerous, black boxes pose a problem in that we don’t know how these entities arrive at their decisions, and if the medical diagnosis is wrong or the autonomous vehicle goes off the road, we may not know exactly why.

Does this mean we shouldn’t use black boxes? Not necessarily. Deep learning experts are divided on how to handle the black box.

Some researchers, like Auburn University computer scientist Anh Nguyen, want to crack open these boxes and figure out what makes deep learning tick. Meanwhile, Duke University computer scientist Cynthia Rudin thinks we should focus on building AI that doesn’t have a black box problem in the first place, like more traditional algorithms. Still other computer scientists, like the University of Toronto’s Geoff Hinton and Facebook’s Yann LeCun, think we shouldn’t be worried about black boxes at all. Humans, after all, are black boxes as well.

It’s a problem we’ll have to wrestle with, because it can’t really be avoided; more complex problems require more complex neural nets, which means more black boxes. In deep learning vs. machine learning, the former is going to wipe the floor with the latter when problems get tough, and it uses that black box to do so.

As Nguyen told me, there’s no free lunch when it comes to AI.
