This AI can finally tell humans why we’re losing 

Unlike AIs that have defeated human champions in games like go, this AI can tell you why.

By besting eight of the world’s best bridge players, a new AI has not only claimed a win in one of the games where humans had still held out, but also a victory for other AIs of its type.

French startup NukkAI’s bridge champ, named NooK, represents a different kind of AI from the deep learning neural networks that currently power self-driving cars, unlock smartphones with your face, and dominate strategy games like go.

While these AIs are capable of delivering results for complex problems, there’s a catch: we don’t know how they do it. 

They are a black box.


NooK takes a different tack. It uses what’s called neurosymbolic AI, which combines deep learning with more traditional AI approaches to create algorithms that keep deep learning’s strengths but can, essentially, show their work.

“What we’ve seen represents a fundamentally important advance in the state of artificial intelligence systems,” Stephen Muggleton, professor of machine learning at Imperial College London, told The Guardian.

The black box problem: Deep learning AI is powered by neural networks. As their name suggests, these algorithms take their design cues from the brain. Inside your brain, individual neurons work together as a network to solve complex problems, and deep learning AI does the same thing. 

By stacking layer after layer of artificial neurons, they are capable of turning out solutions to some really difficult, beyond-human-level problems, like finding patterns in data, and they excel when given huge amounts of it.


But the complexity of these AIs also means it is practically impossible for us to know exactly the process behind their answers — the so-called black box problem. This may not seem like a big deal when an AI is unlocking your iPhone, but when it’s driving a car, you would probably like to know what it is “thinking.”

Compare this to traditional, symbolic AI, where the algorithms are designed using a known rule set, so we can interpret their outputs. These rules-based AIs don’t require massive data sets, and can better handle abstract questions, Knowable Magazine reported — as long as the questions fit into their rules.

Now, researchers are learning to combine the two approaches.

“It’s one of the most exciting areas in today’s machine learning,” NYU computer and cognitive scientist Brenden Lake told Knowable.

Neurosymbolic AI will be important for tasks where we want an explanation from the AI — “white box” AI — Auburn University computer science and software engineering assistant professor Anh Nguyen told Freethink.

Neurosymbolic AI: Hybrid neurosymbolic AI starts with a set of known rules, within which its neural nets then operate. In this case, NooK was taught the rules of bridge, and then learned how to sharpen its play by practicing the game.
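NukkAI hasn’t published NooK’s internals, but the hybrid pattern described above can be sketched in a few lines of Python: an explicit, human-readable rule layer narrows the options, and a learned scoring function (here a trivial stand-in for the neural component) picks among them. Because the rules are explicit, the program can also say why it chose what it chose. All names and logic here are illustrative, not NukkAI’s actual code.

```python
def legal_plays(hand, suit_led):
    """Symbolic rule layer: in bridge, you must follow suit if you can."""
    follows = [card for card in hand if card[1] == suit_led]
    return follows if follows else list(hand)

def score(card):
    """Stand-in for a learned value network: prefer higher ranks."""
    ranks = "23456789TJQKA"
    return ranks.index(card[0])

def choose_play(hand, suit_led):
    # The rule layer constrains the choices; the learned scorer ranks them.
    candidates = legal_plays(hand, suit_led)
    best = max(candidates, key=score)
    # The explanation is readable because the rule layer is explicit.
    reason = (f"{len(candidates)} legal plays after the follow-suit rule; "
              f"picked {best} as the highest-scoring option.")
    return best, reason

hand = ["2H", "KH", "AS", "9C"]  # cards as rank + suit, e.g. king of hearts
play, why = choose_play(hand, suit_led="H")
```

With hearts led, the rule layer leaves only the two hearts as candidates, and the scorer picks the king — along with a plain-language trace of how it got there, which is exactly the “show your work” property a pure neural net lacks.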

“The NooK approach learns in a way that is much closer to human beings,” Muggleton told The Guardian.


When playing a modified version of bridge, NooK was able to win 67 out of 80 sets against eight human experts — although battling it out in a tweaked version of the game isn’t exactly the best test, as ZME Science pointed out.

Still, it’s a splashy win for the fledgling field of neurosymbolic AI.

