TV shows train AI to predict human behavior

Algorithms are learning to guess what you'll do next by analyzing shows like "The Office."

This article is an installment of The Future Explored, a weekly guide to world-changing technology. You can get stories like this one straight to your inbox every Thursday morning by subscribing here.

Columbia researchers have developed an algorithm for predicting human behavior based on thousands of hours of movies, sports games, and TV shows like “The Office.”

Computer vision is the field of AI that trains machines to interpret visual information like photos and videos. This computer vision algorithm is designed to give machines an “intuition” about human behavioral trends.

Gut feelings for machines: When you’re meeting someone for the first time, you know that you’re going to greet them in some way. You may not know until the moment arrives whether that greeting will include a handshake, a hug, or a fist bump, but you already have a sense of the set of expected behaviors.

Carl Vondrick, the Columbia professor who directed the study, explained that giving machines the ability to “think” in this fashion could hold big implications for the future of human-machine collaboration.

“Our algorithm is a step toward machines being able to make better predictions about human behavior, and thus better coordinate their actions with ours,” Vondrick said in a statement. “Our results open a number of possibilities for human-robot collaboration, autonomous vehicles, and assistive technology.”

Vondrick and the research team claim that this is the most accurate method to date for predicting human actions, interactions, and body language up to several minutes into the future. 


Part of the innovation here is that the algorithm creates higher-level associations between people and other objects, so when it can’t quite pin down a specific action, such as a particular secret handshake, it can still deduce the higher-level concept that relates to it — such as “greeting.”

“Not everything in the future is predictable,” said Dídac Surís, co-lead author of the paper. “When a person cannot foresee exactly what will happen, they play it safe and predict at a higher level of abstraction. Our algorithm is the first to learn this capability to reason abstractly about future events.”
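
To make the idea concrete, here is a minimal, hypothetical sketch in Python (not the researchers’ actual model or code): a toy hierarchy maps specific actions to broader categories, and the predictor falls back to the broader category whenever its confidence in any single action is too low to commit.

```python
# Minimal sketch (illustrative only, not the Columbia team's implementation):
# when no specific action is confidently predicted, back off to a more
# abstract label instead of guessing.

# Hypothetical toy hierarchy mapping specific actions to abstract parents.
ACTION_PARENT = {
    "handshake": "greeting",
    "hug": "greeting",
    "fist bump": "greeting",
    "wave": "greeting",
    "pass ball": "sports play",
    "shoot ball": "sports play",
}

def predict_with_abstraction(action_scores: dict[str, float],
                             threshold: float = 0.5) -> str:
    """Return the most likely specific action, or its abstract parent
    when no single action is confident enough to commit to."""
    best_action, best_score = max(action_scores.items(), key=lambda kv: kv[1])
    if best_score >= threshold:
        return best_action
    # Low confidence: predict at a higher level of abstraction instead.
    return ACTION_PARENT.get(best_action, "unknown")

# Example: the model is unsure whether a handshake or a hug comes next,
# so it commits only to the abstract prediction "greeting".
scores = {"handshake": 0.34, "hug": 0.31, "fist bump": 0.20, "pass ball": 0.05}
print(predict_with_abstraction(scores))  # -> "greeting"
```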

Harder than it sounds: Giving a machine “intuition” about what will come next, well enough to make predictions in open-ended scenarios, is incredibly complex. Past attempts have focused on predicting individual actions one-by-one, but these methods tend to fail in general contexts, such as social environments with no pre-established set of rules — where spontaneity and uncertainty are high.


Of course, this is the name of the game when it comes to human interactions. If machines are going to be useful at the speed and complexity we require, they won’t be able to fall back on pre-programmed actions; they’ll have to make productive, nuanced, real-time decisions in these open and often chaotic environments.

“Trust comes from the feeling that the robot really understands people,” said Ruoshi Liu, co-lead author of the paper. “If machines can understand and anticipate our behaviors, computers will be able to seamlessly assist people in daily activity.”

Cold water: The algorithm is a far cry from the crime-predicting precogs in Minority Report, but developments like this provoke urgent discussions about how AI is incorporated into public settings. 

In the wrong hands, a computer vision algorithm that can predict human behavior could become a problematic tool. It’s not hard to imagine unsavory applications ranging from advertising to law enforcement — potentially undercutting the researchers’ goal of improving people’s lives with smarter, more trustworthy machines. 

Developing safeguards and clear guidelines for use will be important as these algorithms improve and find potential uses in the real world.

Next steps: The algorithm has proven its efficacy within the lab, but the research team will next need to demonstrate it can produce similar results in other settings.

Looking toward the future, one clear use case for this technology is in autonomous vehicles. If a self-driving car is able to predict, for example, how a pedestrian looking down at their cell phone will act — and adjust accordingly — it could save that person’s life.

Aude Oliva, co-director of the MIT-IBM Watson AI Lab, who was not involved in the study, explained that this is a key development in the field of AI because it brings machines closer to being useful to humans, on human terms.

“Prediction is the basis of human intelligence,” Oliva said. “Machines make mistakes that humans never would because they lack our ability to reason abstractly. This work is a pivotal step towards bridging this technological gap.”

