OpenBCI’s new VR headset reacts to your brain and body

A completely new generation of human-computer interaction is coming.

This article is an installment of Future Explored, a weekly guide to world-changing technology. You can get stories like this one straight to your inbox every Thursday morning by subscribing here.

When Joseph Artuso was an undergrad at Columbia, he played rugby with Conor Russomanno, an engineering student. After feeling his mind “change” due to the concussions he suffered on the field, Russomanno hacked a brainwave-reading toy so that he could study his own mind.

That experience grew into a fascination with the brain, and in 2014, Russomanno co-founded OpenBCI, a company that creates open-source tools making it easy for people to access their own brain data.

Artuso is now the company’s president and chief commercial officer, and in December 2023, he and Russomanno took the stage together at Slush to unveil OpenBCI’s latest product: Galea Beta.

Conor Russomanno (left) and Joseph Artuso (right) unveiling the Galea Beta device at Slush. (Image: OpenBCI)

Galea Beta — named after Gal Sont, an OpenBCI collaborator who passed away from ALS — combines a professional-grade VR/mixed reality headset (developed by Varjo) with an array of physiological sensors that measure a user’s heart, skin, muscles, eyes, and brain activity.

This information can then be used to adjust what the person sees or hears through the headset in real time.
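To make that loop concrete, here is a minimal sketch, in Python, of how biosensor readings could drive a rendering change. Everything here is hypothetical: the names (BioSample, stress_proxy, the renderer methods) are invented for illustration and are not part of OpenBCI’s software.

```python
# Hypothetical sketch of a biosensor-driven feedback loop.
# None of these names come from OpenBCI's SDK; they only illustrate
# the idea of VR content reacting to the brain and body.
from dataclasses import dataclass

@dataclass
class BioSample:
    """One time slice of the signal types Galea measures."""
    eeg: list[float]              # brain activity, one value per electrode
    emg: list[float]              # facial muscle activity
    eda: float                    # skin conductance (sweat response)
    heart_rate_bpm: float         # from an optical heart sensor
    gaze_xy: tuple[float, float]  # eye-tracking coordinates

def stress_proxy(s: BioSample) -> float:
    """Toy 0-1 'stress' score from skin conductance and heart rate."""
    eda_term = min(s.eda / 20.0, 1.0)  # rough microsiemens scale
    hr_term = min(max(s.heart_rate_bpm - 60.0, 0.0) / 60.0, 1.0)
    return 0.5 * eda_term + 0.5 * hr_term

def update_frame(renderer, sample: BioSample) -> None:
    """Adjust what the user sees based on their current state."""
    stress = stress_proxy(sample)
    # A calming app might dim and slow the scene as stress rises.
    renderer.set_brightness(1.0 - 0.4 * stress)
    renderer.set_ambient_tempo(1.0 - 0.5 * stress)
```

In a real system an update like this would run on every frame, which is why the latency concerns Artuso raises below matter so much.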

A far cry from OpenBCI’s first product (a $300 EEG starter kit), Galea Beta is an enterprise device with a starting price of $25,000. The company spent five years developing it, with early partners using it in healthcare, entertainment, workplace training, and more.

“These are enterprise teams … looking to build adaptive experiences that can change based on the real-time reactions of the user’s brain and body,” Artuso said at Slush. 

A rendering of the Galea Unlimited system: a VR headset connected to a U-shaped component that rests over the shoulders. (Image: OpenBCI)

OpenBCI expects to deliver the devices to the first customers in the second quarter of 2024, but the system is just a stepping stone to Galea Unlimited. While Galea Beta needs to be tethered to a PC, OpenBCI’s goal is to make Galea Unlimited a “wearable computer,” with all of the processing happening on the device itself.

“By bringing this all into one system and putting it on the body, we are reducing the latency and speeding up this feedback loop,” said Artuso. 

“When that loop reaches the point where it’s happening faster than our ability to perceive it, when we can’t necessarily keep track of the ways that all of these sensors and inputs are adjusting in real time, it’s going to unlock an entirely new form of human-computer interaction that feels like a natural extension of our bodies,” he continued.

Freethink recently got a chance to talk to Artuso about Galea, the future of neurotech, and why the brain alone can’t unravel the mystery of the human mind.

This interview has been edited for length and clarity.

“There are no existing guidelines on how to solve all the challenges that have come up.”

Joseph Artuso

Freethink: What have been the biggest challenges with developing Galea? 

Artuso: The challenge with Galea is figuring out how to build something that’s never been built before.

We work to push the boundaries of what is possible today and don’t always know where the limit is when we start. There are no existing guidelines on how to solve all the challenges that have come up, so we “figure it out” a lot: removing environmental noise, correcting for movement artifacts, trying to quantify “unquantifiable” human states like stress.
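For a sense of what “removing environmental noise” involves, here is a minimal, generic EEG-cleaning recipe using standard SciPy filters. This is a textbook first pass, not OpenBCI’s actual pipeline: notch out powerline hum, then band-pass away slow drift and high-frequency artifacts.

```python
# Generic EEG cleanup sketch using SciPy; not OpenBCI's pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250  # sampling rate in Hz (typical for consumer EEG boards)

def clean_eeg(raw: np.ndarray, fs: int = FS) -> np.ndarray:
    """Remove powerline hum and drift from one EEG channel."""
    # 1. Notch out 60 Hz mains interference (use 50 Hz in Europe).
    b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=fs)
    x = filtfilt(b_notch, a_notch, raw)
    # 2. Band-pass 1-40 Hz: drops slow electrode drift and much of the
    #    high-frequency muscle/movement noise while keeping the main
    #    EEG frequency bands.
    b_bp, a_bp = butter(N=4, Wn=[1.0, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b_bp, a_bp, x)

# Example: clean 4 seconds of simulated raw signal.
raw = np.random.randn(4 * FS)
cleaned = clean_eeg(raw)
```

Movement artifacts and states like stress are much harder than this, which is part of why there is no off-the-shelf playbook.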

I’d say the biggest specific challenge with creating Galea has been the ergonomics side of things. Even without the physiological sensors, making a headset that is comfortable across the entire population is already a challenge. Everyone’s body is slightly different. Adding the extra complexity of keeping sensors in close contact with the correct parts of the body makes the ergonomics puzzle even more challenging. 

We are constantly approached by larger companies who are looking for help solving this problem.

Freethink: Was any part of the process not as difficult as you expected it to be?

Artuso: One thing that has not been as difficult as I expected is finding customer applications for Galea.

We started out in the entertainment industry with Valve, which is interested in applying Galea to playtesting and user research. Since then, we’ve branched out into applications involving education, wellbeing, training in many spaces (e.g., pilots, athletes, astronauts), user testing, medical research, and even fragrance and food research.

“Galea can be used to quantify emotional states in real time.”

Joseph Artuso

Freethink: Is there a particular use case for Galea that excites you the most, perhaps something one of your early adopters is working on? 

Artuso: I’d say what we pulled off with Christian Bayerlein for the TED talk will go down as a career highlight. Christian was an early Kickstarter backer of OpenBCI and always wanted to be able to fly a drone. Being able to make that possible on the TED stage was special.

Other than that, it’s been very exciting to see the work coming from Mark Billinghurst’s lab. They’ve been showing how data from Galea can be used to quantify emotional states in real time and then using those emotional metrics to dynamically adjust the VR content.

This type of “closed loop” experience that involves the user’s mind and body is going to profoundly change how we interact with computers for entertainment and productivity.
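As a rough sketch of how such a closed loop might be wired up (the feature names, thresholds, and methods here are invented for illustration, not the lab’s actual approach), an application could estimate arousal from normalized physiological features and nudge the content toward a target level:

```python
# Illustrative closed-loop controller: estimate an emotional metric,
# then steer the VR content toward a target state. All names and
# numbers are assumptions for this sketch, not Galea's real API.

def estimate_arousal(eda_norm: float, hr_norm: float, beta_ratio: float) -> float:
    """Average normalized (0-1) features into a crude arousal score."""
    return (eda_norm + hr_norm + beta_ratio) / 3.0

def adapt_content(game, arousal: float, target: float = 0.5) -> None:
    """Run periodically: keep the user near the target arousal level."""
    error = arousal - target
    if error > 0.1:        # over-aroused: ease off
        game.reduce_difficulty()
        game.soften_audio()
    elif error < -0.1:     # under-aroused: ramp up
        game.increase_difficulty()
```

The interesting part is the loop itself: the content changes the user’s state, which changes the next measurement, which changes the content again.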

Freethink: At Slush 2023, you talked about the need to change the status quo and put users in control of their “mental vault.” Can you elaborate on that and how OpenBCI might accomplish it?

Artuso: During the Slush talk, we introduced the notion of “closed-loop computing” and described how we as users are already part of a constant feedback loop with our devices. The fundamental thing we’re trying to change is who is in control of how that feedback loop operates.

Right now, it’s the operating system creators who have the most control: Apple, Microsoft, and Google. These companies control what user data is allowed to flow to apps and software made by companies like Facebook and how software is allowed to interact with everything else on a user’s device — just look at the constant tug of war between Apple and Facebook over iOS advertising permissions. 

We’ve become used to not being in control over how our devices work, and before neurotechnology goes mainstream, I want to see the status quo shift to be more in favor of the user, rather than device or OS manufacturers.

Before OpenBCI, I worked in the digital advertising space, and I know how much can be derived from things as simple as clicks, views, and dwell times. I also know that most consumers prioritize convenience and cost over privacy. It’s not going to be easy to change these incentives, and I don’t have 100% of the answers today. 

“We’ve become used to not being in control over how our devices work.”

Joseph Artuso

One thing that gives me hope is that if we look at the enterprise computing market, rather than the consumer/personal market, there is a much greater expectation that the device owners have the final say on privacy and data ownership. I think there are practices we can adopt on the consumer side as well.

The guiding principle behind the “mental vault” is that the user is prioritized above all other stakeholders when it comes to decisions about how data can be used. If we can also make it so that the user stands to benefit financially from companies who want to use their data, it may help combat the natural tendency to sacrifice privacy for lower cost and convenience. 
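One hypothetical way to encode that principle in software is a default-deny permission layer: no physiological data leaves the device unless the user has granted that specific app that specific purpose. The sketch below is purely illustrative and not an OpenBCI design.

```python
# Hypothetical "mental vault" sketch: every request for physiological
# data is checked against user-granted permissions before anything
# leaves the device. Illustrates the principle only.
from enum import Enum, auto

class Purpose(Enum):
    ADAPTIVE_EXPERIENCE = auto()   # adjust the app in real time
    ANALYTICS = auto()             # aggregate usage research
    ADVERTISING = auto()           # inferring interests for ads

class MentalVault:
    def __init__(self):
        # Default-deny: nothing is shared until the user opts in.
        self._grants: dict[tuple[str, Purpose], bool] = {}

    def grant(self, app: str, purpose: Purpose) -> None:
        self._grants[(app, purpose)] = True

    def request(self, app: str, purpose: Purpose, data):
        """Apps get data only for purposes the user explicitly allowed."""
        if self._grants.get((app, purpose)):
            return data
        raise PermissionError(f"{app} has no grant for {purpose.name}")

vault = MentalVault()
vault.grant("meditation_app", Purpose.ADAPTIVE_EXPERIENCE)
# meditation_app can now read stress data to adapt its content, but an
# ADVERTISING request from any app would raise PermissionError.
```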

OpenBCI recently added Professor Nita Farahany as a member of our Advisory Board. Nita has written extensively on neuroethics and the social implications of emerging technologies, and I’m excited to have her input on how OpenBCI can define commercially viable policies that can serve as an alternative for consumers and an example for other companies. 

“We are going to see a completely new generation of human-computer interaction emerge.”

Joseph Artuso

Freethink: When Freethink spoke with Conor in 2016, he said we were “just at the beginning” of a neuro-revolution. Do you think that’s still the case? Or have we reached a new level? If not, what will be the milestone that puts humanity at the next level?

Artuso: We’re much further along. Back in 2016, OpenBCI was one of a handful of companies that existed on the “consumer” side of neurotechnology. Now there are hundreds, maybe thousands, of neurotech companies. Dozens of them were started by OpenBCI customers who used our products to prototype their early MVPs [minimum viable products]!

Neurotechnology is definitely growing. UNESCO did some good research recently on the market size. Once Elon Musk jumped into the ring with Neuralink, more investment started flowing in, and I found that I had to explain far fewer acronyms and vocab terms than before. 

It’s still a common mistake to think that neurotechnology is only about the brain. A big lesson OpenBCI has learned is that the brain alone is not enough — you need context from the rest of the body and from the environment around the user in order to truly understand the human mind. 

You can see physiological sensors being adopted in more and more consumer products.

The Apple Watch, Whoop, and Oura Ring are all based on the same types of sensors that OpenBCI has been working with for Galea. The eye tracking and gesture detection on the Apple Vision Pro are an early glimpse at new interaction methods that’ll become more widespread as brain and body sensors become integrated into more everyday devices.

When we start combining the recent breakthroughs in AI with new data streams that quantify our external environment (e.g., spatial computing) and our mind and body (e.g., neurotechnology), we are going to see a completely new generation of human-computer interaction emerge.

It’s going to be a massive technological shift, and I’m excited that I’ll get to live through it.

