OpenBCI’s new VR headset reacts to your brain and body

A completely new generation of human-computer interaction is coming.

This article is an installment of Future Explored, a weekly guide to world-changing technology.

When Joseph Artuso was an undergrad at Columbia, he played rugby with Conor Russomanno, an engineering student. After feeling his mind “change” due to the concussions he suffered on the field, Russomanno hacked a brainwave-reading toy so that he could study his own mind.

Russomanno soon developed a fascination with the brain, and in 2014 it led him to co-found OpenBCI, a company that creates open-source tools to make it easy for people to access their own brain data.

Artuso is now the company’s president and chief commercial officer, and in December 2023, he and Russomanno took the stage together at Slush to unveil OpenBCI’s latest product: Galea Beta.

Conor Russomanno (left) and Joseph Artuso (right) unveiling the Galea Beta device at Slush. (Credit: OpenBCI)

Galea Beta — named after Gal Sont, an OpenBCI collaborator who passed away from ALS — combines a professional-grade VR/mixed reality headset (developed by Varjo) with an array of physiological sensors that measure a user’s heart, skin, muscles, eyes, and brain activity.

This information can then be used to adjust what the person sees or hears through the headset in real-time.
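
To get a feel for what that feedback loop looks like in software, here is a minimal sketch built on BrainFlow, an open-source library widely used to stream data from OpenBCI boards. It runs on BrainFlow’s built-in synthetic board rather than on Galea itself, and the alpha-band “calm” score and scene-brightness parameter are illustrative stand-ins, not part of any Galea API.

```python
import time
import numpy as np
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

# BrainFlow's synthetic board generates fake signals, so this runs without hardware.
board_id = BoardIds.SYNTHETIC_BOARD
board = BoardShim(board_id, BrainFlowInputParams())
board.prepare_session()
board.start_stream()

fs = BoardShim.get_sampling_rate(board_id)          # samples per second
eeg_channels = BoardShim.get_eeg_channels(board_id)

def alpha_fraction(signal, fs):
    """Fraction of spectral power in the 8-12 Hz alpha band (a crude calm proxy)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[(freqs >= 8) & (freqs <= 12)].sum() / spectrum.sum()

brightness = 0.5  # hypothetical VR scene parameter, invented for this sketch
for _ in range(10):
    time.sleep(1)
    window = board.get_current_board_data(fs)       # roughly the last second of data
    calm = np.mean([alpha_fraction(window[ch], fs) for ch in eeg_channels])
    # The feedback step: ease the scene darker as the (simulated) user relaxes.
    brightness = 0.9 * brightness + 0.1 * (1.0 - calm)
    print(f"calm={calm:.3f}  brightness={brightness:.3f}")

board.stop_stream()
board.release_session()
```

The exponential smoothing on the last step hints at why this is harder than it looks: raw biosignal metrics are noisy, and feeding them into the display unfiltered would make the scene flicker.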

A far cry from OpenBCI’s first product (a $300 EEG starter kit), Galea Beta is an enterprise device with a starting price of $25,000. The company spent five years developing it, with early partners using it in healthcare, entertainment, workplace training, and more.

“These are enterprise teams … looking to build adaptive experiences that can change based on the real-time reactions of the user’s brain and body,” Artuso said at Slush. 

A rendering of the Galea Unlimited system, which features a VR headset that connects to a U-shaped component resting over the shoulders. (Credit: OpenBCI)

OpenBCI expects to deliver the devices to the first customers in the second quarter of 2024, but the system is just a stepping stone to Galea Unlimited. While Galea Beta needs to be tethered to a PC, OpenBCI’s goal is to make Galea Unlimited a “wearable computer,” with all of the processing happening on the device itself.

“By bringing this all into one system and putting it on the body, we are reducing the latency and speeding up this feedback loop,” said Artuso. 

“When that loop reaches the point where it’s happening faster than our ability to perceive it — that we can’t necessarily keep track of the ways that all of these sensors and inputs are adjusting in real time — it’s going to unlock an entirely new form of human computer interaction that feels like a natural extension of our bodies,” he continued.

Freethink recently got a chance to talk to Artuso about Galea, the future of neurotech, and why the brain alone can’t unravel the mystery of the human mind.

This interview has been edited for length and clarity.

“There are no existing guidelines on how to solve all the challenges that have come up.”

Joseph Artuso

Freethink: What have been the biggest challenges with developing Galea? 

The challenge with Galea is figuring out how to build something that’s never been built before. 

We work to push the boundaries of what is possible today and don’t always know where the limit is when we start. There are no existing guidelines on how to solve all the challenges that have come up, so we “figure it out” a lot. For example: removing environmental noise and movement artifacts, or trying to quantify “unquantifiable” human states like stress.
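
As a concrete illustration of the first two of those, here is what a standard first pass at cleaning a raw EEG channel can look like. This uses generic SciPy filters with conventional cutoff frequencies; it is not OpenBCI’s actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def clean_eeg(raw, fs=250.0):
    """First-pass EEG cleanup: notch out mains hum, then band-pass."""
    # Remove 60 Hz environmental noise from mains power (50 Hz in Europe).
    b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=fs)
    x = filtfilt(b_notch, a_notch, raw)
    # Band-pass 1-45 Hz: drops the slow drift typical of movement and
    # electrode artifacts, plus high-frequency muscle noise.
    b, a = butter(4, [1.0, 45.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

# Demo: a 10 Hz "brain" rhythm buried in 60 Hz hum and a slow drift.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 2 * np.sin(2 * np.pi * 60 * t) + 0.5 * t
cleaned = clean_eeg(raw, fs)
```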

I’d say the biggest specific challenge with creating Galea has been the ergonomics side of things. Even without the physiological sensors, making a headset that is comfortable across the entire population is already a challenge. Everyone’s body is slightly different. Adding the extra complexity of keeping sensors in close contact with the correct parts of the body makes the ergonomics puzzle even more challenging. 

We are constantly approached by larger companies who are looking for help solving this problem.

Freethink: Was any part of the process not as difficult as you expected it to be?

One thing that has not been as difficult as I expected is finding customer applications for Galea.

We started out in the entertainment industry with Valve, which is interested in using Galea for playtesting and user research. Since then, we’ve branched out into applications involving education, wellbeing, training in many spaces (e.g., pilots, athletes, astronauts), user testing, medical research, and even fragrance and food research.

“Galea can be used to quantify emotional states in real-time.”

Joseph Artuso

Freethink: Is there a particular use case for Galea that excites you the most, perhaps something one of your early adopters is working on? 

I’d say what we pulled off with Christian Bayerlein for the TED talk will go down as a career highlight. Christian was an early Kickstarter backer of OpenBCI and always wanted to be able to fly a drone. Being able to make that possible on the TED stage was special. 

Other than that, it’s been very exciting to see the work coming from Mark Billinghurst’s lab. They’ve been showing how data from Galea can be used to quantify emotional states in real-time and then using those emotional metrics to dynamically adjust the VR content.

This type of “closed loop” experience that involves the user’s mind and body is going to profoundly change how we interact with computers for entertainment and productivity.
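
Billinghurst’s group publishes its own models, so the sketch below is only meant to show the general shape of the idea: a handful of physiological readings folded into a rough arousal and valence estimate that then drives content parameters. Every weight, threshold, and scene parameter here is invented for illustration, not drawn from the lab’s work or from Galea.

```python
from dataclasses import dataclass

@dataclass
class Biosignals:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens (electrodermal activity)
    alpha_power: float       # normalized 0-1 EEG alpha-band power

def estimate_state(s: Biosignals) -> dict:
    """Toy arousal/valence estimate; the weights are illustrative only."""
    # Higher heart rate and skin conductance read as higher arousal.
    hr_term = min(max((s.heart_rate - 60.0) / 60.0, 0.0), 1.0)
    eda_term = min(s.skin_conductance / 10.0, 1.0)
    arousal = 0.5 * hr_term + 0.5 * eda_term
    # More alpha power is loosely associated with a relaxed, positive state.
    return {"arousal": arousal, "valence": s.alpha_power}

def adapt_scene(state: dict) -> dict:
    """Map the estimate onto hypothetical VR content parameters."""
    return {
        "enemy_spawn_rate": 1.0 - state["arousal"],   # back off when stressed
        "ambient_light": 0.3 + 0.7 * state["valence"],
    }

print(adapt_scene(estimate_state(Biosignals(85.0, 6.0, 0.4))))
```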

Freethink: At Slush 2023, you talked about the need to change the status quo and put users in control of their “mental vault.” Can you elaborate on that and how OpenBCI might accomplish it?

During the Slush talk, we defined the notion of “closed-loop computing” and how we as users are already part of a constant feedback loop with our devices. The fundamental thing we’re trying to change is who is in control of how that feedback loop operates. 

Right now, it’s the operating system creators who have the most control: Apple, Microsoft, and Google. These companies control what user data is allowed to flow to apps and software made by companies like Facebook and how software is allowed to interact with everything else on a user’s device — just look at the constant tug of war between Apple and Facebook over iOS advertising permissions. 

We’ve become used to not being in control over how our devices work, and before neurotechnology goes mainstream, I want to see the status quo shift to be more in favor of the user, rather than device or OS manufacturers.

Before OpenBCI, I worked in the digital advertising space, and I know how much can be derived from things as simple as clicks, views, and dwell times. I also know that most consumers prioritize convenience and cost over privacy. It’s not going to be easy to change these incentives, and I don’t have 100% of the answers today. 

“We’ve become used to not being in control over how our devices work.”

Joseph Artuso

One thing that gives me hope is that if we look at the enterprise computing market, rather than the consumer/personal market, there is a much greater expectation that the device owners have the final say on privacy and data ownership. I think there are practices we can adopt on the consumer side as well.

The guiding principle behind the “mental vault” is that the user is prioritized above all other stakeholders when it comes to decisions about how data can be used. If we can also make it so that the user stands to benefit financially from companies that want to use their data, it may help combat the natural tendency to sacrifice privacy for lower cost and convenience.

OpenBCI recently added Professor Nita Farahany as a member of our Advisory Board. Nita has written extensively on neuroethics and the social implications of emerging technologies, and I’m excited to have her input on how OpenBCI can define commercially viable policies that can serve as an alternative for consumers and an example for other companies. 

“We are going to see a completely new generation of human-computer interaction emerge.”

Joseph Artuso

Freethink: When Freethink spoke with Conor in 2016, he said we were “just at the beginning” of a neuro-revolution. Do you think that’s still the case? Or have we reached a new level? If not, what will be the milestone that puts humanity at the next level?

We’re much further along. Back in 2016, OpenBCI was one of a handful of companies that existed on the “consumer” side of neurotechnology. Now there are hundreds, maybe thousands, of neurotech companies. Dozens of them were started by OpenBCI customers who used our products to prototype their early MVPs [minimum viable products]!

Neurotechnology is definitely growing. UNESCO did some good research recently on the market size. Once Elon Musk jumped into the ring with Neuralink, more investment started flowing in, and I found that I had to explain far fewer acronyms and vocab terms than before. 

It’s still a common mistake to think that neurotechnology is only about the brain. A big lesson OpenBCI has learned is that the brain alone is not enough — you need context from the rest of the body and from the environment around the user in order to truly understand the human mind. 

You can see adoption of physiological sensors in more and more consumer products. 

The Apple Watch, Whoop, and the Oura Ring are all based on the same types of sensors that OpenBCI has been working with for Galea. The eye tracking and gesture detection on the Apple Vision Pro are an early glimpse of new interaction methods that’ll become more widespread as brain and body sensors become integrated into more everyday devices.

When we start combining the recent breakthroughs in AI with new data streams that quantify our external environment (e.g., spatial computing) and our mind and body (e.g., neurotechnology), we are going to see a completely new generation of human-computer interaction emerge.

It’s going to be a massive technological shift, and I’m excited that I’ll get to live through it.
