A few weeks ago, I saw the best-quality mixed-reality headset yet, with an interface controlled using my fingers and eyes: Apple’s Vision Pro. But a few months before its announcement, I saw something perhaps even wilder. Clips went on my ears, a crown of rubbery-tipped sensors nestled into my hair, and a face mask lowered in front of my eyes. Suddenly I was looking at my own brain waves in VR and moving things around with only tiny movements of my facial muscles. I was test-driving OpenBCI’s Galea.
The future of VR and AR is advancing steadily, but input remains a challenge. For now, the territory is shifting from physical controllers to hand and eye tracking. But there are deeper possibilities beyond that, and they’re neural.
I don’t even know how to describe my experience trying the Galea headset, because, well, it’s a platform to explore what happens next. And my experiences in neural tech are still embryonic.
OpenBCI, a Brooklyn-based company building research tools for noninvasive brain-computer-interface technology, has been adapting its own sensor systems into a mixed-reality headset called Galea, which will become available later this year. I tried a prototype version of Galea at OpenBCI’s Brooklyn offices, curious about how brain-computer interfaces could work in VR and AR. I was also wondering what the future could hold for our interactions with computers in general.
A new sensor platform for VR and AR
I’ve tried simpler sets of EEG sensors focused on a particular visual task. NextMind, which Snap acquired last year, let me trigger actions by focusing on particular spots, for example. It felt almost like a mouse click, but with my mind. OpenBCI’s Galea, by contrast, is a complete mix of all sorts of sensors: EEG, EMG, EDA, PPG and eye tracking. It’s an acronym festival.
EEG, or electroencephalography, measures the electrical activity of brain signals. OpenBCI measures this with rubbery-tipped sensors that push in close to my scalp, much like the NextMind hardware I tried back in 2021. The electrodes work when dry, but need to stay clear of too much hair to get a good signal.
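To make that more concrete: EEG software typically splits the raw electrode signal into frequency bands (alpha, beta, gamma and so on) and compares their power. This isn’t OpenBCI’s code, just a toy Python sketch of the idea, with a synthetic signal standing in for real electrode data and a naive DFT in place of a proper FFT pipeline.

```python
import cmath
import math

FS = 250          # sample rate in Hz, typical of research EEG boards
N = 500           # 2 seconds of samples

# Synthetic "EEG": a strong 10 Hz alpha rhythm plus weak 40 Hz activity.
signal = [math.sin(2 * math.pi * 10 * n / FS)
          + 0.2 * math.sin(2 * math.pi * 40 * n / FS)
          for n in range(N)]

def band_power(x, fs, lo_hz, hi_hz):
    """Sum of squared DFT magnitudes for bins falling in [lo_hz, hi_hz]."""
    n = len(x)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if lo_hz <= freq <= hi_hz:
            bin_value = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                            for t in range(n))
            power += abs(bin_value) ** 2
    return power

alpha = band_power(signal, FS, 8, 12)    # relaxed-state rhythm dominates here
gamma = band_power(signal, FS, 30, 45)
```

Real pipelines add filtering, artifact rejection and FFTs, but band power like this is the raw material most EEG interfaces work from.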
EMG, or electromyography, measures the electrical activity of nerves and muscles. Here, the sensors sit around a face mask on the headset, pressing around my forehead, eyes and cheeks. When I make little movements of my facial muscles, the sensors pick up the readings. But unlike face cameras on VR headsets like the Quest Pro, which look for physical movement, the readings here are purely electrical. You could theoretically make movements so small they’d be more like simple neural impulses. Meta is also developing EMG technology for wristbands that can be used with future headsets, but that wrist tech only measures finger and hand movement via the wrist. OpenBCI’s sensors are looking at the face.
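Those tiny movements come down to detecting bursts of electrical activity above a resting baseline. Here’s a hypothetical sketch with simulated data standing in for real facial EMG; the windowed-RMS threshold detector is my illustration, not OpenBCI’s or Meta’s actual algorithm.

```python
import math
import random

random.seed(0)
FS = 1000  # EMG is fast; assume 1,000 samples per second

def noise():
    """Resting-baseline sensor noise."""
    return random.gauss(0, 0.05)

# 1 s of baseline, then a 200 ms burst of muscle activity, then baseline.
samples = [noise() for _ in range(1000)]
samples += [noise() + 0.5 * math.sin(2 * math.pi * 120 * n / FS)
            for n in range(200)]
samples += [noise() for _ in range(800)]

def detect_bursts(x, fs, win_ms=50, threshold=0.15):
    """Return start indices of windows whose RMS exceeds the threshold."""
    win = int(fs * win_ms / 1000)
    hits = []
    for start in range(0, len(x) - win, win):
        chunk = x[start:start + win]
        rms = math.sqrt(sum(v * v for v in chunk) / win)
        if rms >= threshold:
            hits.append(start)
    return hits

onsets = detect_bursts(samples, FS)  # windows covering the burst
```

The interesting engineering problem is in choosing thresholds low enough to catch near-invisible impulses without firing on noise, which is the finesse work described below.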
EDA, or electrodermal activity, is an electrical measurement of sweat on the skin. It’s often used for stress-sensing: Fitbit built its own EDA sensors into its Fitbit Sense smartwatch for stress measurements. OpenBCI’s EDA sensors are on the forehead part of the headset.
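Stress sensing with EDA usually means watching for a sudden phasic rise in skin conductance above the slow-moving tonic baseline. A minimal sketch of that idea, with a made-up conductance trace and thresholds chosen purely for illustration:

```python
FS = 4  # EDA changes slowly; a few samples per second is typical

# Made-up skin-conductance trace in microsiemens: flat baseline,
# then a ramp up (a "stress response"), then a new plateau.
trace = [2.0] * 40 + [2.0 + 0.1 * i for i in range(10)] + [3.0] * 30

def detect_responses(x, fs, window_s=5, rise_threshold=0.3):
    """Return indices where conductance jumps above the recent average."""
    win = int(window_s * fs)
    events = []
    for i in range(win, len(x)):
        baseline = sum(x[i - win:i]) / win
        if x[i] - baseline >= rise_threshold:
            events.append(i)
    return events

events = detect_responses(trace, FS)  # flags the ramp, not the plateaus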
PPG, or photoplethysmography, is optical heart-rate sensing, similar to what’s already on most smartwatches. The final Galea headset measures PPG on the forehead as well, but in my demo, I wore ear clips that measured it.
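Conceptually, PPG heart-rate estimation is simple: find the pulse peaks in the optical signal and invert their average spacing. A toy version on a clean synthetic pulse wave (real signals need filtering and motion-artifact rejection before anything like this works):

```python
import math

FS = 50        # optical pulse sensors commonly sample at tens of Hz
DURATION = 10  # seconds

# Synthetic pulse wave at 75 beats per minute (1.25 Hz).
ppg = [math.sin(2 * math.pi * 1.25 * n / FS) for n in range(FS * DURATION)]

def heart_rate_bpm(x, fs):
    """Estimate BPM from the mean spacing of local maxima above 0.5."""
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > 0.5 and x[i] > x[i - 1] and x[i] > x[i + 1]]
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60 / (sum(intervals) / len(intervals))

bpm = heart_rate_bpm(ppg, FS)  # recovers roughly 75 BPM
```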
The sensor array is married to an existing VR-AR headset, the Varjo XR-3 (or lower-cost Varjo Aero), and connects to a PC to run the software and analyze the data. Varjo’s high-resolution display and passthrough video mixed reality gives OpenBCI’s sensors a lot of software possibilities to work with in VR and AR-type scenarios. But OpenBCI’s sensor array can work independently of a VR headset, or connect with others.
Apple’s Vision Pro could also be an ideal platform for OpenBCI because of its processing power and standalone function. According to OpenBCI’s CEO and co-founder, Conor Russomanno, working on something like Vision Pro, or future AR and VR platforms, is totally possible. He sees Apple’s recent moves as emphasizing the computer aspects of mixed reality, which is exactly how OpenBCI thinks of the opportunities, too.
OpenBCI’s sensor array could pursue multiple possibilities at once. It’s not about one particular goal, but about how the system’s sensors could enable research and also open doorways to interactions with computers. Most recently, OpenBCI worked with Christian Beyerlein, a hacker with spinal muscular atrophy, who used OpenBCI’s sensor array to control a drone with facial muscle impulses. That presentation, given as a TED talk, demonstrated how brain-computer interfaces could open up new doorways of accessibility and control over virtual and real-world tech.
My demos included an EMG-based control game called Cat Runner. I moved a cartoon character back and forth with little movements of my facial muscles, which were recognized by the EMG sensors in the Galea facepiece. This is the same game Meta has been using to test and demonstrate its neural input wristband tech, which I saw last fall at Meta’s Reality Labs Research headquarters in Redmond. But Meta is looking at sensing movement on the wrist, whereas OpenBCI’s Russomanno sees better opportunities on the head, where the sensors won’t interfere with existing efforts in camera-based hand tracking.
EMG technology is meant to sense electrical impulses so subtle that no muscles may seem to move at all, but that level of relationship between sensors, algorithms and human input could take a while to finesse. OpenBCI’s multiple sensor types could provide a ton of data that could point to future directions for research, or to new interfaces. They could also provide feedback on how using VR and AR affects the brain or attention. There have been previous efforts to use sensors on VR headsets to study cognitive processes, including the HP Omnicept, which had a heart-rate sensor, and most eye-tracking-enabled headsets.
Another demo, for the EEG sensors, created a meditative “synesthesia room” where my different brain-wave states were turned into colors of ambient light. My brain waves apparently changed the colors I saw. I started trying to focus in certain ways to bring out different colors. It seemed to be working? Feedback is a key part of a lot of what OpenBCI’s sensors do, and of how they could eventually be used to train and improve how we control things with our own neural impulses.
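A room like that could be built by normalizing the power in a few EEG bands and mapping the result to a light color. This particular mapping is purely my invention as an illustration, not how OpenBCI’s demo actually works:

```python
def bands_to_rgb(theta, alpha, beta):
    """Map relative power in three EEG bands to an RGB ambient-light color."""
    total = theta + alpha + beta
    if total == 0:
        return (0, 0, 0)
    return tuple(round(255 * p / total) for p in (theta, alpha, beta))

# A relaxed, alpha-dominant reading would skew the room toward green.
color = bands_to_rgb(1.0, 6.0, 1.0)
```

The feedback loop comes from closing the circle: you see the color, adjust your focus, and the band powers (and the room) shift in response.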
Russomanno sees what Beyerlein is doing with drone control, using EMG to have a system extend his own brain and body functions, as a sign that neurofeedback will change how we interact with computers just as much as AI will.
“It’s not to say that AI is not useful, it’s just not the end-all be-all, not the only holy grail solution that’s going to change the world,” Russomanno said. “We actually like neurofeedback, and really smart UI, and really smart design, which is optimizing the other direction of the loop. Using technologies to make computers teach humans better is this other cool revolution that we’re going to go through.”
It reminds me of the path of smartwatch tech, where optical heart-rate sensors started to open up flows of data that resulted in new health features on watches over time. Fitbit layered multiple new sensors onto its Sense watch, which also included an EDA stress sensor. Are OpenBCI’s Galea and efforts like it going to open up new doorways to future wearable sensors that interface with what we see and hear, and with what our hands interact with?
Russomanno is certain of it. He sees the arrival of better standalone VR and AR headsets, including Apple’s, as a pathway to new inputs and peripherals.
“We’re just not going to be able to know until these headsets are out there, and people start building the bi-directional applications,” Russomanno said to me via video chat months after our demo, referring to more advanced AR and mixed-reality devices to come. “The cool thing about an AR headset is that it has every external world sensor that you would want to know about anything about your local environment. And then what we do is like the internal world. When you bring those two datasets together, we still just don’t know what’s going to be possible.”
While Russomanno frames neurofeedback in contrast to AI, I also think of the two dovetailing. AI needs datasets to work its magic; so, too, do future sensory tech systems. As neurotech evolves, so do the possibilities of AI co-evolving with it.
A sensor platform that could expand beyond headsets
OpenBCI’s Galea is a VR and AR headset, but its interface with Varjo’s hardware is only one part of the equation. The sensor array could also be used on its own. That interests me even more when I think about a future of wearables that could eventually interact with other wearables on our bodies: a world where our everyday interactions are possibly enhanced by more advanced sensors. That’s all a long way off, but the beginnings of that future look like they’re taking root in some of the sensors OpenBCI has put together in Galea.
Right now, it’s enough of a struggle to convince people of the value of VR and AR and wearable visual tech. But improving the ways we can eventually interface with spatial computing or the real world might be part of the answer to how VR/AR could evolve into something far more meaningful…and possibly unsettling. It already feels like personal tech is developing a deeper relationship with our senses and our brains. But what I’ve seen suggests we haven’t even gotten started yet.