Less than two months ago, we said, “wouldn’t it be great if everyone could have an EEG headset and walk around controlling things with their mind?” We set out to build a space that would make this happen – that would let visitors experience what it might be like to wear a neural prosthetic. The goals were lofty, and the infrastructure that had to be put in place was not insignificant.
This is the most ambitious and public-facing hacking project I’ve worked on. The exhibit ran throughout the month of February. Here I break down some of the decisions our team (Marion Le Borgne, James Bobowski, David Silva, and me) made and the tools we used.
There are a number of consumer EEG headsets available now – Interaxon’s Muse, Neurosky, Emotiv, etc. We chose the Muse because it gave us full access to real-time streaming data: not only raw EEG, but also pre-processed frequency bands (alpha, beta, etc.). It’s also the easiest for visitors to use – putting it on is straightforward, and its dry electrodes mean we don’t need to slather visitors with gel or soak them in saline. A quick email to Interaxon was met with great excitement about the project, and we received all the Muses we needed.
CloudBrain – Storing, Analyzing, Visualizing
In writing the software for the exhibit, we wanted to build something that would live beyond the exhibit and could be used by others to better understand EEG. The methods for storing, analyzing, and visualizing the data form an open-source project we’re calling CloudBrain. Our goal is to get others on board to further build out tools that can help us better understand the brain.
For visitors who opted in, their EEG data is stored in a central database. We decided to go with Cassandra, which is an excellent fit for time-series data and is widely used for storing many types of data that follow the time-series pattern: performance metrics, fleet tracking, sensor data, logs, financial data (pricing and rating histories), user activity, and so on.
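A common way to model this kind of time-series data in Cassandra is to partition each user’s samples by a time bucket so no single partition grows without bound. The sketch below shows that keying scheme; the table layout and column names are illustrative assumptions, not CloudBrain’s actual schema:

```python
from datetime import datetime, timezone

def partition_key(user_id: str, ts: datetime, bucket_hours: int = 1):
    """Map a sample to a (user, time-bucket) partition key.

    Bucketing keeps each Cassandra partition a bounded size: at ~1 kHz
    of sensor data, one (user, hour) partition holds a few million cells
    at most, and a time-range query touches only a handful of partitions.
    """
    bucket = ts.replace(minute=0, second=0, microsecond=0)
    bucket = bucket.replace(hour=(ts.hour // bucket_hours) * bucket_hours)
    return (user_id, bucket.isoformat())

# CQL equivalent of this layout (illustrative):
#   CREATE TABLE eeg_samples (
#       user_id text, bucket timestamp, ts timestamp,
#       channel text, value double,
#       PRIMARY KEY ((user_id, bucket), ts, channel)
#   ) WITH CLUSTERING ORDER BY (ts ASC);

ts = datetime(2015, 2, 14, 13, 37, tzinfo=timezone.utc)
print(partition_key("visitor-42", ts))
```

With samples clustered by timestamp inside each partition, a query like “this visitor’s alpha band between 1pm and 2pm” becomes a single sequential read.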
The Analyzer module of CloudBrain is responsible for analyzing the collected data. It was designed to handle a very large number of metrics, and it contains an ensemble of algorithms, ranging from the simple aggregations used in the exhibit to more complex machine learning algorithms.
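To give a flavor of the simple-aggregation end of that ensemble, here is a minimal sliding-window average over one metric (say, a user’s alpha-band power). This is a sketch of the idea, not CloudBrain’s actual Analyzer code:

```python
from collections import deque

class WindowAverage:
    """Running mean over the last `size` samples of one metric."""

    def __init__(self, size: int = 256):
        # deque with maxlen silently drops the oldest sample on overflow
        self.buf = deque(maxlen=size)

    def add(self, value: float) -> float:
        """Ingest one sample and return the current window mean."""
        self.buf.append(value)
        return sum(self.buf) / len(self.buf)

# Smooth a noisy band-power stream with a 4-sample window:
alpha = WindowAverage(size=4)
for v in [0.2, 0.4, 0.6, 0.8]:
    smoothed = alpha.add(v)
print(round(smoothed, 3))
```

In the exhibit, aggregations like this are what turn a raw 220 Hz firehose into the slow-moving values a booth can actually display.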
Several visualizations have been developed with the goal of better understanding the brain. One is a radar chart of real-time EEG data across all users; in the exhibit, this is projected onto a monolithic screen in the center of the room. It was written in AngularJS by Marion and David.
The booths in the exhibit serve as other ways to visualize EEG data, whether it be through virtual reality, lights, or simple charts.
One option for an exhibit like this would have been to tie a headset to each booth and have visitors put it on and take it off each time they moved, but that would take away from the ‘neural prosthetic experience.’ So we chose a publisher/subscriber architecture: centralize all of the EEG streams (publishers), then route a particular visitor’s stream to a requesting booth (subscriber).
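Stripped to its core, the central router is just a table from booths to the visitor each one is currently watching. A minimal sketch of that routing logic, with illustrative names rather than our actual server code:

```python
class Router:
    """Central pub/sub routing table: EEG streams in, booths out.

    Publishers are headsets keyed by visitor id; each booth subscribes
    to at most one visitor's stream at a time.
    """

    def __init__(self):
        self.routes = {}  # booth_id -> user_id

    def attach(self, booth_id: str, user_id: str):
        """Point a booth at a visitor's stream (replacing any old route)."""
        self.routes[booth_id] = user_id

    def deliver(self, user_id: str, message: dict):
        """Return the booths that should receive this visitor's message."""
        return [booth for booth, user in self.routes.items() if user == user_id]

router = Router()
router.attach("vr-booth", "visitor-7")
router.attach("light-table", "visitor-7")
print(router.deliver("visitor-7", {"alpha": 0.4}))  # both booths
```

The important property is that re-pointing a booth is a single table update, so a visitor walking from booth to booth never touches the headset.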
RFID for User Identification
But how does a booth know that a new visitor is ready to interact with it? There are a number of ways this could have been done (typing a number on screen, pushing the right button, etc.). We chose RFID: in addition to the headset, each visitor carries an RFID tag for tagging into the various booths – super easy for the visitor.
We also wanted the booth developer experience to be easy. Instead of asking each developer to integrate more code, we built the RFID tagging as a completely standalone system. The Spark Core really helped here: its Wi-Fi capability meant all we needed was the Core and the RFID reader. When a visitor tags in, the Core sends the central server an instruction on whose EEG stream to route to that booth.
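Conceptually, the message a booth’s reader sends on a tag-in is tiny: which booth fired, and which tag it saw. The field names below are made up for illustration – the real wire format between the Spark Cores and our server may differ:

```python
import json

def tag_event(booth_id: str, rfid_uid: str) -> str:
    """Build the routing instruction an RFID reader sends to the central
    server when a visitor tags into a booth (hypothetical message shape)."""
    return json.dumps({
        "type": "tag-in",      # event kind
        "booth": booth_id,     # which booth's reader fired
        "rfid": rfid_uid,      # tag UID, resolved server-side to a visitor
    })

print(tag_event("vr-booth", "04:a3:1f:22"))
```

Keeping the reader this dumb is what made the system standalone: all mapping from tag UID to visitor, and from visitor to EEG stream, lives on the server.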
Rapid Prototyping with Spacebrew
Getting a publisher/subscriber system going was very simple with Spacebrew, a NodeJS WebSocket-based app. Within a week we were able to show the group a working system in which Muse data is published and a simple booth prototype subscribes.
Spacebrew also gave individual booth developers client libraries across multiple platforms (JS, Python, Processing, etc.) so they could start integrating their applications.
Production system rewritten in Scala
Spacebrew was not written for such high-traffic applications, though. Each Muse can pump out a couple thousand messages per second, and we had 20 of them! So we (and I really mean James) rewrote a fully backwards-compatible, drop-in replacement in Scala and Akka. The new system sips RAM and CPU, though it too proved susceptible to a memory leak under heavy routing, to a lesser extent than the original.
More Scalability / Robustness
Update 8/15: This has now officially been replaced with RabbitMQ, a much more robust, scalable, and well-supported system for handling real-time messaging, and it is baked into CloudBrain.
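One reason RabbitMQ suits this workload is its topic exchanges: publish each sample under a key like `eeg.<visitor>.<band>` and let each consumer bind with wildcards (`*` matches exactly one word, `#` matches zero or more). The key scheme here is an illustrative assumption, not necessarily CloudBrain’s; to show how the wildcards behave, here is a toy re-implementation of AMQP topic matching:

```python
def topic_match(binding: str, routing_key: str) -> bool:
    """AMQP-style topic match: '*' = exactly one word, '#' = zero or more."""
    def match(b, k):
        if not b:
            return not k            # both exhausted -> match
        if b[0] == "#":
            # '#' may absorb any number of remaining key words
            return any(match(b[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False            # pattern words left, key exhausted
        return (b[0] == "*" or b[0] == k[0]) and match(b[1:], k[1:])
    return match(binding.split("."), routing_key.split("."))

# A booth bound to one visitor's bands vs. a dashboard watching everything:
print(topic_match("eeg.visitor-7.*", "eeg.visitor-7.alpha"))  # True
print(topic_match("eeg.#", "eeg.visitor-12.beta"))            # True
print(topic_match("eeg.visitor-7.*", "eeg.visitor-8.alpha"))  # False
```

Under a scheme like this, re-routing a booth to a new visitor is just dropping one binding and adding another – the broker does the fan-out the Scala router used to.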
All of the machines are monitored using New Relic and set up for automatic process control using Supervisor. They are fully automated and will recover from a power failure.
We are running an 8 GB, 4-core server locally to manage the network, but have moved the pub/sub operations to a heftier 16 GB, 8-core DigitalOcean server. The Exploratorium gave us a dedicated 50/50 Mbps internet connection, so traffic load was the least of our worries.
The Muses require Bluetooth pairing. For this, we’re using an array of five machines, aptly named dendrites, to capture the Bluetooth data and forward it on to the BrainServer.
A Huge Group
Much more could be written about the Exploratorium project, but I’m limiting this post to the infrastructure work that I had my hands in. Over 30 people played a role in the project, many of them building insanely awesome booths: 3D-printed lighted flowers, virtual-reality rock levitation, EEG/heart-rate-variability correlation, EEG-reactive light tables, a brain-controlled robotic arm, 3D brain reconstructions, and fMRI algorithms. It. Was. Awesome.
More information on the rest of the exhibit at: http://explorecogtech.com/