Hacking your brain with OpenBCI and PsychoPy


I love coding and I also study the brain, so why not combine the two? In this post, I show how you can hack your brain using code and the amazing, low-cost hardware from OpenBCI to record and manipulate your brain signals. I started my neuroscience journey back in 2014, and since then I have always wanted to explore the brain with affordable EEG. This should be a required lab for every cognitive neuroscience student and researcher, as it offers a hands-on way to explore many neuroscientific questions. I only bought the OpenBCI board recently and have been using it ever since. In this blog post, I report my first explorations, which will lay the groundwork for future posts about my journey with #brain_hacking.

But what’s the deal with BCI?

This is a legitimate question: why should we care about communicating with the brain at all? There are many reasons, and here I will mention a few BCI applications in the hope of convincing you. The first that comes to mind is using BCI for assistive and prosthetic purposes. BCI tools such as cochlear and retinal implants, artificial limbs, and deep brain stimulation technologies are already helping millions of people around the world.

Restoring lost brain functions is one of the most important motives behind BCI technology. There are, however, many other applications that excite people, such as augmenting brain function via neurofeedback, using the power of thought alone to control your favorite device or play video games, and many others (check the brain-computer interface entry on Wikipedia if you want wider coverage of possible applications).

P300

The P300 is a very salient event-related potential: a positive deflection in the EEG that peaks roughly 300 ms (well within the first second) after the subject sees something they care about. It has been used in many innovative ways, such as lie detection and typing with thought alone, making it a viable tool for enabling paralyzed patients to communicate with their thoughts.

Experimental Design

The experimental design: two grating images of different spatial frequencies were used as stimuli. Each was presented for 0.5 s, followed by an inter-trial interval of 3 s. A third condition (not shown), consisting of just the background, served as a control. Each condition was presented 50 times.

The design of my experiments is pretty straightforward. They all involve showing different images in random order while recording brainwaves over the occipital and temporal areas. In this post, I used grating images (see the image above) that consist of multiple black bars at different spatial frequencies (more or fewer bars). These images are very popular in vision research for reasons beyond the scope of this post. Each condition was repeated 50 times, resulting in 150 presentations (spatial frequencies of 3 and 12, plus a no-image control). Each image was presented for half a second, followed by a 3 s inter-trial interval (ITI) during which a ‘+’ sign was presented, as in the sketch below.
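Here is a minimal PsychoPy sketch of this paradigm. The window settings, grating parameters, and stimulus sizes are my own assumptions rather than the exact code I ran (see the GitHub repository for that):

```python
# Minimal sketch of the paradigm: 3 conditions x 50 repetitions, random
# order, 0.5 s stimulus followed by a 3 s ITI with a '+' fixation sign.
from random import shuffle
from psychopy import visual, core

win = visual.Window(size=(800, 600), color='grey', units='height')

# Two square-wave gratings (black bars) plus a blank control condition.
# Spatial frequencies and size here are illustrative assumptions.
gratings = {
    'sf3':  visual.GratingStim(win, tex='sqr', sf=3,  size=0.8),
    'sf12': visual.GratingStim(win, tex='sqr', sf=12, size=0.8),
}
fixation = visual.TextStim(win, text='+')

conditions = ['sf3', 'sf12', 'blank'] * 50  # 150 trials in total
shuffle(conditions)                          # random presentation order

for cond in conditions:
    # Stimulus (or just the background for the control) for 0.5 s.
    if cond != 'blank':
        gratings[cond].draw()
    win.flip()
    core.wait(0.5)

    # 3 s inter-trial interval showing the fixation sign.
    fixation.draw()
    win.flip()
    core.wait(3.0)

win.close()
core.quit()
```

In a real run you would also push an event marker at each `win.flip()` so the EEG recording can be time-locked to stimulus onset.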

Results

I honestly didn’t know about the P300 until I saw this plot (the plot shows only the first channel, but the exact same pattern is observed in all channels). This positive ramp is present in every experiment I ran, across a wide variety of stimulus types (I will explore some of those in future posts). Notice how the control condition shows no such deflection, while both experimental conditions (where actual images were shown) do. One thing I did not expect is the second positive ramp around 400 ms for the orange line (higher-frequency bars) but not for the blue line. This makes it much easier for machine learning algorithms to distinguish the two. Indeed, a very simple logistic regression classifier achieved about 52% accuracy (under cross-validation) in distinguishing the three classes, well above the ~33% chance level for three balanced classes.

Neural response to the three conditions: the blue line is the average neural response while seeing grating images with 3 bars, the orange line the same while seeing grating images with 12 bars, and the green line is the control condition where no image is shown. The shaded regions show the standard error. Notice the two ramps when seeing an image versus the flat response when seeing none.
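A rough sketch of how the ERP averages and the classifier accuracy could be computed, assuming the recording has already been cut into a trials-by-samples NumPy array with one label per trial. The array files, epoch window, and baseline length are illustrative assumptions, not my exact analysis code:

```python
# Sketch: per-condition ERP averages and a cross-validated classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 200  # Ganglion sampling rate in Hz

# epochs: (150 trials, n_samples) from one channel, time-locked to onset;
# labels: 0 = 3-bar grating, 1 = 12-bar grating, 2 = no-image control.
epochs = np.load('epochs.npy')   # assumed pre-epoched data
labels = np.load('labels.npy')

# Baseline-correct each trial with the 100 ms preceding stimulus onset.
baseline = epochs[:, :fs // 10].mean(axis=1, keepdims=True)
epochs = epochs - baseline

# Average per condition: these are the three curves in the figure above.
erps = {cond: epochs[labels == cond].mean(axis=0) for cond in (0, 1, 2)}

# Simple logistic regression on the raw epoch samples, scored with
# 5-fold cross-validation; chance for three balanced classes is ~33%.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, epochs, labels, cv=5)
print(f'Mean CV accuracy: {scores.mean():.2f}')
```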

Here, I showed that low-cost hardware (it costs $322 to get all the equipment) can give you a very high-quality EEG signal. Indeed, OpenBCI keeps a running list of scientific publications that used OpenBCI for data collection. I plan to pursue further experiments and share their results on this blog.

Finally, a word of thanks to the OpenBCI team and community for their incredible effort in making neuroscience and BCI hardware and software tools more accessible to the general public, and to the Neurotech@Berkeley team for their amazing course and software that I used to build the experiments.

Check my GitHub repository for the code used in this post.

More details on the technical setup: I used OpenBCI’s Ganglion board, which offers 4 channels. The electrodes were attached to (approximately) O1, O2, T1, and T2, covering both sides of the occipital and temporal areas. I used Node.js to connect to the board and stream the data (via the lab streaming layer) to a Python script that processes it and stores the recordings in a text file; a sketch of that loop is shown below. All of those tools are adapted from the Neurotech course labs. Alongside the recording, I used PsychoPy to design and run the experiment.
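For illustration, here is a minimal sketch of such a recording loop, assuming the Node.js bridge publishes the Ganglion samples as an LSL stream of type 'EEG'. The stream type and output file name are my assumptions, not the course's exact code:

```python
# Sketch: pull EEG samples from an LSL stream and log them to a text file.
from pylsl import StreamInlet, resolve_byprop

# Find the EEG stream on the local network and open an inlet to it.
streams = resolve_byprop('type', 'EEG', timeout=10)
inlet = StreamInlet(streams[0])

with open('recording.txt', 'w') as f:
    try:
        while True:
            # Each sample is a list with one value per channel
            # (4 channels on the Ganglion), plus an LSL timestamp.
            sample, timestamp = inlet.pull_sample()
            f.write(f'{timestamp},' + ','.join(str(v) for v in sample) + '\n')
    except KeyboardInterrupt:
        pass  # stop the recording with Ctrl+C
```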
