It is by now old news that scientists can read out aspects of your internal conscious experience by recording neural signals. But a new study of this kind involving music is still a remarkable achievement. Before we get to the latest work, I want to give a brief history of this area of research.

The Beginning: Reading Out Images from the Visual System

The first blockbuster result of this kind came in 2008 from Kendrick Kay, Jack Gallant, and colleagues at the University of California, Berkeley, who used noninvasive functional magnetic resonance imaging (fMRI) to guess which of a thousand images a subject was viewing. The basic experimental idea they exploited goes back to research by Yukiyasu Kamitani and Frank Tong, and separately by groups led by Jim Haxby and John-Dylan Haynes in the early 2000s.

In the 2008 Berkeley study, the researchers had advance access to the images and built a model of how they thought blood flow in the brain would respond to each one. Then, when they measured blood flow while subjects viewed the images, they simply had to find the image whose modeled response best matched the observed pattern.
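
To make that matching step concrete, here is a toy sketch in Python. It is not the authors' code or data: the number of candidate images, the voxel count, and the noise level are all invented for illustration. The point is simply that each candidate image comes with a predicted response pattern, and the decoder picks the candidate whose prediction best correlates with what was actually measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 candidate images and 200 "voxels" (brain measurement points).
# In the real study the predictions come from an encoding model fit to each
# subject; here they are just random patterns, invented for illustration.
n_images, n_voxels = 5, 200
predicted = rng.standard_normal((n_images, n_voxels))

# Pretend the subject viewed image 3: the measured response is that image's
# predicted pattern plus measurement noise.
true_image = 3
measured = predicted[true_image] + 0.5 * rng.standard_normal(n_voxels)

# Identification: pick the candidate whose predicted pattern correlates best
# with the measured pattern.
correlations = [np.corrcoef(p, measured)[0, 1] for p in predicted]
guess = int(np.argmax(correlations))

print(f"decoder's guess: image {guess} (true answer: image {true_image})")
```

The real study fit its encoding models carefully to each subject and worked with far more candidates, but the final identification step is the same in spirit.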

The researchers were able to correctly identify more than 90 percent of the images seen by the subjects based solely on blood flow measurements in the brain. This largely came down to the fact that different images cast quite different patterns of light and dark onto the eyes, and these produce corresponding spatial patterns of higher and lower neuronal activity in the visual brain (a phenomenon called retinotopy). I am making this sound easy, but it was quite an accomplishment.

Brain Reading 2.0

Since then, the same basic technique has been applied noninvasively to many other types of perception and cognitive activity, including silently recited speech, words seen as text, and even dreams.

In most cases, scientists have provided subjects with some set of stimuli for which they developed a model of the predicted response ahead of time. The researchers' task was then to guess which stimulus was presented when, based only on measured brain responses, usually fMRI measurements of blood flow. With many stimuli to choose from, the chances of a lucky guess are low: with a thousand candidate images, for example, a random guess would be right only 0.1 percent of the time.

This approach has been largely successful. Perhaps most importantly, there is a clear "ground truth" that researchers are trying to recover, namely the physical image, sound, or word, so it is straightforward to check how well the decoding worked.

However, in the case of decoding dreams, which leave no physical trace, researchers had to be more creative. A 2013 study led by Kamitani matched patterns of brain activity recorded just before participants woke up to the verbal reports they then gave of their dreams' contents. The decoding was based on fMRI activity in the visual brain, using models trained while the same people viewed images from one of 20 categories of imagery (male, computer, car, etc.); EEG (surface electrodes placed on the scalp) was used to tell when participants had fallen asleep and should be awakened. Kamitani's team was able to guess whether a certain type of imagery (e.g., a male) appeared in a dream with an accuracy of about 75 to 80 percent.
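
In broad strokes, the strategy is to train a decoder on brain activity recorded while a person is awake and looking at pictures from known categories, then apply that decoder to activity recorded just before an awakening and compare its output with the dream report. Here is a rough sketch of that logic in Python, using a generic off-the-shelf classifier and entirely made-up data rather than anything from the actual study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Made-up data: activity patterns (100 features) recorded while a person
# views images from two categories, labeled 0 ("male") and 1 ("car").
# Real studies use many more categories and real fMRI voxels.
n_trials, n_features = 200, 100
labels = rng.integers(0, 2, n_trials)
templates = rng.standard_normal((2, n_features))
viewing_activity = templates[labels] + rng.standard_normal((n_trials, n_features))

# Train a decoder on the waking, image-viewing data.
decoder = LogisticRegression(max_iter=1000).fit(viewing_activity, labels)

# Apply it to activity recorded just before an awakening, then compare the
# guess with what the person reports having dreamed about.
pre_waking = templates[1] + rng.standard_normal(n_features)
guess = decoder.predict(pre_waking.reshape(1, -1))[0]
print("decoder's guess about the dream content:", ["male", "car"][guess])
```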

In some cases, such as work by Tom Mitchell at Carnegie Mellon University, another pioneer in this area, the models can extrapolate beyond the stimuli they were trained on. In Mitchell's work, brain responses to word-picture pairs that were never part of the training set could still be predicted and identified, though not nearly as well as the pictures decoded in Kendrick Kay et al.'s 2008 study.

Brain Reading in Preoperative Epilepsy Patients

This brings us to the latest study, led by Robert Knight and colleagues, also of UC Berkeley. Here the innovation was to decode signals recorded by electrodes placed directly on the brains of preoperative epilepsy patients.

The patients, who suffer debilitating, recurring, drug-resistant seizures, are fitted by neurosurgeons with very sensitive electrodes to locate the part of the brain that generates the seizures (the "focus"). Patients stay in the hospital for days or weeks while surgeons gather data. The goal is to find the offending chunk of brain and eliminate it in a surgical operation. (To get a sense of how this is done, see this paper.) The extensive data collection helps ensure that the smallest possible piece of brain is excised.

While patients and their doctors wait for seizures to occur, scientists can run experiments that take advantage of a rare opportunity: recording directly from the human brain, which is normally not an option. Patients, in turn, get a chance to contribute to scientific understanding and to keep busy during an otherwise long and difficult hospital stay.

Making all this work is a huge undertaking, but in the past, it has been worth it. The famous "Jennifer Aniston" cell paper by Itzhak Fried and colleagues, which found neurons in the brain that respond to the actor's face, was one of the first to make use of preoperative patients.

Brain Waves Sing Pink Floyd

Knight and colleagues recorded neuronal signals in the temporal lobe, often a source of seizures, as well as in the frontal lobe. Patients passively listened to the Pink Floyd song "Another Brick in the Wall, Part 1," which is about three minutes long. Activity was recorded with more than 2,500 electrodes placed in grids on the patients' grey matter.

In the recordings, they looked for rhythmic variations in electrical activity that corresponded to the song's driving, repeated refrain "All in all it was just a brick in the wall..."—indeed, the song was seemingly chosen because of this unmistakable and clearly repeated pattern. After a great deal of signal processing, they then used a machine learning system (read: AI) called a multi-layer perceptron to produce a best guess of what the neuronal signals "sound like."
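
For readers who want a sense of what "decoding with a multi-layer perceptron" means in practice, here is a minimal sketch in Python using scikit-learn. The numbers are fabricated (the electrode count, the spectrogram bands, and the simple relationship between them are all invented), and the real study's pipeline is far more elaborate, but the core idea is a regression from moments of neural activity to the corresponding slices of the song's spectrogram.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Fabricated stand-ins for the real data: at each time point we have activity
# from 100 electrodes and a 32-band slice of the song's spectrogram. Here the
# two are related by a simple (invented) linear mapping plus noise.
n_timepoints, n_electrodes, n_bands = 2000, 100, 32
mapping = rng.standard_normal((n_electrodes, n_bands))
neural_activity = rng.standard_normal((n_timepoints, n_electrodes))
spectrogram = neural_activity @ mapping + 0.1 * rng.standard_normal((n_timepoints, n_bands))

# Train on roughly the first 80 percent of the song, hold out the rest.
split = int(0.8 * n_timepoints)
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(neural_activity[:split], spectrogram[:split])

# "Reconstruct" the held-out stretch of the song from neural activity alone,
# and check how well it matches the actual spectrogram.
reconstructed = mlp.predict(neural_activity[split:])
corr = np.corrcoef(reconstructed.ravel(), spectrogram[split:].ravel())[0, 1]
print(f"correlation between reconstructed and actual spectrogram: {corr:.2f}")
```

Roughly speaking, that is the sense in which the network produces a "best guess" of what the signals sound like: it predicts the song's spectrogram from brain activity, and the predicted spectrogram can then be rendered as audio.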

What It All Means

The success of this kind of research in reading out internal experiences can be jarring, especially if you haven't been following its development over the years.

It may feel like we are on the verge of dangerous territory where someone could read out what you are thinking without your knowledge. Luckily, all existing work involves brain imaging methods (fMRI, EEG, etc.) that cannot be done surreptitiously, or at a distance. The Pink Floyd work additionally required major brain surgery that involved temporarily opening up a section of the skull. Moreover, all of this work requires a highly tailored model of each subject's brain, as well as knowing the stimuli to be "read out" in advance.

For me, the important question is: What does this tell us about brains?

Shortly after he published his seminal work on decoding pictures in the brain, I discussed with Kendrick Kay the deeper meaning of his results. He was clear that his work was, at heart, an engineering problem: the question was, can we do it? It was not primarily about discovering new phenomena. The main neurobiological knowledge he and his colleagues exploited—retinotopy in the visual cortex—had been known for a century. It was nevertheless a huge achievement and one that has spawned much subsequent work, including the Pink Floyd paper.

To me, the latest results are in some ways similar. They do suggest that the right hemisphere is particularly important for music perception. Other analyses performed by the authors provide additional insights. But much depends on the largely inscrutable neural network tool that was used to produce the read-out of the song. And the work was highly tailored to this one song; it may not generalize beyond it.

Nevertheless, it is impressive work, and it may well lead to future insights into how we perceive sound and music.