November 22, 2024

BEST EVER: Researchers take a song from Pink Floyd and reproduce it using…

The findings of this research may prove extremely beneficial for people with speech difficulties, such as those affected by stroke or muscle paralysis.

While previous studies have extracted words and images from brainwaves, this is the first time research has reconstructed music from the mind.

In August 2023, researchers at the University of California, Berkeley recreated a Pink Floyd song from 1979 by decoding the electrical signals in the listeners’ brainwaves.

The corresponding study was published in the journal PLOS Biology. A few decades ago, it seemed impossible to read people’s minds. However, with the advent of “neural decoding,” neuroscientists have grasped how to decode what’s going on in people’s brains just by monitoring their brainwaves.

To conduct the study, lead researchers Robert Knight and Ludovic Bellier examined the electrical activity of 29 patients with epilepsy who were undergoing brain surgery at Albany Medical Center in New York.

Pink Floyd’s song “Another Brick in the Wall, Part 1” was played in the operating room while the patients underwent surgery.

These patients had numerous electrodes placed directly on the surface of their brains, which captured the electrical activity occurring there while they listened to the song.

Later on, Bellier used artificial intelligence algorithms to reconstruct the melody from this electrical activity. The final song has a fascinating and unsettling quality.

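Decoding of this kind generally works by training a regression model that maps recorded brain activity onto the song’s audio spectrogram, which can then be converted back into sound. Purely as an illustration, and not the authors’ actual pipeline, the sketch below shows that idea with a ridge-regression decoder on synthetic stand-in data; every name, array size, and signal in it is hypothetical.

```python
# Illustrative sketch only: decode an audio spectrogram from iEEG-like
# features with ridge regression, then score the reconstruction.
# All data here are synthetic stand-ins, not recordings from the study.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical sizes: time samples, electrodes, spectrogram frequency bins.
n_samples, n_electrodes, n_freq_bins = 2000, 64, 32

# Synthetic "brain activity" per electrode, and a spectrogram that depends
# on it linearly plus noise, standing in for the real recordings and song.
neural = rng.standard_normal((n_samples, n_electrodes))
true_weights = rng.standard_normal((n_electrodes, n_freq_bins))
spectrogram = neural @ true_weights + 0.5 * rng.standard_normal((n_samples, n_freq_bins))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.2, random_state=0
)

# Linear decoder: one weight per electrode for each frequency bin.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)
predicted = decoder.predict(X_test)

# Correlation between predicted and actual values in each frequency bin,
# a common way to score how well the spectrogram was reconstructed.
corrs = [np.corrcoef(predicted[:, k], y_test[:, k])[0, 1] for k in range(n_freq_bins)]
print(f"mean reconstruction correlation: {np.mean(corrs):.2f}")
```

In the real study, the inputs would be the patients’ intracranial recordings and the decoded spectrogram would be inverted back into audible sound; synthetic data is used here only to keep the example self-contained and runnable.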
Knight told The Guardian, “It sounds a bit like they’re speaking underwater, but it’s our first shot at this.”

This reconstruction demonstrated the feasibility of recording and translating brain waves to capture the musical elements of speech as well as the syllables. In humans, these musical elements, known as prosody (rhythm, stress, accent, and intonation), carry meaning that the words alone do not convey.

Because these intracranial electroencephalography (iEEG) recordings were made directly from the surface of the brain, this research was as close as one could get to the auditory centers.

This could prove to be a wonderful thing for people who have difficulty speaking, such as those affected by stroke or muscle paralysis. “It’s a wonderful result,” said Knight, per the press release. “It gives you the ability to decode not only the linguistic content but some of the prosodic content of speech, some of the affect. I think that’s what we’ve begun to crack the code on.”

Speaking about why they chose only music and not voice for their research, Knight told Fortune that it is because “music is universal.” He added, “It preceded language development, I think, and is cross-cultural. If I go to other countries, I don’t know what they’re saying to me in their language, but I can appreciate their music.” More importantly, he said, “Music allows us to add semantics, extraction, prosody, emotion, and rhythm to language.”

As Bellier put it to Fortune, “Right now, technology is more akin to a keyboard for the mind. You can’t read your thoughts from a keyboard; you have to push the buttons. It also produces a kind of robotic voice; there’s definitely less of what I call expressive freedom.”

In addition to identifying new brain regions linked to the detection of rhythm, such as a thrumming guitar, the study also revealed that the right hemisphere of the brain is more responsive to music than the left. “Language is more left brain. Music is more distributed, with a bias toward right,” stated Knight in a press release. “It wasn’t clear it would be the same with musical stimuli,” Bellier continued. “So here, we confirm that that’s not just a speech-specific thing, but that it’s more fundamental to the auditory system and the way it processes both speech and music.”
