It’s safe to say that even with convincing deep fakes causing record-label panic and a flurry of new AI tech garnering plenty of interest, we’re still at the very beginning of learning how the age of modern machine learning will impact art and pop culture.
In the latest move sure to excite some studio execs, researchers have used the neural activity of 33 people tasked with listening to 24 songs, combined with statistical modeling and machine learning, to almost perfectly predict what will be a hit tune and what will flop.
“By applying machine learning to neurophysiologic data, we could almost perfectly identify hit songs,” said Paul Zak, a professor at Claremont Graduate University and senior author of the study. “That the neural activity of 33 people can predict if millions of others listened to new songs is quite amazing. Nothing close to this accuracy has ever been shown before.”
The participants, aged 18 to 57, were fitted with photoplethysmography (PPG) cardiac sensors and listened to 24 recently released songs selected by staff at a streaming service. A song was classed as a ‘hit’ if it had received more than 700,000 streams. The selection of 13 hits and 11 flops spanned a range of genres and included songs such as Tones and I’s 2019 number-one smash Dance Monkey.
After the aural experiment, participants completed a survey about the songs, which covered aspects like whether the song was offensive, whether they’d heard it before, and whether they’d be likely to recommend it to friends.
However, key to the study was the neurophysiologic response to the songs. Capturing this small set of data from the 33 participants allowed for “neuroforecasting”: predicting population-wide responses to hits and flops without having to test a thousand sets of ears first.
“The brain signals we’ve collected reflect activity of a brain network associated with mood and energy levels,” Zak said.
What the researchers found was that when the data was processed through a linear statistical model, the success rate for predicting a hit was 69%. While this wasn’t terrible, when machine learning was applied to the same dataset, accuracy shot up to 97.2%. In fact, even when the machine-learning model assessed data from just the first minute of listening, accuracy was still 82%.
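The study itself doesn't disclose its models, but the gap between the two numbers illustrates a familiar pattern: a linear classifier can only draw a straight decision boundary through the data, while more flexible machine-learning models can capture nonlinear relationships between physiological signals. The toy sketch below, with entirely made-up "mood" and "energy" features and an invented hit rule, shows how a nonlinear model (here a simple nearest-neighbour classifier) can beat a linear threshold when the underlying pattern isn't linear:

```python
import random

random.seed(0)

# Hypothetical stand-in for the study's data: each listener response is a
# pair of signals ("mood", "energy"), and a song counts as a hit when the
# two signals disagree -- an XOR-like rule no single linear threshold fits.
def make_data(n):
    data = []
    for _ in range(n):
        mood, energy = random.random(), random.random()
        hit = int((mood > 0.5) != (energy > 0.5))  # invented ground truth
        data.append(((mood, energy), hit))
    return data

train = make_data(200)
test = make_data(100)

# Linear model: predict "hit" when the summed signals cross a threshold.
def linear_predict(x):
    return int(x[0] + x[1] > 1.0)

# Simple nonlinear model: 1-nearest-neighbour over the training set.
def knn_predict(x):
    nearest = min(train,
                  key=lambda t: (t[0][0] - x[0])**2 + (t[0][1] - x[1])**2)
    return nearest[1]

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

lin_acc = accuracy(linear_predict, test)
knn_acc = accuracy(knn_predict, test)
print(f"linear: {lin_acc:.2f}, nonlinear: {knn_acc:.2f}")
```

On this synthetic data the linear threshold hovers near chance while the nearest-neighbour model scores far higher, mirroring (in spirit only) the 69% versus 97.2% jump the researchers report.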
“If in the future wearable neuroscience technologies, like the ones we used for this study, become commonplace, the right entertainment could be sent to audiences based on their neurophysiology,” Zak said. “Instead of being offered hundreds of choices, they might be given just two or three, making it easier and faster for them to choose music that they will enjoy.”
The researchers certainly had a customer-forward spin on their work, suggesting it could be used by streaming companies to “readily identify new songs that are likely to be hits [to add to] people’s playlists more efficiently, making the streaming services’ jobs easier and delighting listeners.”
And while the study had limitations in how broad the song choice and audience selection were, it's not hard to picture a future in which music, television and film that isn't predicted to succeed won't make it past the demo stage, particularly if machine-learning models can forecast success with better than 80% accuracy after consuming only one minute of the media.
However, with an estimated 100,000 new songs uploaded online every day, it doesn't look like music fans will have their choices restricted anytime soon.
Not surprisingly, Zak added that “it is likely that this approach can be used to predict hits for many other kinds of entertainment too, including movies and TV shows.”
The research was published in the journal Frontiers in Artificial Intelligence.