Photo by Amy Baran Belonio
Jun 15 2016

Do Robots Prefer Digital or Analog Music?

This article was originally published in Reverb magazine on June 8, 2016.

A music journalist gets into a car with a record producer and a gaming executive. (This is not the setup for a joke.)
As they curled their way up a mountain outside Silicon Valley, the conversation spun around to the topic of analog vs. digital music. (Analog recording captures the variations in air pressure produced by sound as a continuous electrical signal, while digital recording samples that signal and stores it as numbers.) The record producer piped up, “Remember that question about whether robots would listen to analog or digital music?” The gaming executive chuckled and bantered about how AIs (artificial intelligences) should “naturally prefer” digital music because robots are digital beings. Meanwhile, the record producer teased that even a semi-intelligent robot should be able to detect the superior sound quality of analog recordings. While the conversation quickly spun off in another direction, this quirky question followed the music journalist like a stubborn echo. It opened the door to philosophical questions like whether humans were actually programmed to prefer certain kinds of music.
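
If that distinction feels abstract, here’s a rough illustration of my own (Python with NumPy, using an imaginary 440 Hz tone) of what “converting sound into numbers” actually looks like:

```python
import numpy as np

# A stand-in for an "analog" signal: a 440 Hz sine wave. In the analog
# world this would be a continuously varying voltage on tape or vinyl.
duration = 0.01                      # seconds of sound
sample_rate = 44_100                 # CD-quality sampling, samples per second

# "Digital" recording: measure the wave 44,100 times per second and
# round each measurement to one of 65,536 levels (16-bit audio).
sample_times = np.arange(0, duration, 1 / sample_rate)
samples = np.sin(2 * np.pi * 440 * sample_times)
digital_signal = np.round(samples * 32767).astype(np.int16)

print(f"{len(digital_signal)} numbers stand in for {duration} seconds of sound")
print(digital_signal[:10])
```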

Fifty years ago, attempting to answer any “sci-fi” question about robots would have started with some sort of planetary understanding of life on Mars. But today, if we really want to know whether an AI would prefer digital or analog, we probably need to understand how AI inventors program for…preference. I stumbled upon a website called Quora. (Quora is known for offering “the best answer to any question.”) While no one on Google or Quora directly posed the question “Would a robot prefer digital or analog music?” I read a response from Aaron Hosford about “AI preferences” that served as a good starting point. Hosford is a musician and an AI programmer who claims to have written programs that actually make music. On Quora, Hosford answered a question about how all forms of intelligence [humans and machines] have preferences. He elaborated, “Preferences are intrinsic to motivation. Without motivation, a machine is nothing more than a stimulus/response mechanism. In other words, if a machine never makes choices, it is not intelligent.”

Motivated to get an answer to this bizarre question, I reached out to Hosford and asked: “Would a robot prefer digital or analog music?” Also, “Would it be possible to program a robot to prefer the sound of analog or digital music?”

Hosford promptly responded. He confirmed that it wouldn’t be possible for an AI to inherently prefer anything; preference is both programmed and learned over time. However, he explained, “It would indeed be possible to program a robot to prefer analog or digital music, even limited to current technology. If I were going to build such a machine, I would start by using deep learning to train an ANN (Artificial Neural Network) to correctly categorize music samples as either analog or digital. This would give the robot the ability to distinguish the two forms of music.” Hosford went on to describe how, in addition to programming the AI to recognize the difference between analog and digital, he could then apply a reinforcement learning algorithm: if he wanted his robot to develop an appreciation for analog, he would consistently expose it to analog music and deliver a reward signal every time the AI selected it. Hosford added, “The AI could be explicitly programmed with preferences or the AI could develop an emergent preference” over time. But perhaps more compelling than an emergent preference would be an AI whose reward signal is tied to human satisfaction.
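
Hosford didn’t share any code, but a stripped-down sketch of the recipe he describes might look something like this. (The sketch below is my own, in Python with PyTorch; the random tensors stand in for real audio features extracted from labeled analog and digital clips, and the little “reward” function is a deliberately crude stand-in for a full reinforcement learning setup.)

```python
import torch
import torch.nn as nn

# Stand-in data: in a real system these would be audio features
# (spectrograms, etc.) extracted from clips labeled analog or digital.
features = torch.randn(1000, 128)        # 1,000 clips, 128 features each
labels = torch.randint(0, 2, (1000,))    # 0 = analog, 1 = digital

# A small feed-forward network that learns to tell the two apart.
classifier = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(classifier(features), labels)
    loss.backward()
    optimizer.step()

# The second half of the recipe: a crude reward signal. Each time the
# trainer "rewards" the robot for picking analog, its taste drifts that way.
preference = {"analog": 0.0, "digital": 0.0}

def reward(choice, signal=1.0, learning_rate=0.1):
    preference[choice] += learning_rate * (signal - preference[choice])

for _ in range(100):
    reward("analog")                     # consistent exposure plus reward

print(max(preference, key=preference.get))   # -> "analog"
```
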
So, for a moment, let’s forget about analog and digital.
Hosford invited me to “imagine an AI programmed to only care about making humans happy.” What if you had your very own robot that either selected songs or maybe even created songs that made you happy? Or maybe you could program it to satisfy feelings of aggression. Imagining such a creation, Hosford shared, “I would let [the AI] prefer music that it believed contributed to human happiness. Its preferences would derive from its beliefs about the relationship between observable features of the music and human moods. If the machine was exposed to lots of people really enjoying a particular type of music, this might have a significant effect on its beliefs, and therefore on its preferences. An AI with indirectly derived preferences like this would have to be relatively sophisticated – capable of reasoning or at least modeling its environment in some way – for secondary preferences to develop.”
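
To make the idea of indirectly derived preferences a bit more concrete, here’s a toy model of my own (not Hosford’s code; the feature names are invented): the machine keeps a running belief about which musical features seem to make listeners happy, and “prefers” whatever those beliefs score highest.

```python
import random

# Each song is described by a few observable features, and the machine
# keeps a weight for how much each feature seems to contribute to happiness.
weights = {"tempo_fast": 0.0, "acoustic": 0.0, "major_key": 0.0}

def observe(song_features, listener_happiness, lr=0.05):
    """Nudge beliefs toward features present in songs people visibly enjoyed."""
    predicted = sum(weights[f] for f in song_features) / len(song_features)
    error = listener_happiness - predicted
    for f in song_features:
        weights[f] += lr * error

def preference(song_features):
    """The machine 'prefers' songs it believes will make people happy."""
    return sum(weights[f] for f in song_features)

# Expose the machine to lots of people enjoying acoustic, major-key songs...
for _ in range(500):
    observe(["acoustic", "major_key"], listener_happiness=0.9)
    observe(["tempo_fast"], listener_happiness=random.uniform(0.2, 0.6))

# ...and a secondary preference emerges from its beliefs.
print(preference(["acoustic", "major_key"]) > preference(["tempo_fast"]))  # True
```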

Unlike the popular software that recommends music based on playlists and purchasing habits, what Hosford suggested is more comparable to the movie “Big Hero 6.” In the film, Baymax (the AI healthcare provider) scans his patient’s serotonin and hormone levels and ultimately asks, “Will doing that make you happy?”

Baymax scanning patient from the movie, Big Hero 6

Sure enough, this sort of technology already exists. Melomics’ music streaming service, @life, is designed to select music depending on your surroundings and activity. According to the product’s website, while driving, @life will select songs “according to the speed as estimated by GPS, relaxing music will play while immersed in dense traffic, changing to activating music when zooming the highway.” And while trying to fall asleep, @life will play tracks that “will get more and more relaxing, depending on the body movements, disappearing after falling asleep.” As you wish…
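
Melomics hasn’t published @life’s internals, so the sketch below is purely my own guess at the shape of the logic: a rule that maps the listener’s context to the kind of music to stream.

```python
def pick_music(speed_kmh: float, in_dense_traffic: bool, asleep: bool) -> str:
    """Map the listener's context to a musical 'energy' level,
    loosely mimicking the behavior @life describes."""
    if asleep:
        return "silence"                 # fade out once the listener is asleep
    if in_dense_traffic:
        return "relaxing"                # calm tracks while stuck in traffic
    if speed_kmh > 90:
        return "activating"              # energetic tracks at highway speed
    return "neutral"

print(pick_music(speed_kmh=110, in_dense_traffic=False, asleep=False))  # activating
print(pick_music(speed_kmh=15, in_dense_traffic=True, asleep=False))    # relaxing
```
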
And this is where it gets a little, well, existential. If humans can program AIs to appreciate music, were humans programmed to appreciate music?
1.61803398875…
That number is known as the Golden Mean: the ratio two quantities are in when the larger is to the smaller as their sum is to the larger. Most people don’t even know they respond to it, sometimes on a daily basis. The Golden Mean (also known as the Golden Ratio, and closely tied to the Fibonacci sequence) operates aesthetically. It can be seen in nature’s patterns, like the spiraling seeds in the middle of a sunflower. It instinctively tells us the perfect place to position a painting on the wall based on where we’re standing. Its proportions have even been applied to famous architecture.
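
(For the mathematically curious, the Fibonacci connection takes only a few lines of Python to see: the ratio of each Fibonacci number to the one before it closes in on 1.61803398875…)

```python
# Ratios of consecutive Fibonacci numbers converge on the Golden Mean.
a, b = 1, 1
for _ in range(25):
    a, b = b, a + b

print(b / a)                 # 1.618033988749895...
print((1 + 5 ** 0.5) / 2)    # the exact value: (1 + sqrt(5)) / 2
```
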
So what about sound? Is there such a thing as a Golden Mean in music? In the NOVA episode “The Great Math Mystery,” jazz musician Esperanza Spalding demonstrates how Pythagoras and the Greeks discovered three musical ratios that were most pleasing to humans: the octave, the fifth and the fourth. Spalding plucks notes on her upright bass so listeners can hear how an octave is basically the first two notes of “Somewhere Over the Rainbow.” (In vibrational terms, that’s a frequency ratio of 2:1.) A fifth can be heard in the first two notes of “Twinkle Twinkle Little Star,” a ratio of 3:2. And a fourth is the beginning of “Here Comes the Bride,” a 4:3 ratio.
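
Those ratios translate directly into frequencies. A quick back-of-the-envelope check (starting from the standard tuning reference of A at 440 Hz):

```python
base = 440.0                  # A above middle C, in Hz

octave = base * 2 / 1         # 880.0 Hz   ("Somewhere Over the Rainbow")
fifth  = base * 3 / 2         # 660.0 Hz   ("Twinkle Twinkle Little Star")
fourth = base * 4 / 3         # ~586.7 Hz  ("Here Comes the Bride")

print(octave, fifth, round(fourth, 1))
```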

For a more contemporary example of how humans might have been programmed to prefer certain types of music, we can look to the Australian comedy band Axis of Awesome. Their musical montage “4 Chords” has received over 4 million views on YouTube. Their claim is that “the most popular songs from the past 40 years use the exact same four chords.”

Granted, the Axis of Awesome’s theory solely focuses on what Western culture has deemed “popular.” And, really, those “popular songs” were deliberately served to the general public by music industry leaders and radio station programmers. But before we sneer with disdain at the industry for trying to program us to prefer certain music, let’s bring the focus back to that seemingly insignificant question of whether robots would prefer digital or analog music. The line between creator and creation has been blurring ever since AIs started making their own music.
At least 20 years ago, computer software programs were helping people compose music digitally. Even non-musicians could plot out notes on a series of scales, assign a few instruments and listen to their very own orchestral composition. Now, musical AIs (known as “artificial composers”) are writing and performing autonomously. Two such AIs are named “Iamus” (after the figure in Greek myth who understood the language of birds) and “Melomics109.” An article on studentpulse.com recently reported, “Iamus takes a self-generated piece of music and mutates it, checking to see how the mutations conform to its prescribed rules [both musically stylistic rules and biological, human limitations]. The best products are allowed to mutate further, allowing the most fit to continue until all the conditions are met and whole pieces of music are formed.” While Iamus produces modernist classical music, Melomics109 is designed to create “contemporary popular music” for the masses.
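
The article doesn’t include any code, but the mutate-and-check loop it describes is a classic evolutionary algorithm. Here’s a toy version of my own (nothing like Iamus’s real system, and with made-up “rules”) that evolves an eight-note melody toward a simple fitness test:

```python
import random

NOTES = list(range(60, 72))          # one octave of MIDI note numbers

def fitness(melody):
    """Toy 'prescribed rules': prefer small melodic steps and ending on C."""
    total_leap = sum(abs(a - b) for a, b in zip(melody, melody[1:]))
    return -total_leap + (10 if melody[-1] == 60 else 0)

def mutate(melody):
    """Randomly change one note, the way Iamus is said to mutate its material."""
    copy = melody[:]
    copy[random.randrange(len(copy))] = random.choice(NOTES)
    return copy

# Start from a random melody; keep a mutation only if it scores at least as well.
melody = [random.choice(NOTES) for _ in range(8)]
for _ in range(2000):
    candidate = mutate(melody)
    if fitness(candidate) >= fitness(melody):
        melody = candidate

print(melody, fitness(melody))
```
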
Of course, the general public has yet to recognize a “popular song” made by Melomics109. But who knows: a day might come when the music journalist gets into a self-driving car with the record producer and the gaming executive, and the journalist asks, “Who’s your favorite robot musician?”

If you’d like to join the discussion, look for us on Facebook, Instagram and Twitter at Music Makes You Think.

Julie Simmons
jsimmonstrapp@gmail.com

Julie Simmons is an American music journalist and creator of Music Makes You Think. She's written for numerous national publications, including the Chicago Tribune, UTNE Reader, Paste, Harp, Reverb, DRUM! and Tom-Tom magazine. Throughout her music writing career, she's interviewed hundreds of musicians and industry leaders, including Josh Dun (twenty one pilots), Peter Gabriel, Suzanne Vega, Neyla Pekarek (f. The Lumineers), M. Ward (She & Him), and Jeff Bridges (Academy Award-winning actor/singer). Simmons was named an Industry Icon by Hit Like a Girl for interviewing female drummers. Her musical career started at the University of Notre Dame, where she organized and hosted concerts for Tracy Chapman, The Indigo Girls, Gin Blossoms, Blues Traveler, They Might Be Giants and others. At the university's 26th annual Sophomore Literary Festival, she was invited to stand in for Pulitzer Prize winner Edward Albee and read an excerpt from one of her own short stories. Simmons also spent weekends deejaying global music at WSND-FM and spent weekdays working for a doctor in Notre Dame's Psychology Department. Her collaborative research on "Timbre Reliance in Nonmusicians' and Musicians' Memory for Melodies" was published in Music Perception (University of California Press). After graduation, and in the midst of a successful advertising career, Simmons began moonlighting as a music journalist. Then, after undergoing an 11-hour surgery for a full spinal fusion, she launched Music Makes You Think (MMYT) and took up the drums. To date, the MMYT Facebook group has posted more than 1,700 daily questions about music. Simmons is turning her daily questions into a conversation card game. Most recently, she was invited to host the music podcast Campfire Songs.
