A glimpse into the future as artificial intelligence makes waves in the music industry
Artificial Intelligence (AI) is a term we have all become familiar with in recent years, but many of you may not be aware of its revolutionary potential in the music industry. In fact, AI might just change the way we produce and consume music forever.
“We believe that Amper Music is going to usher in the greatest creative revolution in the history of humanity”, Drew Silverstein tells me. He is the CEO and co-founder of Amper Music, an AI-based composer, performer, and producer that lets any user create a unique track in a matter of seconds. Start-ups are investing serious money in experimenting with AI music, and Amper is among those leading the way.
Silverstein’s vision is to give everyone in the world equal opportunities to create music, regardless of their background, training, or access to other resources. With the click of just a few buttons, you will be able to create a song in the genre and style you want, choose the instruments, and adjust the tempo of the track. The resulting composition will be yours to use for whatever your heart desires (provided you pay a small fee).
Sounds great, doesn’t it? Silverstein believes:
“AI will provide a really powerful creative and collaborative tool for musicians, artists, and composers to help them further their music making.”
Among those pioneering the use of AI technology in music composition is American singer-songwriter Taryn Southern. She is "collaborating" with five different algorithms, including Amper, to create the world’s first album to be entirely composed and produced with AI. With this album, appropriately titled I Am AI, set for release this month, Southern is not concerned with the skepticism surrounding this topic.
After the release of the first two singles from her LP, Break Free and Life Support, she found that people were more accepting of her work. “I anticipated more backlash than what actually occurred, which was in fact a lot of curiosity, fascination, and interest in the production of the track. People were surprised about the quality of the music given that it was composed entirely with AI.”
The AI software essentially becomes a ‘24/7 assistant producer’. As Southern explains: “I think that the process is faster as I’m not waiting on another human to give me feedback or to make a change based on my feedback. It’s just me and the algorithm. Whenever I have free time to go back in and make changes to a song it’s a very quick process. I just continue making adjustments until I get a song that I’m really happy with!” The time and money this saves are certainly worth noticing, not to mention the benefits to a musician’s creative process.
Another artist utilizing this emerging technology is Toronto-based DJ/producer Martin Bernie, AKA Pusher. He plans to produce a concept album set in a future, 15 to 20 years from now, in which the face of music has completely changed. He is experimenting with the AI music software Magenta, a Google Brain research project exploring the role of machine learning in creating art and music.
He has been training the software by feeding it Musical Instrument Digital Interface (MIDI) files. Pusher explains: “Magenta has no pre-conceptions of what music is when you first start using it. It learns from the MIDI files I feed it and gradually learns to associate them with music by spotting patterns like notes and rhythm changes. It then writes new music from that.”
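The process Pusher describes, learning which patterns tend to follow which in a pile of note sequences and then writing new music from them, can be illustrated with a toy sketch. Magenta's real models are neural networks; the first-order Markov chain below is only a hypothetical stand-in (all names are mine, not Magenta's) that captures the same basic idea:

```python
import random

# Toy illustration, NOT Magenta's actual method: a first-order Markov
# chain counts which note tends to follow which, then generates a new
# melody by walking those learned transitions.

def train(note_sequences):
    """Count note-to-note transitions across training melodies
    (each melody is a list of MIDI note numbers)."""
    transitions = {}
    for seq in note_sequences:
        for a, b in zip(seq, seq[1:]):
            transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Produce a new melody by repeatedly sampling a learned successor."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:          # dead end: no successor ever observed
            break
        melody.append(rng.choice(choices))
    return melody

# Two short training "MIDI" melodies (C-major motifs as note numbers).
model = train([[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]])
print(generate(model, start=60, length=8))
```

The generated melody only ever uses transitions seen in the training data, which is the sense in which the model "has no pre-conceptions of what music is": everything it writes is recombined from the patterns it was fed.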
However, he admits, for someone already up and running in music, Magenta’s limitations quickly become apparent. Currently, the software can read only MIDI files, not audio formats such as MP3.
He believes that the industry is changing rapidly, and that DJs and music selectors are likely to be the first ones significantly affected. When asked for his prediction, Pusher tells me: “In the near future, when you release a song you may release a full mix like we do now, but I think people are also going to release stems to play into remix culture. This is because AI is going to make it very easy to separate vocals, guitars, and other things from the track.”
What he is anticipating is in fact already happening and could move into the public eye imminently. Another UK-based start-up, fittingly called AI Music, is exploring what happens when you apply the latest AI techniques to music creation. This team, however, is creating software that edits and transforms pre-existing songs.
“The idea is to change the way that we consume music. Take a Rihanna track, for example: thanks to AI Music you will be able to change the song’s genre yourself. You will be able to turn a drum and bass tune into an R&B tune, and so on…” David Ronan, one of the company’s lead audio engineers, explains. The company is already demonstrating its app in beta to potential investors, showing them different remixes of the Jackson 5 hit I Want You Back.
But what happens when AI starts creating music without any human intervention? It is that thought that stuck with me following my conversation with Pusher. We could soon witness music’s demise as an art form, coming to view it instead as a science or a product. Pusher elaborates: “People don’t just listen to music for the creative journey. Often, they listen to music for more functional purposes. This could be music they can exercise, meditate, or sleep to. I think people will use music as a support to optimize their performance in life.”
Pusher links the potential change in the way we consume music to the introduction of Amazon Echo, Google Home, and Siri into our daily lives. “These devices are going to learn what you like so well, they will be able to artificially generate music for you in real time as you go throughout your day”, he says. “It will be customized to match what you are doing at any given time. That’s the scary potential reality. It’s the thing nobody wants to talk about!”
AI has marked its arrival in the music industry, and those involved in the software’s development firmly believe it is here to stay. If that is the case, then we have two options. We can embrace it as a collaborative partner in our music-making, helping millions more express their creativity through the art form. Or, as Pusher points out, we can let it take the reins, leaving it to create functional music to suit our needs, with no art, soul, or passion left.
While the latter forecast is a very bleak outlook on the future of the music industry, it is a possibility. I still lean towards the more hopeful and optimistic view envisioned by Silverstein: “We as a society and a culture will forever value artistic creation because that’s just part of who we are.” But Pusher makes a fair point. Our listening habits have changed, feeding the instant-gratification society we now live in. Perhaps artificially generated music will prove to be the thing we always needed but never knew we wanted.
Written by Joshua Coase
Edited by Amelie Varzi