Insights August 8th, 2018
In the Artificial Intelligence Bulletin – A.I. & The Future of Music we examine just how much impact A.I. and machine learning are having in the field of music.
In many ways, we already know that A.I. is having a big effect on our relationship with music.
Amazon and Spotify are constantly guessing, and in most cases quite accurately suggesting, songs, albums, or artists that we might like based on our previous listening.
This machine-learning-driven predictive analysis of our musical tastes is getting more and more accurate as the ravenous machines hoover up ever more data about our past likes and listening behaviour.
However, just how far will A.I. reach into musical creativity and composition?
Let’s take a look, moving from the simpler applications of A.I. in the music industry to, well, the quite simply mind-blowing…
Amazon AI Predicts Users’ Musical Tastes Based on Playback Duration
On the simpler end of the A.I.-and-music scale, this report describes an extension of the Amazon and Spotify approach to detecting human musical tastes.
It is interesting news for digital assistants, which are becoming increasingly popular ways to play music in the home.
Basically, due to the way that music is selected, consumed, and engaged with through Amazon Echo, Google Home, and other digital assistants, it’s not that easy for these little data miners to determine which songs we prefer over others.
That’s why they are now using the average time we spend playing a song as a key metric for how much we like it. Sounds simple? They do have to account for some variables, though, such as when we cut the music short for a telephone call or other interruption.
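The heuristic described above can be sketched in a few lines. This is only an illustration of the idea, not Amazon’s actual implementation: the data shape, the interruption flag, and the cutoff threshold are all assumptions.

```python
def preference_score(sessions, min_fraction=0.1):
    """Estimate how much a listener likes a song from play sessions.

    sessions: list of (seconds_played, song_length, interrupted) tuples.
    Sessions flagged as interrupted (e.g. a phone call cut the music)
    are ignored, as are very short accidental plays below min_fraction.
    Returns the average fraction of the song played, or None if there
    is no usable signal.
    """
    fractions = []
    for seconds_played, song_length, interrupted in sessions:
        fraction = seconds_played / song_length
        if interrupted or fraction < min_fraction:
            continue  # this play tells us nothing about taste
        fractions.append(min(fraction, 1.0))
    if not fractions:
        return None
    return sum(fractions) / len(fractions)

# A song usually played to the end scores near 1.0; one usually
# skipped early scores low. The interrupted play is excluded.
score = preference_score([(180, 200, False), (200, 200, False), (60, 200, True)])
```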
Read more at Venture Beat
MIT’s New AI System Can Identify Instruments Within Music
Okay. This is starting to get way cooler.
M.I.T.’s ‘PixelPlayer’ is an A.I. learning system that can listen to a music video and extract the sound of individual instruments.
You can then put these individual layers into your digital software or hardware and alter the sound of each layer of musical instrument.
This has huge potential for remastering old, poor-quality recordings.
It’s also pretty exciting for producers who want to take elements of tracks and repurpose them in new ones.
Read more at Musically
Computer Creativity: When AI Turns Its Gaze to Art
If the last report about M.I.T. piqued your A.I. music curiosity, here’s where things get even more interesting…
There has been lots of research concerning which jobs will most likely be under threat from the A.I. revolution, and its wide-ranging socio-economic implications.
What if we added a few new jobs to the list that were initially considered to be in the safe creative zone…?
How about music producers and composers, or artists, in general?
On the face of it, you might think, surely not.
Art is a form of human communication, and without the human actor, there can be no art.
But if you reduce it to its essence, it makes sense that A.I. and machine learning could initially serve as an important tool in art and music composition, and could eventually eradicate the need for human producers altogether.
You feed in data from previous musical or artistic compositions, and the machine learning predicts common elements or styles to use in creating new versions.
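As a toy illustration of that idea, here is a tiny Markov-chain melody generator: it learns which note tends to follow which in a corpus of past melodies, then produces a new sequence from those patterns. Real systems such as Magenta use neural networks; this sketch, with its made-up three-note corpus, is only an analogy for “predicting common elements from past compositions”.

```python
import random
from collections import defaultdict

def train(melodies):
    """Count which note tends to follow which across the corpus."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions[current].append(following)
    return transitions

def generate(transitions, start, length, seed=0):
    """Walk the learned transitions to produce a new melody."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    notes = [start]
    for _ in range(length - 1):
        options = transitions.get(notes[-1])
        if not options:
            break  # dead end: no known continuation
        notes.append(rng.choice(options))
    return notes

# Illustrative corpus of two short melodies.
corpus = [["C", "E", "G", "E", "C"], ["C", "E", "G", "C"]]
new_melody = generate(train(corpus), "C", 8)
```

The generated melody only ever uses transitions seen in the corpus, which is the essence of the “learn common elements, then recombine” approach described above.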
This is essentially what Douglas Eck, who created the Magenta Project at Google, is teaching machines to do.
“It’s your idea, and then what the computer is doing via the machine learning is generating some new possible endings for you” – Douglas Eck, Creator of Magenta Project at Google
Google Created an AI-based, Open Source Music Synthesizer
From the Magenta Project at Google comes this AI-based, open-source synthesizer, NSynth.
Also, not to be missed, check out this beatbox battle between human, Reeps One, and machine, Reeplicator AI.
Marvel at the creative possibilities!
Taryn Southern’s New Album is Produced Entirely by AI
Now we arrive at the exclamation point!
What we’ve been describing above is already here…
The musician Taryn Southern has created a complete album of 12+ songs made only with A.I. composition and some of her own tweaking.
The co-contributors to her album are not John, Paul, George, or even Ringo, but rather four nascent music A.I.s: Amper Music, IBM’s Watson Beat, Google’s Magenta, and AIVA.
They were used individually throughout the album on separate songs.
Each A.I. has its own intricacies, modus operandi, benefits, and limitations but the message is clear:
“You can literally make music with the touch of a button.” – Taryn Southern, Musician
How do you feel about that?
Inspired by the new opportunities to make music, or saddened by the death of something that until now was quintessentially human?
Here’s one of Taryn’s music videos:
Read more at Digital Trends
Finally, we share a video from a live audio-visual experience at Future Camp YVR 2016, an un-conference organized by Nikolas Badminton:
‘Do Androids Dream of Electronic Beats?’ The Future of Music & The Singularity | Musical Innovations
Nikolas is a world-leading Futurist who drives world leaders to take action in creating a better world for humanity. He promotes exponential thinking along with a critical, honest, and optimistic view that empowers you with knowledge to plan for today, tomorrow, and the future.
Contact him to discuss how to engage and inspire your audience. You can also see more of Nikolas’ thoughts on his Futurist Speaker VLOGs as he publishes them in this YouTube playlist.
Please SUBSCRIBE to Nikolas’ YouTube channel so that you don’t miss any as they come up. You can see more of his thoughts on LinkedIn, Twitter, and bookmarked research on Tumblr.
Read previous bulletins and posts from Nikolas Badminton: