by Florence Grist, Verne Global
23 September 2019
We’ve discovered that AI can perform methodical, analytical tasks, but does it have the potential to mimic the creativity of the human mind and produce a unique work of art such as a musical score? The creative process takes imagination; it is a challenge for even the most right-brained people to compose something brilliant that a listener has never heard before.
It is even more of a challenge for that music to contain feeling and soul – so how could a computer possibly manage such a task when it ultimately has neither?
Due to the growth in film and TV production, original musical compositions are in high demand. It can take composers years to craft a single score, yet with new technology like AI, it seems this process can be completed within a week, saving time and money while still delivering a brilliant finished product. One example of this new approach is Luxembourg-based AIVA – Artificial Intelligence Virtual Artist – which has mastered just that!
She, as AIVA is referred to, can compose unique, emotive music for all types of entertainment. The system is based on stochastic algorithms, meaning compositions are never duplicated. Yet it also relies on patterns: AIVA has read and analyzed 30,000 of history’s greatest scores, from which, via machine learning, she has learned to predict melody movement, harmony arrangement and rhythmic sequences. These predictions allow her to build a mathematical model for composing the perfect piece.
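AIVA’s actual model has not been published, but the idea of learning note-to-note patterns from existing scores and then sampling new melodies can be illustrated with a toy Markov chain. Everything here – the miniature corpus, the function names – is a simplified assumption for illustration, not AIVA’s method:

```python
import random
from collections import defaultdict, Counter

def train_markov(melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    transitions = defaultdict(Counter)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current][nxt] += 1
    return transitions

def compose(transitions, start, length, rng):
    """Generate a melody by sampling each next note in proportion
    to how often it followed the current note in the corpus."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break  # dead end: the note never had a successor in training
        notes, weights = zip(*options.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

# A toy "corpus" of note names standing in for 30,000 analyzed scores.
corpus = [
    ["C", "D", "E", "G", "E", "D", "C"],
    ["C", "E", "G", "E", "C"],
    ["D", "E", "G", "A", "G", "E", "D"],
]
model = train_markov(corpus)
print(compose(model, "C", 8, random.Random(42)))
```

Because the sampling is stochastic, two runs with different seeds produce different melodies – a miniature version of the “never duplicated” property described above, though a real system would model harmony and rhythm jointly with far richer statistics.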
But the quality of music is subjective, and it needs to suit its context. For example, ‘The Lion King’ wouldn’t have the same sentimental effect on an audience if it were accompanied by death metal! So AIVA is programmed to respond to 30 category labels, such as genre, mood and style, which further refine the algorithm’s output. Whilst AIVA can compose for content creators with little musical knowledge – for example, YouTubers requiring backing music – she can also inspire composers in their own work, or design beautiful compositions that musicians can bring to life in performance. Either way, this technological advance can be seen as a wonderful intersection of musical creativity and science.
Another recent example of AI enhancing humans’ experience with music is the emergence of AI-based music tutors. This could be extremely beneficial: a cheaper, simpler solution to learning music, almost as effective as a human teacher.
The San Francisco-based startup Kena.AI is preparing to launch a personal music tutor application, designed to teach people to pick up and master musical instruments. It could become a revolutionary platform that changes the way we acquire skills, “bridging the gap between learning from human-tutors and being self-taught.” Not only is Kena described as being able to provide clear instructions, it is designed to offer a unique coaching experience: tracking students’ progress by “listening” to their performances, creating personalized learning paths and recommending music tailored to their individual tastes.
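Kena’s internals aren’t public, but the core idea of scoring a performance against sheet music can be sketched in a few lines. Assume (hypothetically) that a pitch-detection front end has already turned the student’s audio into a list of note names; a toy progress metric then just compares that list, in order, against the expected notes:

```python
def practice_score(expected, played):
    """Toy progress metric: fraction of the expected notes that the
    student played in the correct order (greedy in-order matching)."""
    matched = 0
    i = 0  # position of the next expected note still unmatched
    for note in played:
        if i < len(expected) and note == expected[i]:
            matched += 1
            i += 1
    return matched / len(expected)

# The student nails the first three notes but fluffs the last one.
print(practice_score(["C", "D", "E", "F"], ["C", "D", "E", "G"]))  # 0.75
```

A real tutor would also weigh timing, dynamics and intonation, and would use scores like this over many sessions to shape the personalized learning path described above.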
B2B with AI
The ever-increasing abilities of AI lead us to question whether it could one day replace certain human skills. However, in the music industry, human musicians and algorithms are collaborating to experiment with new sounds and produce inspirational work, broadening our perspective on the relationship we have with contemporary technology.
In 2018, the “AI DJ Project” by the Tokyo-based AI company Qosmo held live performances during which an AI agent and a human DJ collaborated on stage. Described as “a dialogue between human and AI through music,” these events were a fascinating opportunity to see how man and machine could perform under very similar conditions: for example, the AI used the same vinyl records and turntables that the human DJ did. They played alternately, one track at a time, each tasked with the process of selecting an appropriate song and mixing it into the music so that it flowed smoothly from the previous track.
The software was trained to become proficient in three areas: music selection, beat-matching and crowd-reading. To select music, neural networks analyzed what the human DJ was playing, extracted auditory features from that track, such as beat or instrumentation, and chose another track of a similar style. For beat-matching, the AI DJ learned via reinforcement learning how to manipulate turntable speed using robotic fingers. Finally, for crowd-reading, the software used a “deep learning-based tracking technique” that inferred which tracks encouraged the audience to dance the most, informing future music selection.
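Two of those three skills can be sketched very simply. For selection, suppose each track has already been reduced to a feature vector (here a hypothetical hand-labelled pair of tempo and energy, standing in for what Qosmo’s neural networks would extract from audio); picking a stylistically similar track is then a nearest-neighbour lookup. Beat-matching, at its core, is just the speed ratio between the two tempos – the hard part the real system learned is physically achieving it with robotic fingers:

```python
import math

# Hypothetical features standing in for learned audio embeddings:
# (tempo in BPM, energy on a 0-1 scale).
library = {
    "track_a": (124.0, 0.80),
    "track_b": (126.0, 0.85),
    "track_c": (90.0, 0.30),
}

def pick_next(current, library):
    """Choose the track whose features lie closest to the one now playing."""
    tempo, energy = library[current]
    distances = {
        # Divide tempo by 10 so a few BPM don't swamp the energy term.
        name: math.dist((tempo / 10, energy), (t / 10, e))
        for name, (t, e) in library.items()
        if name != current
    }
    return min(distances, key=distances.get)

def speed_adjustment(current_bpm, next_bpm):
    """Turntable speed ratio that beat-matches the incoming track."""
    return current_bpm / next_bpm

nxt = pick_next("track_a", library)
print(nxt, round(speed_adjustment(124.0, library[nxt][0]), 3))
```

With these toy numbers the energetic 126 BPM track wins over the slow, mellow one, and the incoming deck would be slowed by a factor of about 0.984 to lock the beats. Crowd-reading is omitted here: it would require video input and a trained model, which a few lines cannot honestly fake.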
These examples of AI involvement in music production all lead to the same observation: that music, and the way we assemble it, is being rapidly transformed by the emergence of new, intelligent technology. Thanks to breakthroughs like these, we will be able to learn, compose and mix music with the assistance of software, or allow a computer to independently compose music for our own enjoyment, opening up more opportunities for the creativity and expression the arts embody.
This post was originally published on the Verne Global blog.
Florence Grist is Verne Global’s latest marketing team member. Based in the London office, Florence is working on events, social media and content. In her spare time, she enjoys music, plays piano, loves history and is currently learning Russian.
Verne Global delivers advanced data center solutions at industrial scale, allowing high performance and intensive machine learning applications to operate in an optimized environment. Founded in 2012, its Icelandic data center campus hosts HPC applications pushing the boundaries of research across a range of industries, including financial services, earth sciences, life sciences, engineering, scientific research and AI.