by Florence Grist, Verne Global
27 September 2019
In my first blog discussing the role of AI in the music industry, which you can check out here, I explored how software is transforming the way music is produced. That led me to explore how AI technology is learning to perform music, and to share my own opinions, speaking as a musician, on the exciting potential but also the definite limitations of AI.
Rocking out with robots
The idea of a world controlled by robots has often been an inspiration for science fiction films, books and stories, and now that fantasy has become a reality for music. New Zealand composer Nigel Stanford released his hit music video ‘Automatica’ on YouTube in 2017: a jaw-dropping display of robots playing musical instruments and eventually making them explode… Not how all musical performances should end! He described the process as an exploration into “what creativity means as more and more processes become automated.”
Behind the scenes, Stanford worked with Kuka Robotics and their partner Andy Flessas to program the robots’ musical ability. For this they used Flessas’ ‘Robot Animator’ plugin, which allowed them to “animate a physical robot and generate a program that instructs the real robot to move in exactly the same way.” The demand for this kind of stage display could rise in the future, meaning music concerts could host more performances featuring both people and machines, or even shows entirely led by robots… boy bands are so last year!
Robots are acquiring all the skills needed to master musical instruments, from reading the first few notes to rocking out in performance. An earlier project in 2012 by the Tokyo Metropolitan University created a useful handheld device named Gocen: a “handwritten notation interface… for learning music”. In essence, this learning tool allows you to scan a written score, which is read by the device and played back in real time through an accompanying speaker. Similarly to the app Kena.AI that I explored in my first blog, projects like these are improving both our ability to learn music and the capabilities of the algorithms themselves.
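To make the idea concrete, here is a minimal sketch of the score-to-sound pipeline Gocen hints at. This is not Gocen’s actual implementation; it assumes the “scanned” score has already been reduced to a list of note names, and simply turns those notes into audible tones in a WAV file.

```python
import math
import struct
import wave

A4 = 440.0  # reference pitch in Hz
# Semitone offsets of the natural notes relative to A in the same octave
NOTE_OFFSETS = {"C": -9, "D": -7, "E": -5, "F": -4, "G": -2, "A": 0, "B": 2}

def note_to_freq(name, octave=4):
    """Equal-temperament frequency of a natural note, relative to A4 = 440 Hz."""
    semitones = NOTE_OFFSETS[name] + 12 * (octave - 4)
    return A4 * 2 ** (semitones / 12)

def render(notes, path="score.wav", rate=44100, dur=0.4):
    """Write a mono 16-bit WAV file playing each note in sequence."""
    frames = bytearray()
    for name in notes:
        freq = note_to_freq(name)
        for i in range(int(rate * dur)):
            # Simple sine tone at half amplitude
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# Hypothetical "scanned" score: a C-major arpeggio
render(["C", "E", "G"])
```

The real device of course does far more, reading handwritten notation optically, but the core loop is the same: recognise a symbol, map it to a pitch, and synthesise it in real time.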
The music industry is embracing AI as a brilliant accessory for the production and performance of music. From a commercial point of view, services like Spotify and Amazon Music have modernized consumer access to audio. In 2018, CD sales dropped by 23 percent according to the BBC, indicating that customers were rapidly switching to streaming services instead. AI-powered features such as Spotify’s ‘Discover Weekly’ are constantly providing us with personally appealing music and playlists that match our mood – so we can avoid aimlessly wandering around HMV!
As for production of music, companies like AIVA, Amper Music and Brain.FM have all created intelligent systems that are capable of composing, reducing time and costs. And as the engineering industry becomes more developed, robots such as those produced by Kuka Robotics are being programmed to play music, which is both useful and visually interesting.
Question of ownership
If AI can compose on an industrial scale, that leads to an increasingly controversial issue: who rightfully owns the music? There are arguments following every step of the way, from the composition of the first few notes to the streaming of the track online. Arguably, it belongs to the designers and programmers of AI software – but if the scores the computer produces are perfected without any human input, can those people truly be considered artists, without having contributed any musical ability to the finished product?
Alternatively, does the computer itself own the music? The position of AI in copyright law is up for debate: for example, in US copyright law, the word “human” does not appear once. Consequently, it is unclear whether AI can be held accountable for producing music of a very similar style to an existing artist.
Furthermore, if an AI system is trained via studying the songs of a single artist, for example, Beyoncé, as an article from The Verge suggests – “is Beyoncé owed anything?” According to legal experts, the answer is no, despite that seemingly being unfair on the original composer.
If not the computer, does the customer using the system own the music created? The composition is made unique by their direct input, such as the combination of filters or categories like genre, mood and style that they choose, so surely they played a part in the making of the piece? This problem could impact the development of automation in the music industry. As AI systems learn more human abilities, it seems wrong to excuse them from the rules, however ambiguous those rules may be.
What about feeling?
This leads us to the question: what about the impact of AI-generated music? Despite exploring many brilliant examples of AI enhancing the music industry during my research, I do believe there are limitations. Sure, AI software can create music with intricate technical accuracy, perfect harmonies and typically correct structure, but what’s it all for? If, in the art world, the sole purpose of each framed piece was to ‘look pretty’, what more could art possibly teach us beyond the conventional meaning of beauty?
Instead, exhibition-goers seek thought-provoking art, work that conveys raw emotion, expression and an insight into the creator’s mind when their idea first blossomed. We can look at music in exactly the same way. When we listen to music, we have an opportunity to empathize with the composer’s wandering mind. I believe that a computer simply cannot offer that insight to us because it lacks the personal experience that grants music its personality.
Music has been an important part of my childhood, and my passion for piano continues today; I would argue that a musician’s ability to play with feeling will always surpass his or her technical expertise. Observing a six-year-old piano maestro seize the trophy I wanted at my local music festival in 2010, I realised that I simply couldn’t match his outstanding accuracy.
However, I discovered that I could do something that, maybe, he could not: I could play with feeling and deliver my music as if it were a story, as opposed to an impressive robotic challenge. Feeling is a gift that no amount of technological genius can replace. If it could be replaced, all the music of the present and future would be AI-generated: a quicker, cheaper, more reliable solution. Yet I don’t think this will ever happen.
With that in mind, I can say for sure that whilst AI is a revolutionary advance in technology for the 21st century, sometimes surpassing the intelligence of its own creators, ultimately, nobody can express us better than ourselves.
This post was originally published on the Verne Global blog.
Florence Grist is Verne Global’s latest marketing team member. Based in the London office, Florence is working on events, social media and content. In her spare time, she enjoys music, plays piano, loves history and is currently learning Russian.
Verne Global delivers advanced data center solutions at industrial scale, allowing high performance and intensive machine learning applications to operate in an optimized environment. Founded in 2012, its Icelandic data center campus hosts HPC applications pushing the boundaries of research across a range of industries, including financial services, earth sciences, life sciences, engineering, scientific research and AI.