The Creative Compromise with AI
Guardrails on what AI can and cannot do are being considered at a regulatory level
The debate over the creative ownership of AI-influenced outputs has raged since generative AI's inception but, despite all the advancements made, we're not much closer to a resolution. This is particularly problematic in software development, where the use of AI can be hidden, accidentally or otherwise, overlooked, or simply go undetected.
To fully understand its impact, it is important to first understand the licensing playing field.
Fanning the Flames of the Ownership Debate
The difference between working on open source software (OSS) and on a commercial or proprietary product comes down to how that software is licensed. The latter is the intellectual property of an individual or company and cannot be modified by external developers; more often than not, its source code is available only to its creator.
There is a long list of licenses that govern open-source software. Some are so-called permissive licenses; others are copyleft. The GNU General Public License (GPL) family belongs to this latter subgroup. As with all open-source licenses, it is intended to guarantee the freedom to use, share and change all versions of a program.
However, the license carries a condition: any modification of the software must be released under the same license as the original. If a developer modifies a piece of GPL-licensed code, whatever comes out the other end must also be covered by that license. This guarantees that software under the GPL will remain open source forever.
In the days before AI, this was well understood, and many companies had policies on which types of open-source license were acceptable to include or copy from. This worked because code snippets usually had a traceable lineage. Today, that is no longer the case, which has created a legal minefield.
A developer might use AI for assistance, sharing a problem statement for which the AI should provide some code, and perhaps making minor tweaks to what it suggests. The problem is: where does this code come from? This is challenging on a number of levels but, from a moral and ethical point of view, it is one of the main drivers fanning the flames of the creative-ownership debate.
A Monkey called Naruto
We can chart this back to a monkey called Naruto. You may not know him by name but you'll likely have seen his face: he is famous for taking a brilliant selfie in 2011 with a wildlife photographer's camera. The picture also started a years-long legal saga in the U.S., with the Copyright Office and even Wikipedia weighing in on the issue of ownership. The upshot was that, under U.S. law, only humans can own creative works – not animals and certainly not machines.
Bringing this back to AI, the ruling tells us that anything that is not human cannot create something new. AI does not have the capacity for creativity; it can only rework existing material. Because copyright can only attach to something new, any such modification opens the door to copyright infringement.
That means that, legally speaking, machines cannot own copyright because they are not creating things but recombining things that already exist, making their output a derived work rather than a novel one.
Even the world’s most powerful wizard isn't immune. There are stories of AI regurgitating passages from the Harry Potter books as new ideas and, only last year, Stephen Fry said his voice had been taken from the Harry Potter audiobooks and replicated by AI.
This isn’t to say that AI is the source of all copyright issues. We humans have dealt with this controversy for years – for example in music, where authors take inspiration from others, most recently in Ed Sheeran's court battle over Marvin Gaye's "Let's Get It On", or in the storylines echoed from Dune to Star Wars. The difference now is that machines are involved, and they cannot be judged by the same standards purely because we have deemed that only humans are capable of innovating and creating.
The Law of Unintended Consequences
The technology world is in a state of permanent reflection regarding this matter. What is creativity? Can machines be creative and do we want them to be? Are we opening the door to Skynet and is the fear real?
Professor Stephen Hawking told the BBC in 2014 that, “The development of full AI could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
What we can gather from this is that, if we agree that machines can create, it solves today’s problem: they can create new work based on code learned from GPL-licensed software and be influenced by multiple sources, much as we are as humans. Unfortunately, this is almost the embodiment of the law of unintended consequences. We might solve copyright infringement today, but at great expense in the long run.
Balancing Innovation With Integrity
To move forward, traceability is required. Users of AI, and in this particular case, developers, need to know exactly where code originates from so they know what they’re working with.
The most common open-source licenses already require attribution but, if AI learns from projects under those licenses and outputs a piece of code that is almost a verbatim copy, the traceability and attribution are lost, which is where the problems start. Perhaps new licenses will emerge that prevent source code from being used in training, mitigating this issue, but even then attribution would remain a problem to be solved.
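As a sketch of what such traceability could look like in practice, SPDX license identifiers already give individual files a machine-readable provenance marker. The short Python audit below (the function name and the copyleft set are illustrative, not an exhaustive or authoritative list) flags files that declare a copyleft license and files with no declaration at all:

```python
import re
from pathlib import Path

# SPDX tags are a standard, machine-readable way to declare a file's license.
SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

# Illustrative subset of copyleft SPDX identifiers (assumption: extend as needed).
COPYLEFT = {
    "GPL-2.0-only", "GPL-2.0-or-later",
    "GPL-3.0-only", "GPL-3.0-or-later",
    "AGPL-3.0-only", "LGPL-3.0-only",
}

def audit_licenses(root):
    """Map each .py file under `root` to its declared SPDX license,
    flagging copyleft code and files with no declaration at all."""
    report = {}
    for path in Path(root).rglob("*.py"):
        match = SPDX_RE.search(path.read_text(errors="ignore"))
        license_id = match.group(1) if match else None
        report[str(path)] = {
            "license": license_id,              # None means untraceable origin
            "copyleft": license_id in COPYLEFT, # needs same-license release
        }
    return report
```

A report like this does not solve the attribution problem – AI-generated snippets arrive with no tag at all – but it makes the gaps visible, which is the precondition for any policy on what may be merged.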
The bigger picture is that we need to start imposing guardrails on what AI can and cannot do, and this is happening, or being considered, at a regulatory level. For example, the UK recently released the outcome of a consultation with an “approach that is strongly pro-innovation and pro-safety.” Many software foundations are drawing up their own sets of guidelines too because, as AI's creative capabilities grow, we'll have to grapple with how to balance innovation with integrity.
How can we use AI to spark human creativity while still cherishing that special, intangible touch only a person can bring?