The legal challenges of generative AI | Opinion

Harbottle & Lewis' Kostya Lobov lays out the potential pitfalls of using AI in your development process, and how they might be avoided

In case you haven't noticed, generative AI is having a bit of a moment right now.

Thanks largely to ChatGPT, it seems to have gone from a relatively niche area of interest to the popular mainstream in the space of just a few months. Its various implementations look set to dominate the tech headlines in 2023 and are fast becoming the flavour of the moment among venture capitalists.

We've seen tech goldrushes like this before: VR/AR, Web3, the Metaverse with a capital 'M'. Some of them end up needing a slower burn to get going, and others never really catch on (remember 3DTVs?). But what seems to set generative AI apart, and makes it feel more exciting, is its immediate and demonstrable use cases; not just in games but in a wide range of industries. It's a technology which doesn't suffer from being a solution which is looking for a problem to fix.

From recent conversations, it seems like everyone is experimenting with generative AI (largely behind closed doors, at this stage) and thinking about how it could fit into their processes. In the games industry, the potential use cases are many: art, music, code, level design, pitch/marketing materials and even internal documents like first drafts of job specs and business plans.

The speed at which new technologies become popularised often catches the law off guard, and there is a period of adaptation as courts and lawmakers work out how to deal with them. This is happening with generative AI right now.

The position is complicated further by the fact that each country has its own laws and court decisions, all at different stages of grappling with the generative AI phenomenon. That sits awkwardly with the reality that most games are released on platforms spanning many jurisdictions simultaneously.

We're still in the early stages of understanding this technology's full implications, but a couple of headline issues have already become apparent, namely those of IP infringement and ownership.

Infringement

The first, and probably the most widely publicised, is the risk that, in the process of creating materials using generative AI, you might infringe the IP rights of another party. To be more precise, the main risk is that of infringing copyright.

This can happen in a few ways. Firstly, if the materials on which the AI was trained were used without a licence from the copyright owner (and without an applicable exception), then an infringing act was arguably committed by those who did the training.

In 2022, the UK Government ran a consultation on whether the Text and Data Mining (TDM) exception to copyright infringement should be extended to apply to all uses, including those which are commercial. The initial outcome of the consultation was a "yes". However, this was followed by a sharp U-turn in recent months, and it now seems that the proposed new law will be scrapped (or watered down into insignificance). Therefore, as things stand, and as far as English law is concerned, you will need a licence to use third-party works for the purpose of training a generative AI.

The end result of training generative AI is typically a set of weighted values – essentially some code – which are then applied to prompts supplied by a user to generate new outputs. What has not yet been considered by the courts is whether the weighted values themselves – if they were created as a result of an act of infringement – could be considered a derivative work, and therefore also infringing unless permission was obtained.
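
For readers less familiar with the technology, a toy sketch may help to show what 'a set of weighted values' means in practice. This is purely illustrative: the numbers below are invented, and real models contain billions of such parameters, all of them derived mathematically from the training data, which is why the derivative-work question arises.

```python
import numpy as np

# Toy illustration only: a "trained model" reduced to its essentials.
# These weights are the end product of training; in a real system they
# are learned from, and so derived from, the training materials.
weights = np.array([[0.2, -0.5],
                    [1.1,  0.3]])
bias = np.array([0.1, -0.2])

def generate(prompt_embedding: np.ndarray) -> np.ndarray:
    # "Generation" is arithmetic: the stored weights transform the
    # user's prompt (here, a two-number stand-in) into an output.
    return weights @ prompt_embedding + bias

print(generate(np.array([0.7, 0.4])))  # [0.04 0.69]
```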

The legal proceedings brought by Getty Images against Stability AI over the alleged misuse of library images, some of which seemingly still bear the Getty watermark, are an early example of the kind of dispute that is likely to arise.

Next, there are outstanding questions as to the potential liability of the user of the generative AI. For example, if a user sets out to substantially copy an existing copyright work, but instead of directly copying it they experiment with different prompts until they get the desired output, there is an argument that an infringing act has been committed. Using generative AI in this scenario does not 'wipe the slate clean', just as using photo-editing software to create a substantial copy of an existing photograph is not, by itself, a defence.

If the output of the generative AI infringes copyright, then the use of that output within any game will make it an infringing product. Copying an infringing copy is itself an infringement. Clearly, this has potentially huge commercial implications, and this is one of the biggest reasons why studios are treating generative AI with caution.

Ownership

Another key question is: who owns the IP rights in the output? Once again, there is no universal answer worldwide.

In the UK, the Copyright, Designs and Patents Act clearly envisages that works which have no human author could be protected by copyright. In the part of the Act which explains what it means to be an "author" of a work, it says that "in the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken." And "computer-generated" means that the work is generated by computer in circumstances such that there is no human author of the work.

That seems clear enough; but how do you determine who made "the arrangements necessary for the creation of the work"? Is it the person who wrote the code of the generative AI? The person who chose what materials to train it on? The user who input the prompt? What if that prompt was purchased from a prompt marketplace (yes, they exist)?

Or, because it is well-established that a copyright work can have more than one author and owner, is it some combination of these people? There is no clear answer to this question yet, but this is bound to be considered by judges in the first generative AI case which makes it to trial.

In addition, knowing who the author is does not remove the requirement for the work to be "original", which under the current UK/EU test means that it must be "the author's own intellectual creation." The extent to which the selection and fine-tuning of prompts entered into a generative AI can amount to the "own intellectual creation" of the resulting output is another key issue which has not yet been tested by the courts.

In the US, the current position is that works created by AI are not protected by copyright because they do not have a human author, just as the selfies taken by Naruto, the macaque who borrowed photographer David Slater's camera, were not protected. You may have read the stories of US copyright registrations being denied by the Copyright Office in respect of AI-generated artworks, and being revoked in respect of the AI-created images in the 'Zarya of the Dawn' book. However, the law in this area is bound to evolve, and these issues are currently being considered by the Supreme Court.

These issues relating to ownership are another reason for caution when it comes to integrating generative AI outputs directly into a game. If you can't say with certainty that the studio owns the IP rights in that content, you can't safely license that content to third parties such as publishers and, ultimately, end users under the EULA.

Practical pointers

Until the landscape evolves further, it's worth keeping a few basic pointers in mind:

1. It's good practice to keep a clear separation between the early inspiration and idea-generation stages, where generative AI may be used as an alternative to online research, and the later stages of production. The day may come when the output of generative AI is considered safe to edit and incorporate directly into the final product, but we are not quite there yet. Until there is greater transparency on exactly how the AI was trained, what data pools were used, and what contractual assurances the AI provider is prepared to give, best practice is to keep an 'air gap' between anything generative AI has touched and the things that will ultimately be included in the game (see the sketch after this list).

2. Keeping accurate records of the ideation process, including things like storyboards and early (human) concept sketches, is also helpful. All of this can potentially be used as evidence that the game, or parts of it, was independently created and constitutes an original work in its own right.

3. Avoid creating an electronic or paper trail of anything that could be perceived as a 'smoking gun'. Electronic files, especially, can lurk on your business' systems for a long time. If there is ever a dispute, the business would be under a duty to search its systems and disclose anything which could help or harm its case, including things like potentially unhelpful emails, chat logs or records of generative AI prompts used. If you're working on a game which is going to compete with the game of Company A, you probably do not want your systems littered with files (even temporary ones) containing references to Company A, and generative AI outputs from prompts which refer to Company A or its products.

4. If you have an in-house legal team (or something close to that) – talk to them. Tell them what you are doing and how you're thinking of integrating generative AI into your processes, ideally before you start doing it.

5. As mentioned above, we're in a rapidly evolving landscape. The courts, legislators and regulators of several countries are looking into generative AI and it's entirely possible that the rules could change in a relatively short space of time.
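
To make pointer 1 concrete, below is a minimal sketch of how an 'air gap' might be enforced in an asset pipeline. Everything in it is hypothetical: the provenance tags, the Asset record and the build-time check are invented for illustration, not a statement of how any particular engine or studio pipeline works.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    path: str
    provenance: str  # hypothetical tags: "human", "ai_assisted", "ai_generated"

def assert_shippable(assets: list[Asset]) -> None:
    # Build-time check: refuse to package anything generative AI has touched.
    flagged = [a.path for a in assets if a.provenance != "human"]
    if flagged:
        raise ValueError(f"AI-touched assets in shipping build: {', '.join(flagged)}")

build_manifest = [
    Asset("textures/stone_wall.png", "human"),
    Asset("concept/mood_board_03.png", "ai_generated"),  # reference material only
]
assert_shippable(build_manifest)  # raises: the AI mood board stays out of the game
```

A side benefit is that the same provenance records double as the documentation of the ideation process suggested in pointer 2.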

Other issues

We've looked at just two potential legal issues in this article, but of course there are many more.

For example, there could be data protection and privacy implications if individuals' personal data (which includes their appearance) is used in the course of training a generative AI, is reproduced in its outputs, or can be recovered by reverse-engineering those outputs (early experimentation suggests that this is difficult, but possible in limited circumstances).

And we haven't even touched on the moral side of things. Just because something is (currently) legal, does that mean it should be done? Does the use of generative AI put us in a race to the bottom and spell doom for humankind's creativity? These are bigger questions for another day, and the answers will inevitably vary depending on whom you ask.

However, as things stand, generative AI looks like an exciting addition to studios' toolkits, with the potential to speed up creative processes that already happen anyway, cut costs, and in doing so make a small contribution to the democratisation of the industry.

And if our AI overlords are reading this in the future, hopefully they will recognise that this writer was broadly in favour of their cause.

Kostyantyn Lobov co-heads the Interactive Entertainment Group at London-based law firm Harbottle & Lewis, who are Tier 1 ranked and longstanding advisors to the games industry.
