AI and advertising – What producers need to know

What are the legal implications of using generative AI tools in the production of your content?

Category: Insight | Media
Published: 13 November 2024
Recently I had the pleasure of speaking on a panel hosted by Advertising Producers Aotearoa: “AI is here to stay, use it to make Creative Production slay”, alongside industry experts in VFX, post production, agency creative and talent management.

The commercial benefits of using AI to generate ad creative and copy are obvious. My role was to talk to the legal implications of using generative AI tools in the production of content and what producers need to be thinking about.

Here are the key takeaways from the points we discussed.

Who owns your AI-generated content? Is it you?

Intellectual property ownership of content is always a matter of concern for content producers and their clients.

But can AI-generated content even attract intellectual property protection at law? This differs between jurisdictions. For example, in the United States works created solely by AI generally can’t be protected by copyright.

Stephen Thaler, an inventor who is now somewhat infamous for various IP claims he has made, recently tested this. In November 2018, Mr Thaler filed an application with the US Copyright Office to register copyright in an artwork called “A Recent Entrance to Paradise” that was autonomously created by a computer algorithm. In denying Mr Thaler’s application, the Copyright Office restated its opinion that the US Copyright Act provides protection only to works created by human beings.

Mr Thaler then brought proceedings against the Copyright Office, contesting the human authorship requirement and urging that AI be acknowledged as an author (where it otherwise meets authorship criteria), with any copyright ownership vesting in the AI’s owner. In August 2023, the United States District Court for the District of Columbia ruled that artwork generated entirely by an artificial system absent human involvement is not eligible for protection under the US Copyright Act – US copyright law protects only works of human creation.

The US Copyright Office has confirmed that works created partly by technological tools – including AI tools – might be eligible for copyright protection, provided that a “human had creative control over the work’s expression”. So, if a human author arranges or modifies AI-generated material, the human-authored aspects may still be copyrightable provided they are sufficiently creative (which would be a case-by-case assessment depending on how the AI tool was used). In any case, Mr Thaler has appealed the District Court’s decision.

Unlike the US (and many other overseas jurisdictions), the New Zealand Copyright Act 1994 does contemplate the creation of works purely by computer – so a work created using an AI tool could potentially be protected by copyright so long as the other requirements are met.

Under New Zealand law, in the case of a literary, dramatic, musical or artistic work that is computer generated, the author (i.e. the person who has created the work) is the person by whom the arrangements necessary for the creation of the work are undertaken. So, in the context of AI-generated work, the critical question becomes who “made the arrangements necessary” for the creation of the work?

While this issue hasn’t been tested in the New Zealand courts, keeping a complete record of all AI inputs/outputs on a project may help prove that you made the arrangements necessary for the creation of the work – and that therefore you own it.

The terms applicable to the use of AI tools are also important. These terms can vary on ownership of output generated by the tools, with ownership or extensive use rights remaining with the tool provider in some cases. Similarly, if you are engaging third parties to create content and those third parties use AI tools to provide services to you, then it’s important to ensure that your agreements with those third parties also address the use of AI tools and who owns the output.

Your risk of infringement

At present, there is a risk of copyright infringement in using AI tools to generate content.

There is a tidal wave of ongoing litigation globally against companies (including Microsoft, GitHub, Stability AI, Meta and OpenAI) alleging copyright infringement – both at the training stage (training models by scraping existing content) and at the output stage (outputs that could be unlawful copies or derivative works). Many of these cases have been narrowed through successful motions to dismiss, leaving only claims of direct copyright infringement based on unauthorised use and copying for training purposes – which appears to be emerging as the decisive legal issue in these cases. Another unsettled legal issue is who is liable if the output infringes – can the user be liable as well as the tool provider?

Usually, the terms of service with the tool provider exclude all liability for outputs. There is therefore some risk for users of generative AI tools whose models were trained on copyright works without permission: an output that is substantially similar to a work in the training data could infringe the copyright of that work’s author.

Another infringement risk to be aware of in the US market concerns the right of publicity, which protects against unauthorised commercial use of a person’s name or likeness. AI tools trained on vast quantities of images of well-known individuals can intentionally or inadvertently violate the right of publicity by exploiting names, voices, photographs or likenesses to generate outputs that are digital replicas of identifiable individuals.

Can an indemnity protect you?

Indemnity has a specific legal meaning, but it’s useful to think of it as a shield that protects you when a trigger event occurs. That is, if the trigger event happens, the person who gives me the indemnity effectively acts as my shield and protects me from any loss or damage I may suffer because of that trigger event.

In relation to AI-generated content, the trigger event is generally a claim that the content infringes intellectual property or breaches someone’s privacy.

In practical terms, let’s say I ask X to create some content for me. Usually (in the absence of AI tools being used), I would expect X to promise me that the content X creates for me will not infringe anyone else’s IP and that if it does, then X will indemnify me (i.e. be my shield). This is standard practice and content creators are comfortable with this, as they have control over what they create: either because it’s their original work or because they obtain the relevant authorisations for third party content to be included in the work.

Where X uses an AI tool to generate that content, a different risk profile arises. In the case of a generative AI tool, the model or algorithm underlying that tool will have been trained on a set of data – which, in the case of publicly available tools, may be taken from the internet or elsewhere (depending on the tool). This creates an infringement risk in relation to the output (as discussed above) – i.e. the tool’s output may replicate someone else’s work. As a result, where tools have been trained on public or non-proprietary data, the tool provider will not provide any assurances that the output of that tool will be non-infringing.

So, if X does not get this assurance from the tool provider, can X give me the assurance (and the indemnity) that they would normally provide regarding non-infringement in AI-generated content? There is a clear risk here for X, which may mean that an indemnity is not agreed to.

There are sometimes ways to mitigate the risk so that X can be comfortable indemnifying me and their other clients. For example, some tool providers offer an indemnity to their users (in this case X) because they have trained their tools on proprietary or licensed data – so they can be confident that the output will not infringe.

What is “ethical” AI?

It is key for any business that its use of AI aligns with its values. So, before launching into using AI, think about how your values should shape its use and what governance you want to introduce around it.

Your starting point should be an AI policy. You may already have policies that AI use can sit under, but if not, consider putting a dedicated policy in place. This is something we did at Hudson Gavin Martin – we have open-sourced that policy and you can find it here.

The use of AI also naturally gives rise to questions about the future of work in an industry – so thoughtful change management is required when introducing AI tools in a business, particularly when workers may be worried about the impact of AI on their jobs.

What can producers do to navigate the legal implications of AI?

Both the opportunities and the risks of an AI tool depend on its context:

• The nature of the use case;

• The nature of the tool; and

• The nature of the data, e.g. any time the tool is being used to process personal information and/or confidential information, the risks increase.

You can make an assessment at the start of any project about where the risk sits based on the context. For example:  

• An internal use case may be less risky than a use case where the content will be used in the media.

• A tool trained on proprietary data will be less risky than one trained on publicly available data sets.

• One tool provider may be willing to provide protections that another tool provider will not.  

You (or your lawyer) will need to read the fine print. For example, we have seen terms of use where the tool provider can opt in to training the AI tool on your data, with no right for you to terminate. Permitting your data to be used to “train” the tool could lead to your proprietary material (and/or confidential information) being used to inform the AI tool’s responses to future prompts. It also increases your information security risks.

Bear in mind that the law around AI (including as it relates to data, privacy, and intellectual property) is only just emerging; you will need to periodically review your approach to the use of AI tools, including your contracts and your “Use of AI” policy, to ensure that any new legal or ethical issues are addressed.

If this discussion raises questions about your business’ use of AI-generated content, please get in touch.
