IP
March 21, 2024

Deepfakes and style mimicking – Should New Zealand adopt a right of publicity?

Canadian visual artist Sam Yang creates distinctive anime-style, almost photo-realistic, artworks. He has millions of followers on socials and sells (and displays) his works online. In 2022, he discovered that a generative AI algorithm had been trained on hundreds of his works and was now capable of producing works “in the style of” Sam Yang. He discovered this when he was asked to judge a competition for the best impersonation of his work. Perhaps understandably, he was not impressed. He had spent years developing his style, and a generative AI model had learned to copy it in a matter of months. Users could now quickly create works in his style, in direct competition with him.

In 2023, YouTube channel “Curious Refuge” uploaded a trailer for “The Galactic Menagerie”, which appeared to be Star Wars in the style of director Wes Anderson. However, Mr Anderson had nothing to do with it. Using generative AI, Curious Refuge had arguably copied his very distinctive cinematic style.

Generative AI (in particular text-to-image and audio models such as Midjourney, DALL-E and Stable Diffusion) is also being used to create deepfakes: realistic images, videos or audio recordings of real people that appear authentic but are not. Some generative AI requires a recording of only three seconds of a person’s speech in order to generate that same voice saying anything at all, including incorporating the speaker’s own particular emotional tone.

Voice actor Bev Standing learned this the hard way in 2021 when she discovered that her voice was being used for TikTok’s original text-to-speech feature. Users could type anything they liked and then make Standing’s voice say those words (no matter how offensive). Standing had never given permission for her voice recordings to be used to generate entirely new speech in this way, and she sued ByteDance.

Musical artist David Guetta played with this concept in February 2023, when he posted a track that he titled “Emin-AI-em” to Twitter (as it was then known). It sounded as if it had been written and performed by Marshall Mathers III. Instead, Guetta created the work himself by asking ChatGPT to “write lyrics in the style of Eminem about future rave”, and then using generative AI (Uberduck) to recreate Eminem’s voice performing the ChatGPT-composed lyrics. The entire process took him one hour. Perhaps sensibly (as Eminem can be fairly litigious), Guetta confirmed “obviously I won’t release this commercially”.

Similarly, in April 2023, a musical track by “Ghostwriter” titled “Heart on my Sleeve” was uploaded to streaming platforms such as Spotify. It instantly gained attention because it sounded like a new collaboration between Drake and The Weeknd, but it was not. It was created using a vocal synthesiser AI, without their permission.

The legal position in New Zealand

While generative AI certainly has its fair share of copyright issues, it is arguable that none of the outputs in the examples above amounts to copyright infringement, particularly in common law jurisdictions (like New Zealand) that have no concept of derivative works. Copyright doesn’t protect an idea, or a personal likeness, or a style, per se. So, if someone in New Zealand has used recordings or images of a real person to train an AI to create entirely new recordings or digital replicas of that person, which don’t substantially reproduce the original inputs, there is no copyright issue with those outputs.

Facial images and voices are biometric information, which is regarded as sensitive personal information under the Privacy Act 2020. If your image or voice has been collected in breach of the Privacy Act, the collecting agency may be liable for that breach. But what if they have already trained an AI using your photos and voice, and a different entity is now commercially exploiting your digital replica? It is unclear whether the Privacy Act would apply in this scenario.

It is also far from clear that the Harmful Digital Communications Act 2015 would apply, not least because that Act requires the relevant conduct to result in “serious emotional distress”, and because it is arguably designed to address the unauthorised use of real images of individuals, rather than images generated by algorithm.

If the resulting material does not amount to defamation, the tort of invasion of privacy (which carries a “highly offensive to the reasonable person” threshold) or a breach of confidence, and the subject is not “sufficiently famous” to qualify for use of the tort of passing off or consumer protection legislation (such as the Fair Trading Act 1986), it is possible that no remedy for appropriating a voice or a likeness for commercial purposes would arise here.

A right of publicity

But perhaps (in light of what AI can now do) it very much should. The USA and other jurisdictions have adopted a right of publicity (sometimes known as personality rights): the exclusive right to control the commercialisation of a person’s likeness, including identifiable features such as their appearance, name and voice.  

This right developed in the USA out of a set of four “privacy” torts articulated by Professor William Prosser in 1960:

1. Intrusion upon seclusion;

2. Public disclosure of private facts (the tort often referred to as invasion of privacy);

3. Publicity placing a person in a false light (a little like defamation); and

4. Appropriation of a person’s name or likeness (aka breach of publicity rights).  

New Zealand has adopted the first two of these torts, and the tort of defamation, but not the fourth. And it is the fourth which arguably most appropriately addresses unauthorised commercial deepfake usage, by granting people an actionable monopoly over the commercialisation of their own likeness.

Due to the existence of this fourth tort in the USA, recent generative AI lawsuits concerning algorithms creating outputs “in the style of” certain authors’ works also claim breach of publicity rights. These lawsuits argue that an author’s particular writing or artistic style is a key component of their personality and likeness and cannot legally be appropriated commercially without their consent. And you can perhaps understand their concern. A style that has taken an artist a lifetime to curate, and on which they rely for earning a living, can now be learned in a matter of months by generative AI. Deepfakes and vocal synthesiser outputs can be created in minutes, and could potentially pose an existential threat to performers, models and actors. Concerns about this threat featured heavily in the recent SAG-AFTRA American actors’ union strike.

Perhaps it shouldn’t be necessary for a subject to be famous in order to prevent their likeness and personal characteristics from being appropriated commercially without their consent. And therefore, arguably, the time is now right for New Zealand to adopt Prosser’s fourth tort.
