
Viewpoints


Deepfakes, scams, and Fakey Perry

While there are plenty of examples of how AI is being used beneficially in multiple sectors, there is also no shortage of those who would use AI for exploitative purposes.

The concept of deepfakes has been around for some years now: the use of AI deep learning to create fake images, audio and/or videos of real people (the term is a portmanteau of “deep learning” and “fake”). High-profile individuals such as Barack Obama and Mark Zuckerberg have already been deepfaked using this rapidly developing technology, and it is getting harder to differentiate the real from the false. Back in 2019, AI is thought to have been used to replicate the voice of a CEO, resulting in a fraudulent transfer of €220,000 from a UK company to its German parent. This year, a Hong Kong financial institution lost over $25m following a video meeting between a legitimate employee and deepfakes of its CFO and other senior officers.

As AI tools have moved on apace and become more accessible to the general public, it is becoming increasingly easy for unskilled users to quickly create deepfakes. Deepfake videos can now be created from snippets of video gathered from individuals’ social media accounts, and replicated audio requires only seconds of source input to generate. As the technology becomes more realistic and easier to use, it is increasingly being deployed to simulate calls from relatives in dire straits – usually in need of urgent cash transfers. This plays on people’s emotions in a way that traditional scams have not been able to. Gone are the emails from your long-lost relative offering to transfer large amounts of cash in exchange for your bank details.

Given the nature of deepfake technology, one of the sectors it has disrupted most is the entertainment industry. A deepfaked photo of Katy Perry attending the 2024 Met Gala went viral on X last week, notably duping Katy’s own mother, with the source as yet unidentified. While Katy did not appear overly perturbed by this (“lol mom the AI got you too, BEWARE!”), record label Universal Music Group was less impressed by AI company Anthropic distributing copyrighted lyrics from Katy’s “Roar”, among others. The label is concerned not only by the unlicensed distribution of lyrics, but also by Anthropic’s apparent use of them to train its AI models.

Some musicians, though, have opted to allow deepfakes of themselves to be created, generating an additional revenue stream for comparatively little effort. However, difficulties may arise from these sorts of relationships if (when) the public representation of the deepfake morally conflicts with the views of the actual person. We may see the further development of personality rights in response to the proliferation of deepfakes; in the jurisdictions where they already exist, these rights protect against the unlicensed use of a person’s voice, character or image.

With such rapid evolution, AI is putting pressure on laws to keep up. Although this technology is problematic for legal frameworks as they stand, there is little indication of a slowdown in its development, putting the ball in the court of lawmakers. IP lawyers will be particularly interested in how the law adapts to AI innovation to protect against its misuse.

Tags

intellectual property, artificial intelligence