Deepfakes & Data Protection: What You Need to Know

Amrik Basra

Reading time: 4 minutes

Deepfake technology has exploded into public awareness over the past few years. What once seemed like science fiction (videos of people saying or doing things they never actually did) is now something anyone can create with a laptop and the right software. Understanding your data protection rights is essential as AI-generated content becomes more sophisticated and accessible.

It’s impressive, sometimes entertaining, and increasingly worrying. Deepfakes raise big questions about privacy, trust and how we protect ourselves in a digital world where seeing is no longer believing.

This blog breaks down what deepfakes are, why they matter and how UK data protection law tries (and sometimes struggles) to keep up.

What exactly is a deepfake?

A deepfake is a piece of audio, video or image content that has been digitally manipulated using artificial intelligence to make it look real.

To create one, software is trained on huge amounts of personal data (photos, videos, voice clips) until it can convincingly mimic someone’s face or voice.

Deepfakes can be used for harmless fun, like film production or satire. But they can also be used for harmful purposes, including:

  • Identity theft;
  • Harassment;
  • Blackmail;
  • Political misinformation; and
  • Reputational damage.

And because the technology is getting better and more accessible, the risks are growing.

The UK’s legal challenge: no “Image Rights”

One of the biggest issues is that the UK does not recognise a standalone legal right to your own image or likeness.

That means if someone creates a deepfake of you, there is not a single, simple law that says, “You can’t do that”.

Instead, victims must rely on a patchwork of existing laws, such as:

  • Data protection law (UK GDPR);
  • Misuse of private information;
  • Defamation; and
  • Harassment laws.

These can help, but none were designed with deepfakes in mind, which makes enforcement tricky.

How data protection law comes into play

Under the UK GDPR, your face, voice and other identifiable features count as personal data. In many cases, they also qualify as biometric data, which is treated as especially sensitive.

This means:

  • if someone processes your personal data to create a deepfake; and
  • they don’t have a lawful reason to do so;
  • they may be breaking data protection law.

In legal terms, the deepfake creator could be considered a data controller, responsible for complying with the UK GDPR: having a lawful basis, being transparent and respecting your rights.

Of course, many deepfake creators operate anonymously or from outside the UK, which makes enforcement difficult.

Why regulating deepfakes is so hard

Deepfakes are difficult to regulate for several reasons:

  1. They’re hard to detect
    Some deepfakes are so realistic that even experts struggle to spot them.
  2. Creators can hide easily
    People can generate and upload deepfakes anonymously, often from outside the UK.
  3. The internet moves fast
    Once a deepfake is online, it can spread globally in minutes. Even if you get it taken down, copies may already be circulating.
  4. The law wasn’t built for this
    Existing laws help, but none were designed specifically for deepfakes. That leaves gaps.

The Online Safety Act 2023 introduces new offences around harmful online content, and further legislation, such as the Data (Use and Access) Act 2025, may strengthen protections. But for now, the legal landscape is still evolving.

What can you do?

If you’re targeted by a deepfake, there are legal options, though none are perfect:

  • Injunctions – to force removal of the content;
  • Damages – compensation for harm caused;
  • Data protection complaints – including to the ICO;
  • Defamation claims – if the deepfake damages your reputation; and
  • Misuse of private information claims – especially for intimate deepfakes.

However, these remedies do not always solve the core problem: once something is online, it’s hard to contain.

Where do we go from here?

Deepfakes are here to stay. The technology will keep improving and the law will need to evolve with it.

For now, the UK relies on existing legal tools, such as data protection law, privacy rights and defamation, to fill the gaps. But these tools were not built for a world where AI can clone your face or voice in minutes.

Future legislation may offer stronger protection, but enforcement and cross‑border challenges will remain major hurdles.

What is clear is that deepfakes raise serious questions about trust, identity and privacy in the digital age. Understanding the risks is the first step toward protecting ourselves.

How can we help?


Amrik Basra is an Associate in our Private Litigation team.

At Nelsons, our team specialises in these types of disputes and includes members of The Association of Contentious Trust and Probate Specialists (ACTAPS). The team is also recommended by the independently researched publication, The Legal 500, as one of the top teams of specialists in the country.

If you have concerns about deepfakes and data protection, don’t hesitate to get in touch with Amrik or a member of our expert Dispute Resolution team in Derby, Leicester, or Nottingham on 0800 024 1976 or via our online enquiry form.
