Artificial Intelligence: UK Information Commissioner Warns People Not To Lose Trust In AI

Kevin Modiri

At the recent TechUK Digital Ethics Summit 2023, the UK Information Commissioner, John Edwards, stated:

“If people don’t trust AI, then they’re less likely to use it, resulting in reduced benefits and less growth or innovation in society as a whole. This needs addressing. 2024 cannot be the year that consumers lose trust in AI.”

The keynote address was in response to recent research showing growing public concern over Artificial Intelligence (AI). Mr Edwards set out the steps the ICO would be taking to support businesses using the smart technology and was clear that there would be no excuses for "bad actors" who do not comply with data protection laws. He also called on tech developers to embed privacy into their products from the very start, stating:

“Privacy and AI go hand in hand - there is no either/or here. You cannot expect to utilise AI in our products or services without considering privacy, data protection and how you will safeguard people’s rights…”

Status of EU and UK regulation of Artificial Intelligence

The EU has recently agreed to a landmark new law on artificial intelligence called the AI Act. EU Commissioner Thierry Breton said:

“The AI Act is much more than a rule book - it’s a launch pad for EU start-ups and researchers to lead the global AI race.”

The provisional deal has come after years of discussion among members of the European parliament and member states. The law will have a two-tier approach, with:

“transparency requirements for all general purpose AI models (such as ChatGPT) as well as stronger requirements for powerful models with systemic impacts across the EU”.

The new laws will balance out interests and implement safeguards for the use of AI technology whilst at the same time avoiding “excessive burden” for companies.

A wide range of high-risk AI systems would be authorised, but subject to a set of obligations and requirements in order to gain access to the EU market. The AI Act will not, however, apply to systems used exclusively for military or defence purposes, to systems used solely for research and innovation, or to people using AI for non-professional reasons.

The European Parliament will vote on the AI Act proposals early next year, but any legislation will not take effect until at least 2025.

The UK is still in the process of publishing its own guidelines, and the approach set out in the UK Government’s white paper of March 2023 is markedly different from the EU’s. The Government proposes to issue high-level principles to guide regulators on an informal basis, covering matters such as appropriate transparency and fairness, as well as accountability and governance.

Whilst the EU has produced a framework that it believes will set the gold standard for AI regulation, the UK is avoiding the challenge of creating a new regulatory framework. This could lead to complexity, such as overlapping jurisdiction in some areas. The UK hopes to mitigate such difficulties through an AI sandbox, which would allow the use of AI under the supervision of a regulator.

Comment

There is a clear consensus that AI regulation is much needed, given the rapid development of smart technology across all sectors. However, there is a clear tension between the approaches taken by the EU, the UK, and others. Whilst the UK wants to follow the US approach of a regulation-light environment, any business wishing to access EU markets will need to comply with the EU AI Act.

How can we help?

If you have any questions concerning the subjects discussed in this article, please do not hesitate to contact a member of our Dispute Resolution team in Derby, Leicester, or Nottingham on 0800 024 1976 or via our online enquiry form.
