AI In The Workplace – Potential Risks For Employers

Laura Kearsley

The use of AI in everyday life has become largely accepted, and this has filtered into the workplace. Many high-profile businesses, including Microsoft, Walmart, and Amazon, have adopted some degree of AI to help manage and run their operations.

Both employers and employees have been concerned about how to effectively use AI, especially with the EU Parliament approving the Artificial Intelligence Act in March 2024 (EU AI Act). While we may all think we know what AI is and the potential risks of using AI in the workplace, it is worth looking at the EU AI Act to be clear on the exact provisions that are being used to provide a common regulatory and legal framework for AI.

What is AI?

The EU AI Act defines an AI system as a machine-based system designed to operate with a certain level of autonomy and that, based on machine- and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge-based approaches.

In the workplace, we have seen tools such as ChatGPT (a language model developed by OpenAI) and Bard (developed by Google), which can engage in natural language conversations with humans. They are designed to understand and respond to a wide variety of questions and topics, ranging from general knowledge to specific domains, including science, technology, history, and literature. These are a specific type of generative AI and use what is known as a large language model (LLM), which essentially works like predictive text using a set of algorithms.

An LLM is based on a deep learning architecture that has been trained on large amounts of data, enabling it to produce relevant responses to queries. ChatGPT and Bard can understand the meaning and context of natural language inputs and provide informative responses.

What are the potential risks to employers when using AI?

According to research published by career-based social networking app, Fishbowl, in January 2023, the main risk is that employers aren’t aware that their employees are using AI. The research revealed that:

  • 43% of those surveyed had used AI for work purposes; and
  • 68% of employees who are using it are doing so without their employer's knowledge.

Additional research highlights further potential risks of using AI:

  • 84% of employees who use AI at work say they have publicly exposed their business data within the last three months.
  • Almost a third of CEOs are redesigning work rather than depending on employees.
  • AI uncovers gaps in skills and erodes morale.
  • AI processes a huge amount of sensitive data which, if not appropriately secured, could have enormous consequences.
  • AI outputs can often be biased, leading to inappropriate decisions based on protected characteristics, for example, gender, race, or religion. This could then lead to unfair practices, which employers must be aware of.

Security risks

Not only is the technology being used by an increasing number of employees, but according to research carried out by Cyberhaven, 11% of what employees are pasting into AI systems is sensitive data. Cyberhaven recorded that, on average, per 100,000 employees during the week of 26 February – 4 March 2023, there were 199 incidents of confidential internal-only documents being uploaded to AI systems and 173 incidents of client data being uploaded.

The problem with uploading confidential or sensitive information is that AI systems learn from the information that is inputted. If a worker uploads any sort of confidential information, that information could therefore be used by the system to develop and improve its responses. The only way to prevent this is for the user to specifically opt out.

As a result of these types of security risks, companies such as Amazon have warned employees not to upload confidential information into AI systems. Some companies, like JP Morgan, have even gone a step further and blocked access to AI systems like ChatGPT altogether.

Accuracy of information and potential copyright infringement risks

In addition to security concerns, there are also concerns about the accuracy of the information produced by AI systems and the potential risk of copyright infringement.

When ChatGPT was initially launched, it came with a warning that:

“ChatGPT sometimes writes plausible sounding but incorrect or nonsensical answers.”

This means that the information it produces may be presented as fact when in truth it is misinformation. In addition, because the system was initially trained on data sets that only ran up to 2021, its output may be outdated. As such, if an employee uses an AI system like ChatGPT to create a working document, it is vital that they fact-check the information before releasing it.

In respect of copyright infringement, the data that AI systems have been trained on has come from the internet, and some of this content could be subject to copyright. Therefore, if a chatbot uses this content to create an answer to a query, it could potentially amount to copyright infringement. If the content is then used by an employee, it could put both them and their employer at risk of a copyright infringement/IP claim.

Recruitment

It has been reported that ChatGPT is frequently used to create CVs and application letters, which could give a false impression of a prospective employee. This should be considered by HR teams during the recruitment process and emphasises the importance of interviews and of verifying information such as qualifications.

What can employers be doing?

What is clear from the rise in popularity of AI, and the number of employees using it, is that the technology is likely here to stay. As a result, employers need to consider sooner rather than later the potential risks it poses and whether or not to put in place an outright ban on its use in a work context.

Whilst businesses shouldn’t automatically assume that their staff members are using AI systems, if they don’t want their employees to use them for work purposes then they should make that clear. Employers should therefore consider confirming within a staff policy that the use of such systems is prohibited.

Employers who do not want to prevent their workers from using it will, of course, still want to ensure that it doesn’t result in any issues for them, so appropriate controls and guidance need to be implemented. These should also be set out in an internal employee policy, for the avoidance of doubt and to prevent confusion from arising.

How can we help?

If you would like any advice concerning the subjects discussed in this article, please contact a member of our expert Employment Law team in Derby, Leicester or Nottingham on 0800 024 1976 or via our online enquiry form.

