Workplace monitoring – justified or intrusive?

October 2023

Almost one in five people believe they’ve been monitored by an employer, and would be reluctant to take a new job if they knew they were going to be monitored. Research commissioned by the UK’s Information Commissioner’s Office (ICO) also shows 70% of the public believe it’s intrusive to be monitored in the workplace.

However, the research also shows workers generally understand employers might carry out checks on the quality and quantity of their work. Similarly, they appreciate the necessity of monitoring for health and safety reasons, or to meet other regulatory requirements.

There are plenty of reasons why employers might want to monitor staff: to check they’re working, to detect and prevent criminal activity, to ensure policy compliance, and to maintain safety and security.

With more people working from home and advances in technology, there are multiple options for employers seeking to monitor their workforces:

  • Camera surveillance, including body worn cameras
  • Webcams and screenshots
  • Monitoring timekeeping or access control
  • Keystroke monitoring
  • Internet tracking for misuse
  • Covert audio recording

I’ve even heard of AI that performs sentiment checks on emails, scanning language to detect content that might be discriminatory, bullying or aggressive. Personally, I find this terrifying. Imagine if this technology had been available during the ‘Reds under the bed’ paranoia of 1950s America, or indeed in 1930s Germany.

The fundamental question is this – just because you can monitor staff, should you?

The ICO has recently published guidance: Employment practices and data protection – monitoring workers. Emily Keaney, Deputy Commissioner – Regulatory Policy at the Information Commissioner’s Office, says: “While data protection law does not prevent monitoring, our guidance is clear that it must be necessary, proportionate and respect the rights and freedoms of workers. We will take action if we believe people’s privacy is being threatened.”

Summary of workplace monitoring considerations

1. Is your workplace monitoring lawful, fair and transparent?

To be lawful, you need to identify a lawful basis under UK GDPR and meet any relevant conditions. Remember that consent only works where employees have a genuine choice; the imbalance of power in an employment relationship often means consent is not an appropriate basis.

To be fair, you should only monitor workers in ways they would reasonably expect, and in ways which wouldn’t have unjustified adverse effects on them. The ICO says you should conduct a Data Protection Impact Assessment to make sure monitoring is fair.

To be transparent, you must be open and upfront about what you’re doing; monitoring should not routinely be done in secret. Monitoring conducted without transparency is fundamentally unfair. There may, however, be exceptional circumstances where covert monitoring is justified.

2. Will monitoring gather sensitive information?

If monitoring involves special category data, you’ll need to identify a special category condition, as well as a lawful basis.

Special category data includes data revealing racial or ethnic origin, religious, political or philosophical beliefs, trade union membership, genetic and biometric data, data concerning health or data about a person’s sex life or sexual orientation.

You may not automatically think this is relevant, but be mindful that even monitoring emails, for example, is likely to lead to the processing of special category data.

3. Have you clearly set out your purpose(s) for workplace monitoring?

You need to be clear about your purpose(s) and not monitor workers ‘just in case’ it might be useful. Details captured should not subsequently be used for a different purpose, unless this is assessed to be compatible with the original purpose.

4. Are you minimising the personal details gathered?

Organisations must not collect more personal information than they need to achieve their defined purpose(s). This needs care, as many monitoring technologies and methods can gather far more information than is necessary. You should take steps to limit the amount of data collected and retained.

5. Is the information gathered accurate?

The ICO says organisations must take all reasonable steps to make sure the personal information gathered through monitoring workers is not incorrect or misleading, and that people are able to challenge the results of any monitoring.

6. Have you decided how long information will be kept?

Personal information gathered must not be kept for any longer than is necessary, and it shouldn’t be kept just in case it might be useful in future. Organisations must have a data retention schedule and delete information in line with it. The UK GDPR doesn’t specify precise retention periods; organisations need to be able to justify any periods they set.

7. Is the information kept securely?

You must have appropriate organisational and technical measures in place to protect personal information. Data security risks should be assessed, access should be restricted, and those handling the information should receive appropriate training.

If monitoring is outsourced to a third-party processor, you’ll be responsible for compliance with data protection law. Processors will have their own security obligations under UK GDPR.

8. Are you able to demonstrate your compliance with data protection law?

Organisations need to be able to demonstrate their compliance with UK GDPR. This means making sure appropriate policies, procedures and measures are put in place for workplace monitoring activities. As with everything, this must be proportionate to the risks. The ICO also says overall responsibility for monitoring workers should rest at the highest level of senior management.

Monitoring people is by its very nature intrusive; it must be proportionate and justified, and in most circumstances people should be told it’s happening. The overriding message from the ICO is to carry out a Data Protection Impact Assessment if you’re considering monitoring people in the workplace. This should fully explore any impact on people’s rights and freedoms.

Is bias and discrimination in AI a problem?

September 2022

Artificial Intelligence – good governance will need to catch up with the technology

The AI landscape

We hear about the deployment and use of AI in many settings. The types and frequency of use are only going to increase. Major uses include:

  • Cybersecurity analysis to identify anomalies in IT structures
  • Automating repetitive maintenance tasks and guiding technical support teams
  • Ad tech to profile and segment audiences for advertising targeting and optimise advertising buying and placement
  • Reviewing job applications to identify the best-qualified candidates in HR
  • Research scientists looking for patterns in health data to identify new cures for cancer
  • Predicting equipment failure in manufacturing
  • Detecting fraud in banking by analysing irregular patterns in transactions
  • TV and movie recommendations for Netflix users
  • Inventory optimisation and demand forecasting in retail and transportation
  • Programming cars to self-drive

Overall, the different forms of AI will serve to improve our lives, but from a privacy point of view there is a danger that the governance around AI projects is lagging behind the evolving technology.

In that context, tucked away in its three-year plan published in July, the ICO highlighted that AI-driven discrimination might become more of a concern. In particular, the ICO is planning to investigate concerns about the use of algorithms to sift recruitment applications.

Why recruitment applications?

AI is used widely in the recruitment industry. A Gartner report suggested that all recruitment agencies use it for some of their candidate sifting, and the CEO of the US job site ZipRecruiter is quoted as saying that three-quarters of submitted CVs are read by algorithms. There is plenty of scope for data misuse, hence the ICO’s interest.

The Amazon recruitment tool – an example of bias/discrimination

The ICO is justified in its concerns around recruitment AI. Famously, Amazon developed its own tool to sift through applications for developer roles. The model was based on 10 years of recruitment data from an employee pool that was largely male. As a result, the model discriminated against women and reinforced the gender imbalance by penalising applications from women.

What is AI?

AI can be defined as: 

“using a non-human system to learn from experience and imitate human intelligent behaviour”

The reality is that most “AI” applications are machine learning: models trained to predict outcomes from data collected in the past. Pure AI is technology designed to simulate human behaviour. For simplicity, let’s call machine learning AI.
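
To make this concrete, here is a minimal, hypothetical sketch of what machine learning means in this context: a model fitted to historical recruitment decisions and then applied to a new applicant. The data, column names and choice of scikit-learn are illustrative assumptions, not a description of any particular system.

```python
# Minimal sketch only: hypothetical data and column names, scikit-learn assumed.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical data: one row per past applicant; 'hired' records the past human decision.
past_applicants = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 8, 6, 4, 7],
    "test_score":       [55, 70, 82, 60, 90, 78, 65, 88],
    "hired":            [0, 0, 1, 0, 1, 1, 0, 1],
})

# Fit the model so it reproduces the patterns in those past decisions.
model = LogisticRegression().fit(
    past_applicants[["years_experience", "test_score"]],
    past_applicants["hired"],
)

# Score a new applicant. Whatever bias shaped the past decisions is now
# baked into the prediction the model makes about this person.
new_applicant = pd.DataFrame({"years_experience": [4], "test_score": [72]})
print(model.predict_proba(new_applicant))  # [probability not hired, probability hired]
```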

Decisions made using AI are either fully automated or made with a “human in the loop”. The latter can help safeguard individuals against biased outcomes by providing a sense check of the results.

In the context of data protection, it is becoming increasingly important that those impacted by AI decisions should be able to hold someone to account.

You might hear that all the information is in a “black box” and that how the algorithm works cannot be explained. This excuse isn’t good enough – it should be possible to explain how a model has been trained and to risk-assess that activity.

How is AI used? 

AI can be used to make decisions:

1.     A prediction – e.g. you will be good at a job

2.     A recommendation – e.g. you will like this news article

3.     A classification – e.g. this email is spam. 

The benefits of AI

AI is generally a force for good:

1.     It can automate a process and save time

2.     It can optimise the efficiency of a process or function (often seen in factories or processing plants)

3.     It can enhance the abilities of individuals – often by speeding up processes

Where do data protection and AI intersect?

An explanation of AI-assisted decisions is required where:

1.     The decision is made without any human involvement

2.     It produces legal or similarly significant effects on an individual – e.g. not getting a job. 

Individuals should expect an explanation from those accountable for an AI system. Anyone developing AI models using personal data should ensure that appropriate technical and organisational measures are in place to integrate safeguards into processing. 

What data is in scope?

  • Personal data used to train a model
  • Personal data used to test a model
  • On deployment, personal data used or created to make decisions about individuals

If no personal data is included in a model, AI is not in scope for data protection. 

How should you approach an AI project?

Any new AI processing of personal data would normally require a Data Protection Impact Assessment (DPIA). The DPIA is useful because it provides a vehicle for documenting the processing, identifying the privacy risks, and setting out the measures or controls required to protect individuals. It is also an excellent means of socialising the understanding of AI processing across an organisation.

Introducing a clear governance framework around any AI projects will increase project visibility and reduce the risks of bias and discrimination. 

Where does bias/discrimination creep in?

The Equality Act 2010 prohibits behaviour that discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:

  • Age
  • Disability
  • Gender reassignment
  • Marriage and civil partnership
  • Pregnancy and maternity
  • Race
  • Religion or belief
  • Sex
  • Sexual orientation. 

When using an AI system, you need to ensure, and be able to show, that your decision-making process does not result in discrimination.

Our Top 10 Tips

  1. Ask how the algorithm has been trained – the “black box” excuse isn’t good enough
  2. Review the training inputs to identify possible bias with the use of historic data
  3. Test the outcomes of the model – this seems obvious, but it isn’t done regularly enough (see the sketch after this list)
  4. Consider the extent to which the past will predict the future when training a model – recruitment models will have an inherent bias if based only on past successes
  5. Consider how to compensate for bias built into the training – a possible form of positive discrimination
  6. Have a person review the outcomes of the model when a decision is challenged, and give that person real authority to challenge it
  7. Incorporate your AI projects into your data protection governance structure
  8. Ensure that you’ve done a full DPIA identifying risks and mitigations
  9. Ensure that you’ve documented the processes and decisions to incorporate into your overall accountability framework
  10. Consider how you will address individual rights – can you easily identify where personal data has been used or has it been fully anonymised? 
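
For tips 2 and 3, a test of outcomes can be as simple as comparing selection rates across a protected characteristic. The sketch below is illustrative only, with invented data and column names; the 80% threshold is a common rule of thumb (sometimes called the “four-fifths rule”), not a legal standard.

```python
# Illustrative outcome test: hypothetical data and column names, pandas assumed.
import pandas as pd

# Applicants scored by the model, with sex recorded separately for testing
# purposes only (it should not be an input to the model itself).
results = pd.DataFrame({
    "sex":         ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "shortlisted": [0,   0,   1,   0,   1,   1,   0,   1,   1,   0],
})

# Selection rate (proportion shortlisted) for each group.
selection_rates = results.groupby("sex")["shortlisted"].mean()
print(selection_rates)

# Rule of thumb: flag a potential problem if any group's selection rate falls
# below 80% of the highest group's rate.
ratio = selection_rates.min() / selection_rates.max()
print(f"Selection-rate ratio: {ratio:.2f}", "- review for possible bias" if ratio < 0.8 else "")
```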

In summary

AI is complex and fast-changing, and arguably the governance around the use of personal data is having to catch up with the technology. Even when people believe these models are mysterious and difficult to understand, a lack of explanation for how they work is not acceptable.

In future, clearer processes around good governance will have to develop, so that organisations understand the risks and find ways of mitigating them, ensuring data subjects are not disadvantaged.