The Little Book of Data Protection Nuggets
Copyright DPN
The information provided and opinions expressed in our content represent the views of the Data Protection Network and our contributors. They do not constitute legal advice.
It’s increasingly common for online meetings and phone calls to be recorded and/or transcribed. A plethora of AI-enabled tools have popped up to make this very easy to do. Transcriptions can be really helpful to provide a written record, a short summary of the key points, or even to automate key actions. Often handy for those who can’t attend or for people with certain disabilities. Some apps can combine words with recorded video or audio content for reference.
However, while we rush to take advantage of these apps, we should be mindful of some privacy risks and be sure to have some measures and controls in place.
Are people in your organisation going ahead with a ‘free trial’ and using recording or transcription services which have not been properly vetted or approved? This could result in poor controls on the outputs and data leakage to third parties. People need to know what they’re permitted to do, and what is not company policy. The safest bet is to go with an Enterprise version, so you can make sure there’s sufficient control and oversight of its use.
Some apps are set to ‘on’ by default, so the settings may need editing to stop them automatically recording or transcribing when you don’t want them to.
It’s important to make sure everyone’s happy for the meeting to be recorded and/or transcribed. Good practice would be to let participants know in advance when there will be a recording and/or transcription made and ask them to let you know if they object. Also remind them at the start of the meeting, before you actually click ‘start’.
AI transcription tools can be extremely accurate, often better than humans. But even so, AI can still make mistakes. For example, AI can misinterpret certain nuances in the human voice or behaviours, or fail to grasp the context. This could affect the accuracy of the written output, or even its meaning. What we say isn’t always what we mean! Take different forms of humour, such as sarcasm, which might not come across well in raw text.
Human oversight is key – don’t assume everything you read is 100% accurate to the words or the context.
Do we really need both a video recording and a transcription? Depending on the nature of meetings, this could create a significant volume of personal data, or perhaps commercially sensitive data. One of the first things we should think about is deleting anything we don’t need at the earliest opportunity.
Have we set any restrictions on who the outputs are shared with and in what form? We should take particular care to prevent unauthorised disclosure of sensitive information – whether of a personal, confidential or commercial nature.
Just because a meeting is of a sensitive nature doesn’t necessarily mean it can’t be recorded or transcribed. We know of circumstances where both parties have been in agreement on this, for example in grievance proceedings. However, in such cases all the other points above become even more important – is it an approved app? Is the output accurate? Who should have access to it? And so on.
If recording and transcription tools are not set up and managed well, they may cause an unwelcome headache further down the line. Recordings and transcriptions may all be in scope if you receive a DSAR or erasure request. It’s therefore good to nail down how long materials will be kept, where they will be saved, and to make sure they are searchable.
1. DPIA: Depending on your planned use and how sensitive the personal data captured is likely to be, consider if a DPIA is required (or advisable).
2. Internal policy / guidelines for usage: Set guidelines on when and how recording and transcription services should and should not be used. Include expected standards such as telling people in advance, giving them an opportunity to object, rules on sharing, deletion etc
3. Access controls: Update your access controls to make sure only authorised individuals can access recordings and transcriptions.
4. Retention: Update your data retention policy/schedule to confirm retention periods. Clearly there may be exceptions to the rule, if there is information which needs to be kept longer.
5. DSARs: Update your DSAR procedure to reflect that personal data captured in recordings and transcriptions may be within scope.
The Artificial Intelligence landscape’s beginning to remind me of a place Indiana Jones might search for hidden treasure. The rewards are near-magical, but the path is littered with traps. Although, in the digital temple of ‘The New AI’, he’s not going to fall into a pit of snakes or be squished by a huge stone ball. No, Indy is more likely to face other traps. Leaking sensitive information. Litigation. Loss of adventuring advantage to competing explorers. A new, looming regulatory environment, one even Governments have yet to determine.
And the huge stone ball? That will be when the power of the Lost AI goes awry, feeding us with incorrect information, biased outcomes and AI hallucinations.
Yes, regulation is important in such a fast-moving international arena. So is nimble decision-making, as even the European Commission considers pausing its AI Act. Nobody wants to be left behind. Yet, as China and the US vie for AI supremacy, are countries like the UK sitting on the fence?
AI has an equal number of devotees and sceptics, very broadly divided along generational lines. Gen Z and X are not as enamoured with AI as Millennials (those born between 1981 and 1996). A 2025 McKinsey report found Millennials to be the most active AI users. My Gen Z son says of AI, ‘I’m not asking a toaster a question.’ He also thinks AI’s insatiable thirst for energy will make it unsustainable in the longer term.
Perhaps he has a point, but I think every industry will somehow be impacted, disrupted and – perhaps – subsumed by AI. And as ever, with transformational new technologies, mistakes will be made as organisations balance risk versus advantage.
How, in this ‘Temple of the New AI,’ do organisations find treasure… without falling into a horrible trap?
While compliance with regulations will be a key factor for many organisations, protecting the business and brand reputation may be an even bigger concern. The key will be making sure AI is used in an efficient, ethical and responsible way.
The most obvious solution is to approach AI risk and governance with a clear framework covering accountability, policies, ongoing monitoring, security, training and so on. Organisations already utilising AI may have embedded robust governance. For others, here are some pointers to consider:
■ Strategy and risk appetite
Senior leadership needs to establish the organisation’s approach to AI: your strategy and risk appetite. Consider the benefits alongside the potential risks associated with AI and implement measures to mitigate them.
■ AI inventory
Create an inventory to record what AI systems are already in use across the business, the purposes they are used for, and why.
■ Stakeholders, accountability & responsibilities
Identify which key individuals and/or departments are likely to play a role in governing how AI is developed, customised and/or used in your organisation. Put some clear guard rails in place. Determine who is responsible and accountable for each AI system. Establish clear roles and responsibilities for AI initiatives to make sure there’s accountability for all aspects of AI governance.
■ Policies and guidelines
Develop appropriate policies and procedures, or update existing policies so people understand internal standards, permitted usage and so on.
■ Training and AI literacy
Provide appropriate training. Consider whether this needs to be role-specific, and factor in ongoing training in this rapidly evolving AI world. Remember, the EU AI Act includes a requirement for providers and deployers of AI systems to make sure their staff have sufficient levels of AI literacy.
If you don’t know where to start, Use AI Securely provide a pretty sound free introductory course.
■ AI risk assessments
Develop and implement a clear process for identifying potential vulnerabilities and risks associated with each AI system.
For many organisations which are not developing AI systems themselves, this will mean a robust method for assessing the risks associated with third-party AI tools, and how you intend to use those tools. Embedding an appropriate due diligence process when looking to adopt (and perhaps also customise) third-party AI SaaS solutions is crucial.
Clearly not all AI systems or tools will pose the same level of risk, so having a risk-based methodology to enable you to prioritise risk will also prove invaluable.
■ Information security
Appropriate security measures are of critical importance. Vulnerabilities in AI models can be exploited, input data can be manipulated, malicious attacks can target training datasets, unauthorised parties may access sensitive, personal and/or confidential data. Data can be leaked via third party AI solutions.
We also need to be mindful of how online criminals exploit AI to create ever more sophisticated and advanced malware. For example, to automate phishing attacks. On this point, the UK Government has published a voluntary AI cyber security code of practice.
■ Transparency and explainability
Are you being open and up front about your use of AI? Organisations need to be transparent about how AI is being used, especially when it impacts on individuals or makes decisions that affect them. A clear example here is AI tools being used for recruitment – is it clear to job seekers you’re using AI? Are they being fairly treated? Using AI Tools in Recruitment
Alongside this there’s the crucial ‘explainability’ piece – the ability to understand and interpret the decision-making processes of artificial intelligence systems.
■ Audits and monitoring
Implement a method for ongoing monitoring of the AI systems and/or AI tools you are using.
■ Legal and regulatory compliance
Keep up to date with the latest developments and with how to comply with relevant laws and regulations in the jurisdictions where you operate.
My colleague Simon and I recently completed the IAPP AI Governance Professional training, led by Oliver Patel. I’d highly recommend his Substack which is packed with tips and detailed information on how to approach AI Governance.
European Union
The EU AI Act entered into force in August 2024 and is coming into effect in stages. Some people fear this comprehensive and strict approach will hold back innovation and leave Europe languishing behind the rest of the world. It’s interesting that the European Commission is considering pausing its entry into application, which DLA Piper has written about here.
On 2nd February this year, rules came into effect in relation to AI literacy requirements, the definition of an AI system, and a limited number of prohibited AI use cases which the EU determines pose an unacceptable risk.
Like GDPR, the AI Act has extra-territorial scope, meaning it applies to organisations based outside the EU (as well as inside) where they place AI products on the market or put them into service in the EU, and/or where outputs produced by AI applications are used by people within the EU. We’ve already seen how EU regulation has led to organisations like Meta and Google excluding the EU from use of their new AI products for fear of enforcement under the Act.
The European Commission has published guidelines alongside prohibited practices coming into effect. Guidelines on Prohibited Practices & Guidelines on Definition of AI System
UK
For the time being it looks unlikely the UK will adopt a comprehensive EU-style regulation. A ‘principles-based framework’ is supported for sector specific regulators to interpret and apply. Specific legislation for those developing the most powerful AI models looks the most likely direction of travel.
The Information Commissioner’s Office published a new AI and biometrics strategy on 5th June with a focus on promoting compliance with data protection law, preventing harm but also enabling innovation. Further ICO activity will include:
■ Developing a statutory code of practice for organisations developing or deploying AI.
■ Reviewing the use of automated decision making (ADM) systems for recruitment purposes
■ Conducting audits and producing guidance on the police’s use of facial recognition technology (FRT)
■ Setting clear expectations to protect people’s personal information when used to train generative AI foundation models
■ Scrutinising emerging AI risks and trends.
The soon-to-be-enacted Data (Use and Access) Act will, to a degree, relax the current strict rules on automated decision-making which produces legal or similarly significant effects. The ICO, for its part, is committed to producing updated guidance on ADM and profiling by Autumn 2025. DUA Act: 15 key changes ahead
Other jurisdictions are also implementing or developing a regulatory approach to AI, and it’s worth checking the IAPP Global AI Regulation Tracker.
AI is here. It’s transformative and far-reaching. To take the fullest advantage of AI’s possibilities, keeping abreast of developments along with agile and effective AI governance will be key.
No one ever wrote a thriller about record keeping. Denzel, Keanu, Keira and Brad are not required on set. But here’s why we should give it due attention.
Put simply, without adequate records it’s difficult to demonstrate compliance with data protection legislation (GDPR and UK GDPR). Records are core to meeting the accountability principle, i.e. being ready and able to demonstrate evidence of compliance.
Let’s step back for a moment. Each organisation needs to know what personal data they hold, where it’s located and what purposes it’s being used for. Only then can you be sure what you’re using it for is fair and lawful, and gain confidence you’re meeting other GDPR obligations.
To put it another way, how confident is your organisation in answering the following questions?
■ What personal data do we collect and hold?
■ Where is it stored, and where did it come from?
■ What purposes are we using it for, and on what lawful basis?
■ Who do we share it with?
■ How long do we keep it?
All of the above feed into transparency requirements, and what we tell people in our privacy notices.
In my opinion, you can’t answer these questions with confidence unless you map your organisation’s use of personal data and maintain a central record. This may be in the form of a Record of Processing Activities (RoPA).
Okay, so the absence of data protection records might only come to light if your organisation is subject to regulatory scrutiny. But not putting this cornerstone in place could result in gaps and risks being overlooked – which could potentially materialise into a serious infringement.
In my view, a RoPA is a sensible and valuable asset for most organisations. I fully appreciate creating and maintaining a RoPA can feel like a Herculean task, especially if resources are overstretched. That’s why we often recommend taking a proportionate and achievable approach, focussing on special category data use and higher risk activities first. Then build on this foundation when you can.
The requirements apply to both controllers and processors and include keeping records covering:
■ the name and contact details of your organisation (and, where applicable, your representative and DPO)
■ the purposes of your processing
■ the categories of individuals and the categories of personal data
■ the categories of recipients of personal data (including processors)
■ details of any transfers to third countries and the safeguards in place
■ retention periods
■ a general description of your technical and organisational security measures
and more…
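If it helps to picture what a single RoPA entry might look like, here’s a minimal, illustrative sketch in Python – purely as an example, not a prescribed format. The field names are our own assumptions, loosely based on the Article 30 requirements above; most organisations will simply use a spreadsheet with equivalent columns.

from dataclasses import dataclass, field
from typing import List

# Illustrative only: field names are assumptions loosely based on UK GDPR Article 30,
# not a prescribed format. A spreadsheet with equivalent columns works just as well.
@dataclass
class RopaEntry:
    business_function: str                     # e.g. "Human Resources"
    processing_activity: str                   # e.g. "Recruitment"
    purpose: str                               # why the personal data is used
    lawful_basis: str                          # e.g. "Legitimate interests"
    categories_of_data_subjects: List[str] = field(default_factory=list)
    categories_of_personal_data: List[str] = field(default_factory=list)
    special_category_data: bool = False        # flag higher-risk processing
    recipients: List[str] = field(default_factory=list)   # including processors
    international_transfers: str = "None"      # destinations and safeguards relied on
    retention_period: str = ""                 # e.g. "6 months after vacancy filled"
    security_measures: str = ""                # general description

# Example entry
hr_recruitment = RopaEntry(
    business_function="Human Resources",
    processing_activity="Recruitment",
    purpose="Assessing candidates for advertised vacancies",
    lawful_basis="Legitimate interests",
    categories_of_data_subjects=["Job applicants"],
    categories_of_personal_data=["Contact details", "CV / work history"],
    recipients=["Applicant tracking system provider"],
    retention_period="6 months after the vacancy is filled",
)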
Do you employ fewer than 250 people?
If so, record keeping requirements may be less stringent. But you’ll still be required to maintain a RoPA if:
■ your processing is likely to result in a risk to the rights and freedoms of individuals
■ your processing is not occasional, or
■ you process special category data or criminal offence data.
You can read more about the requirements in ICO records of processing guidance.
Here are just some of the benefits you can get from your RoPA.
1. Understanding the breadth and sensitivity of your data processing.
2. Visibility of where data protection risks lie. This will help establish priorities and focus efforts to tackle key risks.
3. Confidence your activities are lawful and meet specific regulatory requirements.
4. Tackle over retention of data – it’s a common challenge. By establishing your purposes for processing personal data, you can determine how long you need to keep that data. Then you can take practical steps to delete any data you no longer need.
5. Transparency – An up-to-date RoPA feeds into your privacy notice, making sure the information you provide accurately reflects what you are really doing.
6. Data breaches – Your RoPA should be the ‘go to’ place if you suffer a data breach. It can help you quickly identify what personal data may have been exposed, how sensitive the data is, which processors might be involved and so on. This helps you make a rapid risk assessment (within the 72-hour reporting window) and take positive decisions to mitigate the risks to the individuals affected.
7. Supply chain – Keeping a record of your suppliers (‘processors’) is a key aspect of supplier management along with due diligence, contractual requirements and international data transfers.
8. Privacy rights – If you receive a Data Subject Access Request, your records can help to locate and access the specific data required to fulfil the request. If you receive an erasure request, you can quickly check your lawful basis for processing and see if the right applies, and efficiently locate what systems the data needs to be deleted from.
Here are a few very quick tips on how to commence a RoPA project or breathe new life into an outdated spreadsheet you last looked at in 2018!
Who?
No DPO or data protection team can create and maintain these records on their own – they need support from others. Enlist the support of your Senior Leadership Team, as you’ll need them to back you and drive this forward.
Confirm who is, or should be, accountable for business activities which use personal data within all your key business functions – the data owners. For example, Human Resources (employment & recruitment activities), Sales & Marketing (customer/client activities), Procurement (suppliers), Finance, and so on. Data owners are usually best placed to tell you what data they hold and what it’s currently used for, so get them onside.
What?
Make sure you’re capturing all the right information. The detail of what needs to be recorded is slightly different if you act as a controller or processor (or indeed both). If you need to check, take a look at the ICO guidance on documentation.
When?
There’s always some new system, new activity and/or change of supplier, isn’t there? You should aim to update your records whenever you identify new processing or changes to existing processing – including identifying when you need to carry out a Data Protection Impact Assessment or Legitimate Interests Assessment. Good stakeholder relations can really help with this.
In conclusion, record keeping might not win many Oscars, but it really is the cornerstone of data protection compliance. Adequate records, even if not massively detailed, can be really beneficial in so many ways, not just if the ICO (or another Data Protection Authority) comes calling.
There is a distinct subset of personal data which is awarded ‘special’ protection under data protection law. This subset includes information for which people have been persecuted in the past, or suffered unfair treatment or discrimination, and still could be. These special categories of personal data are considered higher risk, and organisations are legally obliged to meet additional requirements when they collect and use it.
Employees need to be aware special category data should only be collected and used with due consideration. Sometimes there will be a clear and obvious purpose for collecting this type of information; such as a travel firm needing health information from customers, or an event organiser requesting accessibility requirements to facilitate people’s attendance. In other situations it will be more nuanced.
Special Categories of Personal Data under UK GDPR (and its EU equivalent) are commonly referred to as special category data, and are defined as personal data revealing:
■ racial or ethnic origin
■ political opinions
■ religious or philosophical beliefs
■ trade union membership
The definition also covers:
■ genetic data
■ biometric data (where used for identification purposes)
■ data concerning health
■ data concerning a person’s sex life or sexual orientation
Sometimes your teams might not realise they’re collecting and using special category data, but they might well be.
If you have inferred or made assumptions based on what you know about someone – for example, that they’re likely to hold certain political opinions, or likely to suffer from a certain health condition – it’s likely you are handling special category data.
There was an interesting ICO investigation into an online retailer which found it was targeting customers who’d bought certain products, assuming from this they were likely to be arthritis sufferers. This assumption meant the retailer was judged to be processing special category data.
If you collect information about dietary requirements these could reveal religious beliefs, for example halal and kosher. It’s also worth noting in 2020 a judge ruled that ethical veganism qualifies as a philosophical belief under the Equality Act 2010.
There’s sometimes confusion surrounding what might be considered ‘sensitive’ data and what constitutes special category data. I hear people say, “Why is financial data not considered as sensitive as health data or ethnic origin?” Of course, people’s financial details are sensitive and organisations still need to make sure they’ve got appropriate measures in place to protect such information and keep it secure. However, UK GDPR (and the EU GDPR) sets out specific requirements for special category data which don’t directly apply to financial data.
To understand why, it’s worth noting special protection for data such as ethnicity, racial origin, religious beliefs and sexual orientation was born in the 1950s, under the European Convention on Human Rights, after Europe had witnessed people being persecuted and killed.
In a similar way to all personal data, any handling of special category data must be lawful, fair and transparent. Organisations need to make sure their collection and use complies with all the core data protection principles and requirements of UK GDPR; for example, purpose limitation, data minimisation, storage limitation, and integrity and confidentiality (security).
What makes special category data unique is that it’s considered higher risk than other types of data, and that you must also choose a special category condition.
Confirm whether you need to conduct a Data Protection Impact Assessment for your planned activities using special category data. DPIAs are mandatory for any type of processing which is likely to be high risk. This means a DPIA is more likely to be needed when handling special category data. That’s not to say it will always be essential; it will depend on the necessity, nature and scale of the processing and your purpose for using this data.
Alongside a lawful basis, there’s an additional requirement to consider your purpose(s) for processing this data and to select a special category condition. These conditions are set out in Article 9, UK GDPR.
(a) Explicit consent
(b) Employment, social security and social protection (if authorised by law)
(c) Vital interests
(d) Not-for-profit bodies
(e) Made public by the data subject
(f) Legal claims or judicial acts
(g) Reasons of substantial public interest (with a basis in law)
(h) Health or social care (with a basis in law)
(i) Public health (with a basis in law)
(j) Archiving, research and statistics (with a basis in law)
Five of the above conditions are solely set out in Article 9. The others require specific authorisation or a basis in law, and you’ll need to meet additional conditions set out in the Data Protection Act 2018.
If you are relying on any of the following conditions – employment, social security and social protection; health or social care; public health; or archiving, research and statistics – you also need to meet the associated condition in UK law, set out in Part 1, Schedule 1 of the DPA 2018.
If you are relying on the substantial public interest condition you also need to meet one of 23 specific substantial public interest conditions set out in Part 2 of Schedule 1 of the DPA 2018.
The ICO tells us for some of these conditions, the substantial public interest element is built in. For others, you need to be able to demonstrate that your specific processing is ‘necessary for reasons of substantial public interest’, on a case-by-case basis. The regulator says we can’t have a vague public interest argument, we must be able to ‘make specific arguments about the concrete wide benefits’ of what we are doing.
Almost all of the substantial public interest conditions, plus the condition for processing employment, social security and social protection data, require you to have an appropriate policy document (APD) in place. The ICO Special Category Guidance includes a template appropriate policy document.
A privacy notice should explain your purposes for processing and the lawful basis being relied on in order to collect and use people’s personal data, including any special category data. Remember, if you’ve received special category data from a third party, this should be transparent and people should be provided with your privacy notice.
You only have to report a breach to the ICO if it is likely to result in a risk to the rights and freedoms of individuals – for example, where, if left unaddressed, the breach is likely to have a significant detrimental effect on them. Special category data is considered higher risk data, so if a breach involves data of this nature it is more likely to reach the bar for reporting. It is also more likely to reach the threshold for needing to notify those affected.
In summary, training and raising awareness are crucial to make sure employees understand what special category data is, how it might be inferred, and to know that collecting and using this type of data must be done with care.
Shakespeare wrote (I hope I remembered this correctly from ‘A’ level English), ‘When sorrows come, they come not single spies but in battalions.’ He could’ve been writing about the UK Conservative Party which, let’s be honest, hasn’t been having a great time recently.
The Telegraph is reporting the party suffered its second data breach in a month. An error with an app led to the personal information of leading Conservative politicians – some in high government office – being available to all app users.
Launched in April, the ‘Share2Win’ app was designed as a quick and easy way for activists to share party content online. However, a design fault meant users could sign up to the app using just an email address. Then, in just a few clicks, they were able to access the names, postcodes and telephone numbers of all other registrants.
This follows another recent Tory Party email blunder in May, where all recipients could see each other’s details. Email data breaches.
In the heat of a General Election, some might put these errors down to ‘yet more Tory incompetence’. I’d say, to quote another famous piece of writing, ‘He that is without sin among you, let him first cast a stone’! There are plenty of examples where other organisations have failed to take appropriate steps to make sure privacy and security are baked into their app’s architecture. And this lack of oversight extends beyond apps to webforms, online portals and more. It’s depressingly common, and easily avoided.
In April, a Housing Association was reprimanded by the ICO after launching an online customer portal which allowed users to access documents (revealing personal data) they shouldn’t have been able to see. These related to, of all things, anti-social behaviour. In March the ICO issued a reprimand to the London Mayor’s Office after users of a webform could click a button and see every other query submitted. And the list goes on. This isn’t a party political issue. It’s a lack of due process and carelessness issue.
It’s easy to see how it happens, especially (such as in a snap election) when there’s a genuine sense of urgency. Some bright spark has a great idea, senior management love it, and demand it’s implemented pronto! Make it happen! Be agile! Be disruptive! (etc).
But there’s a sound reason why the concept of data protection by design and by default is embedded into data protection legislation, and it’s really not that difficult to understand. As the name suggests, data protection by design means baking data protection into business practices from the outset; considering the core data protection principles such as data minimisation and purpose limitation as well as integrity & confidentiality. Crucially, it means not taking short-cuts when it comes to security measures.
GDPR may have its critics, but this element is just common sense. Something most people would get onboard with. A clear and approved procedure for new systems, services and products which covers data protection and security is not a ‘nice to have’ – it’s a ‘must have’. This can go a long way to protect individuals and mitigate the risk of unwelcome headlines further down the line, when an avoidable breach puts your customers’, clients’ or employees’ data at risk.
Should we conduct a DPIA?
A clear procedure can also alert those involved to when a Data Protection Impact Assessment is required. A DPIA is mandatory in certain circumstances where activities are higher risk, but even when not strictly required it’s a handy tool for picking up on any data protection risks and agreeing measures to mitigate them from Day One of your project. Many organisations would also want to make sure there’s oversight by their Information Security or IT team, in the form of an Information Security Assessment for any new applications.
Developers, the IT team and anyone else involved need to be armed with the information they need to make sound decisions. Data protection and information security teams need to work together to develop apps (or other new developments) which aren’t going to become a leaky bucket. Building this in from the start actually saves time too.
In all of this, don’t forget your suppliers. If you want to outsource the development of an app to a third-party supplier, you need to check their credentials and make sure you have necessary controller-to-processor contractual arrangements and assessment procedures in place – especially if once the app goes live, the developer’s team still has access to the personal data it collects. Are your contractors subbing work to other third party subcontractors? Do they work overseas? Will these subcontractors have access to personal data?
The good news? There’s good practice out there. I remember a data protection review DPN conducted a few years back. One of the areas we looked at was an app our client had developed for students to use. It was a pleasure to see how the app had been built with data protection and security at its heart. We couldn’t fault the team who designed it – and as a result the client didn’t compromise their students’ data, face litigation, look foolish or get summoned to see the Information Commissioner!
In conclusion? Yes, be fast. Innovate! Just remember to build your data protection strategy into the project from Day One.
WhatsApp is a great communication tool. Millions use it for chatting with friends, vitally important stuff like sharing cat/dog memes and organising our daily lives. However, what about using messaging apps in a work context? It certainly raises some challenges and data protection concerns.
Inappropriate use of messaging apps can, and has, resulted in serious consequences for both employees and employers. WhatsApp is an excellent example of how technology can blur our private and professional lives. It’s easy to see how it happens – it’s just so darn convenient. Not to mention virtually free.
There have been a number of high-profile cases where WhatsApp messages have led to reputational damage, as well as individuals and organisations being penalised. From police officers and firefighters sending racist, sexist and homophobic content in ‘private’ groups, to politicians and civil servants failing to retain or surrender WhatsApp messages to public inquiries. Aggrieved employees have won damages in tribunal cases for being excluded from work-related group chats. Then there was the famous case of former Health Secretary, Matt Hancock, who handed over thousands of sensitive political messages to a journalist he was working with on his autobiography!
This smorgasbord of drama is before data protection comes into play. 26 members of staff at NHS Lanarkshire used a WhatsApp Group on multiple occasions to share patient data; names, phone numbers, addresses, images, videos and screenshots were shared, including sensitive clinical information. Police officers were caught sharing crime scene images. And so on.
These are egregious examples. In others, however, Gen Z can be cut some slack. They live in an era of fast-moving technology and take instant messaging for granted.
The risks are evident. Employers might have limited control over employees setting up their own WhatsApp groups, which are routinely private and set up on personal mobiles. But left unchecked? They can lead to the sharing of offensive content, confidential or commercially sensitive information, or can be the cause of a personal data breach.
Furthermore, employers have no control over how messages are then shared to any number of recipients beyond the organisation. In fact, employers might not know a group exists until a problem arises. In the wrong hands, messaging apps can be like the world’s leakiest chain email.
Mitigating the risks
In light of the risks, an outright ban on the use of WhatsApp for work-related matters may seem like a good idea, but in practice in many organisations this is unlikely to be enforceable. So what can employers do to mitigate the risks?
The answer probably lies in raising awareness, educating staff and setting clear boundaries. Clear policy guidelines on the use of messaging apps such as WhatsApp can help to prevent something nasty flaring up. In much the same way as you would tell people what is deemed acceptable use for email and the internet in the workplace, you can extend this to WhatsApp. Policy guidelines can clearly set out:
📌 what’s acceptable and unacceptable content
📌 don’t share sensitive company information
📌 don’t share personal information relating to customers, business partners, colleagues and so on
📌 don’t share images of people, especially children or vulnerable people
📌 don’t use WhatsApp to harass or bully other employees
📌 don’t deliberately exclude people from a work-related group chat without a good reason.
📌 the risks & consequences of inappropriate use for those involved
Your policy guidelines can distinguish between different types of group. For example, making it clear a WhatsApp group set up to arrange after-work socialising, be it a sports team or going for drinks, is either work-sanctioned or it isn’t. If it isn’t, the responsibility for the content of the chat lies with the users of that group. A fair, transparent policy is unlikely to be criticised if applied consistently and fairly.
Guidelines can be created with clear examples and case studies which resonate with your staff. There’s no shortage of examples out there – several police officers in the example above were sent to prison. Regularly remind people and consider including an ‘acceptable use of WhatsApp’ input during team training.
Should line managers, as part of their duties, be asked to act as moderators or gatekeepers for such groups? Should the DPO be asked to dip sample them? It might work for some organisations.
You can send a clear warning to staff that a breach of the policy is likely to lead to disciplinary action. You can also warn them that WhatsApp messages can be (and have been!) used in evidence in legal disputes and civil litigation. They might think what they are doing is private, but it might turn out not to be.
Given its huge popularity, there’s little doubt WhatsApp (or similar apps) will continue to be widely used as a simple and cost-effective way of communicating with people in the workplace. But, as with any form of communication, the key is to remain clear, open and transparent about the rules of use to make sure the rights of employees and the data your organisation handles remains protected.
Get DPN updates direct to your inbox. Insight, free resources, guides, events & services from DPN Associates (publishers of DPN). All our emails have an opt-out. For more information see our Privacy Statement.