There is a distinct subset of personal data which is awarded ‘special’ protection under data protection law. This subset covers information for which people have been persecuted, treated unfairly or discriminated against in the past – and still could be. These special categories of personal data are considered higher risk, and organisations are legally obliged to meet additional requirements when they collect and use such data.
Employees need to be aware that special category data should only be collected and used with due consideration. Sometimes there will be a clear and obvious purpose for collecting this type of information, such as a travel firm needing health information from customers, or an event organiser requesting accessibility requirements to facilitate people’s attendance. In other situations it will be more nuanced.
Special Categories of Personal Data under UK GDPR (and its EU equivalent) are commonly referred to as special category data, and are defined as personal data revealing:
The definition also covers:
Sometimes your teams might not realise they’re collecting and using special category data, but they might well be.
If you have inferred or made assumptions based on what you know about someone – for example, that they’re likely to hold certain political opinions, or likely to suffer from a certain health condition – it’s likely you are handling special category data.
There was an interesting ICO investigation into an online retailer which found it was targeting customers who’d bought certain products, assuming from this they were likely to be arthritis sufferers. This assumption meant the retailer was judged to be processing special category data.
If you collect information about dietary requirements, these could reveal religious beliefs; halal and kosher, for example. It’s also worth noting that in 2020 a judge ruled that ethical veganism qualifies as a philosophical belief under the Equality Act 2010.
There’s sometimes confusion surrounding what might be considered ‘sensitive’ data and what constitutes special category data. I hear people say, “why is financial data not considered as sensitive as health data or ethnic origin?” Of course, people’s financial details are sensitive, and organisations still need to make sure they’ve got appropriate measures in place to protect such information and keep it secure. However, UK GDPR (and the EU GDPR) sets out specific requirements for special category data which don’t directly apply to financial data.
To understand why, it’s worth noting special protection for data such as ethnicity, racial origin, religious beliefs and sexual orientation was born in the 1950s, under the European Convention on Human Rights, after Europe had witnessed people being persecuted and killed.
In a similar way to all personal data, any handling of special category data must be lawful, fair and transparent. Organisations need to make sure their collection and use complies with all the core data protection principles and requirements of UK GDPR. For example:
What makes special category data unique is that it’s considered higher risk than other types of data, and it also requires you to choose a special category condition.
Confirm whether you need to conduct a Data Protection Impact Assessment for your planned activities using special category data. DPIAs are mandatory for any type of processing which is likely to be high risk. This means a DPIA is more likely to be needed when handling special category data. That’s not to say it will always be essential; it will depend on the necessity, nature and scale of the processing, and your purpose for using this data.
Alongside a lawful basis, there’s an additional requirement to consider your purpose(s) for processing this data and to select a special category condition. These conditions are set out in Article 9, UK GDPR.
(a) Explicit consent
(b) Employment, social security and social protection (if authorised by law)
(c) Vital interests
(d) Not-for-profit bodies
(e) Made public by the data subject
(f) Legal claims or judicial acts
(g) Reasons of substantial public interest (with a basis in law)
(h) Health or social care (with a basis in law)
(i) Public health (with a basis in law)
(j) Archiving, research and statistics (with a basis in law)
Five of the above conditions are solely set out in Article 9. The others require specific authorisation or a basis in law, and you’ll need to meet additional conditions set out in the Data Protection Act 2018.
If you are relying on any of the following you also need to meet the associated condition in UK law. This is set out in Part 1, Schedule 1 of the DPA 2018.
If you are relying on the substantial public interest condition you also need to meet one of 23 specific substantial public interest conditions set out in Part 2 of Schedule 1 of the DPA 2018.
The ICO tells us that for some of these conditions, the substantial public interest element is built in. For others, you need to be able to demonstrate that your specific processing is ‘necessary for reasons of substantial public interest’ on a case-by-case basis. The regulator says we can’t have a vague public interest argument; we must be able to ‘make specific arguments about the concrete wider benefits’ of what we are doing.
Almost all of the substantial public interest conditions, plus the condition for processing employment, social security and social protection data, require you to have an appropriate policy document (APD) in place. The ICO Special Category Guidance includes a template appropriate policy document.
A privacy notice should explain your purposes for processing and the lawful basis being relied on in order to collect and use people’s personal data, including any special category data. Remember, if you’ve received special category data from a third party, this should be transparent and people should be provided with your privacy notice.
You only have to report a breach to the ICO if it is likely to result in a risk to the rights and freedoms of individuals – that is, if left unaddressed, the breach is likely to have a significant detrimental effect on them. Special category data is considered higher risk, so if a breach involves data of this nature it is more likely to reach the bar for reporting. It is also more likely to reach the threshold of needing to notify those affected.
In summary, training and raising awareness are crucial to make sure employees understand what special category data is, how it might be inferred, and to know that collecting and using this type of data must be done with care.
Shakespeare wrote (I hope I remembered this correctly from ‘A’ level English), ‘When sorrows come, they come not single spies but in battalions.’ He could’ve been writing about the UK Conservative Party which, let’s be honest, hasn’t been having a great time recently.
The Telegraph is reporting the party suffered its second data breach in a month. An error with an app led to the personal information of leading Conservative politicians – some in high government office – being available to all app users.
Launched in April, the ‘Share2Win’ app was designed as a quick and easy way for activists to share party content online. However, a design fault meant users could sign up to the app using just an email address. Then, in just a few clicks, they were able to access the names, postcodes and telephone numbers of all other registrants.
This follows another recent Tory Party email blunder in May, where all recipients could see each other’s details – a classic email data breach.
In the heat of a General Election, some might put these errors down to ‘yet more Tory incompetence’. I’d say, to quote another famous piece of writing, ‘He that is without sin among you, let him first cast a stone’! There are plenty of examples where other organisations have failed to take appropriate steps to make sure privacy and security are baked into their app’s architecture. And this lack of oversight extends beyond apps to webforms, online portals and more. It’s depressingly common, and easily avoided.
In April, a housing association was reprimanded by the ICO after launching an online customer portal which allowed users to access documents (revealing personal data) they shouldn’t have been able to see. These related to, of all things, anti-social behaviour. In March the ICO issued a reprimand to the London Mayor’s Office after users of a webform could click a button and see every other query submitted. And the list goes on. This isn’t a party political issue. It’s a lack of due process and carelessness issue.
It’s easy to see how it happens, especially (such as in a snap election) when there’s a genuine sense of urgency. Some bright spark has a great idea, senior management love it, and demand it’s implemented pronto! Make it happen! Be agile! Be disruptive! (etc).
But there’s a sound reason why the concept of data protection by design and by default is embedded into data protection legislation, and it’s really not that difficult to understand. As the name suggests, data protection by design means baking data protection into business practices from the outset; considering the core data protection principles such as data minimisation and purpose limitation, as well as integrity & confidentiality. Crucially, it means not taking short-cuts when it comes to security measures.
GDPR may have its critics, but this element is just common sense – something most people would get on board with. A clear and approved procedure for new systems, services and products which covers data protection and security is not a ‘nice to have’ – it’s a ‘must have’. This can go a long way to protect individuals and mitigate the risk of unwelcome headlines further down the line, when an avoidable breach puts your customers’, clients’ or employees’ data at risk.
Should we conduct a DPIA?
A clear procedure can also alert those involved to when a Data Protection Impact Assessment is required. A DPIA is mandatory in certain circumstances where activities are higher risk, but even when not strictly required it’s a handy tool for picking up on any data protection risks and agreeing measures to mitigate them from Day One of your project. Many organisations would also want to make sure there’s oversight by their Information Security or IT team, in the form of an Information Security Assessment for any new applications.
Developers, the IT team and anyone else involved need to be armed with the information they need to make sound decisions. Data protection and information security teams need to work together to develop apps (or other new developments) which aren’t going to become a leaky bucket. Building this in from the start actually saves time too.
In all of this, don’t forget your suppliers. If you want to outsource the development of an app to a third-party supplier, you need to check their credentials and make sure you have the necessary controller-to-processor contractual arrangements and assessment procedures in place – especially if, once the app goes live, the developer’s team still has access to the personal data it collects. Are your contractors subcontracting work to other third parties? Do they work overseas? Will these subcontractors have access to personal data?
The good news? There’s good practice out there. I remember a data protection review DPN conducted a few years back. One of the areas we looked at was an app our client had developed for students to use. It was a pleasure to see how the app had been built with data protection and security at its heart. We couldn’t fault the team who designed it – and as a result the client didn’t compromise their students, face litigation, look foolish or get summoned to see the Information Commissioner!
In conclusion? Yes, be fast. Innovate! Just remember to build your data protection strategy into the project from Day One.
WhatsApp is a great communication tool. Millions use it for chatting with friends, vitally important stuff like sharing cat/dog memes and organising our daily lives. However, what about using messaging apps in a work context? It certainly raises some challenges and data protection concerns.
Inappropriate use of messaging apps can, and has, resulted in serious consequences for both employees and employers. WhatsApp is an excellent example of how technology can blur our private and professional lives. It’s easy to see how it happens – it’s just so darn convenient. Not to mention virtually free.
There have been a number of high-profile cases where WhatsApp messages have led to reputational damage, as well as individuals and organisations being penalised. From police officers and firefighters sending racist, sexist and homophobic content in ‘private’ groups, to politicians and civil servants failing to retain or surrender WhatsApp messages to public inquiries. Aggrieved employees have won damages in tribunal cases for being excluded from work-related group chats. Then there was the famous case of former Health Secretary, Matt Hancock, who handed over thousands of sensitive political messages to a journalist he was working with on his autobiography!
This smorgasbord of drama is all before data protection comes into play. Twenty-six members of staff at NHS Lanarkshire used a WhatsApp group on multiple occasions to share patient data: names, phone numbers, addresses, images, videos and screenshots were shared, including sensitive clinical information. Police officers were caught sharing crime scene images. And so on.
These are egregious examples. In other cases, however, people – Gen Z in particular – can be cut some slack. They live in an era of fast-moving technology and take instant messaging for granted.
The risks are evident. Employers might have limited control over employees setting up their own WhatsApp groups, which are routinely private and set up on personal mobiles. But left unchecked? They can lead to the sharing of offensive content or confidential or commercially sensitive information, or can be the cause of a personal data breach.
Furthermore, employers have no control over how messages are then shared to any number of recipients beyond the organisation. In fact, employers might not know a group exists until a problem arises. In the wrong hands, messaging apps can be like the world’s leakiest chain email.
Mitigating the risks
In light of the risks, an outright ban on the use of WhatsApp for work-related matters may seem like a good idea, but in practice in many organisations this is unlikely to be enforceable. So what can employers do to mitigate the risks?
The answer probably lies in raising awareness, educating staff and setting clear boundaries. Clear policy guidelines on the use of messaging apps such as WhatsApp can help to prevent something nasty flaring up. In much the same way as you would tell people what is deemed acceptable use of email and the internet in the workplace, you can extend this to WhatsApp. Policy guidelines can clearly set out:
📌 what’s acceptable and unacceptable content
📌 don’t share sensitive company information
📌 don’t share personal information relating to customers, business partners, colleagues and so on
📌 don’t share images of people, especially children or vulnerable people
📌 don’t use WhatsApp to harass or bully other employees
📌 don’t deliberately exclude people from a work-related group chat without a good reason
📌 the risks & consequences of inappropriate use for those involved
Your policy guidelines can distinguish between different types of group. For example, making it clear a WhatsApp group set up to arrange after-work socialising, be it a sports team or going for drinks, is either work-sanctioned or it isn’t. If it isn’t, the responsibility for the content of the chat lies with the users of that group. A fair, transparent policy is unlikely to be criticised if applied consistently and fairly.
Guidelines can be created with clear examples and case studies which resonate with your staff. There’s no shortage of examples out there – several police officers in the example above were sent to prison. Regularly remind people, and consider including an ‘acceptable use of WhatsApp’ element in team training.
Should line managers, as part of their duties, be asked to act as moderators or gatekeepers for such groups? Should the DPO be asked to dip sample them? It might work for some organisations.
You can send a clear warning to staff that a breach of the policy is likely to lead to disciplinary action. You can also warn them that WhatsApp messages can be (and have been!) used in evidence in legal disputes and civil litigation. They might think what they are doing is private, but it might turn out not to be.
Given its huge popularity, there’s little doubt WhatsApp (or similar apps) will continue to be widely used as a simple and cost-effective way of communicating with people in the workplace. But, as with any form of communication, the key is to remain clear, open and transparent about the rules of use to make sure the rights of employees and the data your organisation handles remains protected.
Clearing out personal data your business no longer needs is a really simple concept, but in practice it can be rather tricky to achieve! It throws up key considerations such as whether to anonymise, or how to make sure data is deleted or securely destroyed. Let’s take a look at the key considerations and how to implement a robust plan.
Data protection law stipulates organisations must only keep personal data as long as necessary and only for the purposes they have specified. There are risks associated with both keeping personal data too long, or not keeping it long enough. These risks include, but are not limited to:
To manage this legal obligation successfully, you’ll need to start with an up-to-date data retention policy and schedule. These should clearly identify which types of personal data your business processes, for what purposes, how long each should typically be kept and under what circumstances you might need to hold it for longer.
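To make the idea concrete, here’s a minimal sketch in Python of how a retention schedule might be expressed and checked programmatically. The record types and periods are purely illustrative – your own schedule must reflect your legal obligations and business purposes.

```python
from datetime import date, timedelta

# Illustrative retention schedule: record type -> how long it's kept.
# These periods are examples only, not recommendations.
RETENTION_SCHEDULE = {
    "job_applications": timedelta(days=180),
    "customer_orders": timedelta(days=365 * 6),    # e.g. retained for tax purposes
    "marketing_consents": timedelta(days=365 * 2),
}

def is_due_for_disposal(record_type: str, created: date, today: date) -> bool:
    """Return True once a record has exceeded its agreed retention period."""
    return today - created > RETENTION_SCHEDULE[record_type]

# A job application from 10 January 2024 is past its 180-day period by September.
print(is_due_for_disposal("job_applications", date(2024, 1, 10), date(2024, 9, 1)))  # True
```

Even a simple check like this only works if the schedule itself is accurate and kept up to date – which is why the policy and schedule come first.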
If your data retention policy or schedule is lacking, first focus on making sure these are brought up to scratch. Our Data Retention Guidance has some useful templates.
When an agreed retention period is reached (as per your retention schedule), we’d recommend taking the following steps:
There are different approaches an organisation can take when the data retention period is reached, such as:
Deletion of records might seem the obvious choice, and it’s often the best one too, but take care how you delete data. Sometimes deleting whole records can affect key processes on your systems such as reporting, algorithms and other programs. Check with your IT colleagues first.
Most organisations want to extract ever more information and value from their digital assets. In some situations, it can be helpful to remove any personal identifiers so you can keep the data that remains after the retention period has been reached. For example:
To be clear, anonymisation is the process of removing ALL information which could be used to identify a living person, so the data that remains can no longer be attributed back to any unique individuals.
Once these personal identifiers are deleted, data protection laws do not apply to the anonymised information that remains, so you may continue to hold it. But you have to make sure it is truly anonymised.
The ICO stresses you should be careful when attempting to anonymise information. For the information to be truly anonymised, you must not be able to re-identify individuals. If at any point reasonably available means could be used to re-identify the individuals, the data will not have been effectively anonymised, but will have merely been pseudonymised. This means it should still be treated as personal data.
Whilst pseudonymising data does reduce the risks to data subjects, in the context of retention it is not sufficient for personal data you no longer need to keep.
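To illustrate the distinction, here’s a minimal Python sketch (all names and values are hypothetical) showing why keyed pseudonymisation is not anonymisation: anyone holding the key can regenerate a token from a known identity and re-link the data.

```python
import hashlib
import hmac

# Hypothetical key: whoever holds this can re-create tokens from known
# identities, which is exactly why pseudonymised data is still personal data.
SECRET_KEY = b"store-me-separately-and-securely"

def pseudonymise(identifier: str) -> str:
    """Swap an identifier (e.g. an email address) for a keyed token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_band": "35-44", "region": "North West"}

# Pseudonymised: the identifier is replaced, but remains re-linkable via the key.
pseudonymised = {**record, "email": pseudonymise(record["email"])}

# Anonymised (in intent): the identifier is removed outright. Whether this is
# truly anonymous still depends on whether the remaining fields could single
# someone out in combination.
anonymised = {k: v for k, v in record.items() if k != "email"}

print(pseudonymised)
print(anonymised)
```

The pseudonymised record therefore remains personal data; only the version with identifiers genuinely removed has a chance of being anonymous – and even then, only if the remaining fields can’t single anyone out in combination.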
There are software methods of deleting data, which may involve removing whole records from a dataset or overwriting them – for example, using zeros and ones to overwrite the personal identifiers in the data.
Once the personal identifiers are overwritten, that data will be rendered unrecoverable, and therefore it’s no longer classed as personal data.
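As a simple illustration of the overwriting approach – a sketch only, using an in-memory SQLite table with made-up fields – identifier columns can be overwritten in place while non-identifying data survives for reporting:

```python
import sqlite3

# Sketch: overwrite identifier columns in place rather than deleting whole
# rows, so non-identifying data survives for reporting and other processes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT, order_total REAL)")
conn.execute("INSERT INTO customers VALUES (1, 'Jane Doe', 'jane@example.com', 49.99)")

# Overwrite the personal identifiers with filler values.
conn.execute("UPDATE customers SET name = '0', email = '0' WHERE id = 1")
print(conn.execute("SELECT * FROM customers").fetchall())
# [(1, '0', '0', 49.99)]
```

One caveat worth noting: overwriting values at the application or database level doesn’t necessarily scrub the underlying storage media, so secure deletion at the disk level is a separate consideration.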
This deletion process should include backup copies of data. Whilst personal data may be instantly deleted from live systems, it may still remain within the backup environment until it is overwritten.
If the backup data cannot be immediately overwritten it must be put ‘beyond use’, i.e. you must make sure the data is not used for any other purpose and is simply held on your systems until it’s replaced, in line with an established schedule.
Examples of where data may be put ‘beyond use’ are:
The ICO (for example) will be satisfied that information is ‘beyond use’ if the data controller:
Destruction is the final action for around 95% of a typical organisation’s physical records. Physical destruction may include shredding, pulping or burning paper records.
Destruction is likely to be the best course of action for physical records when the organisation no longer needs to keep the data, and when it does not need to hold data in an anonymised format.
Controllers are accountable for the way personal data is processed and consequently, the disposal decision should be documented in a disposal schedule.
Many organisations use other organisations to manage the disposal or destruction of their physical records. There are benefits to using third parties, such as reducing in-house storage costs.
Remember, third parties providing this kind of service will be regarded as a data processor, therefore you’ll need to make sure an appropriate contract is in place which includes the usual data protection clauses.
Destruction may be carried out remotely following an agreed process. For instance, a processor might provide regular notifications of batches due to be destroyed in line with documented retention periods.
Retention periods will also apply to unstructured data which contains personal identifiers. The most common examples are electronic communications records, such as emails, instant messages, call recordings and so on.
As you can imagine, unstructured data records present some real challenges. You’ll need to be able to review the records to find any personal data stored there, so it can be deleted in line with your retention schedules, or for an erasure request.
Depending on the size of your organisation, you may need to use specialist software tools to perform content analysis of unstructured data.
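As a toy illustration of what such content analysis involves – a deliberately simplified sketch, since real discovery tools use far richer detection than regular expressions – here’s how text might be scanned for likely personal identifiers:

```python
import re

# Toy patterns a content analysis tool might use to flag text that may
# contain personal data. The patterns here are illustrative, not exhaustive.
PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b(?:0|\+44)\d{9,10}\b"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def flag_possible_personal_data(text: str) -> list[str]:
    """Return the categories of likely personal identifiers found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

message = "Hi, please call Mrs Smith on 07700900123 about her move to SW1A 1AA."
print(flag_possible_personal_data(message))  # ['uk_phone', 'uk_postcode']
```

Flagged items would then feed into your review, deletion or erasure-request workflows.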
In summary, whilst data retention as a concept appears straightforward, it does require some planning, clearly assigned responsibilities for implementing retention periods, and the technical means to do so effectively.
Creating a clear data governance strategy is crucial to making sure data is handled in line with your organisation’s aims and industry best practice.
Data governance is often thought of as the management process by which an organisation protects its data assets and ensures compliance with data laws, such as GDPR. But it’s far broader than compliance. It’s a holistic approach to data and should have people at its very heart. People with defined roles, responsibilities, processes and technologies which help them make sure data (not just personal data) is properly looked after and wisely used throughout its lifecycle.
How sophisticated your organisation’s approach needs to be will depend on the nature and size of your business, the sensitivity of the data you hold, the relationships you have with business partners, and customer or client expectations.
There are many benefits this activity can bring, including:
A strong data governance approach can also help an organisation to make the most of their data assets, improve customer experience and benefits, and leverage competitive advantage.
There are three foundational elements which underpin successful data governance – People, Processes and Technologies.
Engaging with stakeholders across the organisation to establish and embed key roles and responsibilities for data governance.
Many organisations look to establish a ‘Data Ownership Model’ which recognises data governance is an organisational responsibility which requires close collaboration across different roles and levels, including the delegation of specific responsibilities for data activities.
Here are some examples of roles you may wish to consider:
Think about all the processes, policies, operating procedures and specialist training provided to guide your employees and contractors, enabling them to handle data in line with your business expectations – as well as to comply with the law. For example:
Without these in place and regularly updated, your people can’t possibly act in the ways you want and expect them to.
In my experience, success comes from keeping these items concise, and as relevant and engaging as possible. They can easily be forgotten or put in the ‘maybe later’ pile… a little time and effort can really pay dividends!
The technologies which underpin all data activities across the data lifecycle. For example, your HR, marketing & CRM, accounting and other operational systems you use regularly. Data governance requires those responsible for adopting technologies to make sure standards and procedures are in place which ensure appropriate:
Looking at privacy technology in particular, the solutions available have really progressed in recent years in terms of both their capability and ease of use. They give DPOs and others with an interest in data protection clear visibility of where the risks lie, help to prioritise those risks, and point to relevant solutions. They can also provide clear visibility and oversight for the senior leadership team.
Data governance goes hand in hand with accountability – one of the core principles under GDPR. This requires organisations to be ready to demonstrate the measures and controls they have to protect personal data and in particular, show HOW they comply with the other data protection principles.
Appropriate measures, controls and records need to be in place to evidence accountability. For example, a Supervisory Authority (such as the ICO) may expect organisations to have:
Ready to get started?
If you’re keen to reap the benefits of improved compliance and reduced risk to the business, the first and crucial step is getting buy-in from senior leadership and a commitment from key stakeholders, so I’d suggest you kick-off by seeking their support.
Under EU and UK data protection law businesses need to make sure they have ‘appropriate technical and organisational measures’ in place to protect personal data. Organisational measures include making sure staff receive adequate data protection training and guidance about how they should handle personal data.
In my experience, people are keen to ‘do the right thing’ with personal data, but are sometimes unsure how to go about it.
This is where well-crafted policies can really help, sitting alongside and integrated with employee training. Unfortunately, people often have a negative view of policies. Long-winded policies, full of impenetrable jargon that regurgitates the law, can turn people off.
A vanilla one-size-fits-all approach has little value… but there’s a much better way. A well-written, easy-to-read, concise policy can communicate ‘what good looks like’ for your business and explain how your people should behave to deliver good practice.
Yes, you absolutely need to take into account what the law says. A policy should identify key risk areas, but crucially it should also tell your people how they should act to meet your company standards – which include legal compliance.
Don’t shy away from stressing the benefits for your business of acting responsibly. Focus on the needs of your business sector and the unique nature of your business’s processing.
Make policies relevant to your workforce and how your business operates. Even better, if you can, tie in the launch of improved data policies with data protection training which shares the main themes from the policies. This can really bring them to life, improve awareness and reinforce positive behaviours.
First decide which policies you actually need and how they should fit together. My favoured approach is to have just two ‘parent’ data policies, a Data Protection Policy and an Information Security Policy, then link out to ‘child’ policies or procedures which sit below them.
You might consider a third parent policy, such as Acceptable Use, but personally I prefer information about acceptable use to be included within the Data Protection and Information Security policies, so people don’t have to search around.
Here’s a typical Policy Framework, showing the two ‘parent’ policies and examples of possible ‘child’ policies or procedures below.
The range of policies you’ll need will vary from business to business. A small company, with a handful of employees, processing relatively less sensitive data won’t need a raft of policies.
Many micro or small businesses may just focus on having a Data Protection Policy (which covers the data lifecycle from creation through to retention) and an Information Security Policy. Alongside these you’ll definitely need a clear procedure for handling data breaches and individual privacy rights.
As said, too often policy documents are littered with legalese and jargon. Sometimes it feels like a policy has to be formal and massively detailed. Not true. People shouldn’t need a lot of specialist knowledge to understand your policies, particularly those aimed at ALL staff. Straightforward instructions are more likely to be read, which means more people are likely to follow them.
Take a look at the way your policies are written. Are they a bit dry? If they could do with freshening up, here are some simple do’s and don’ts to consider:
Do’s
Don’ts
Of course, balance is important. While overly complex policies will gather dust, we need to include enough useful and important information to get key messages across. We’re not talking about talking down to people or patronising them, either.
Of course, we also need to make sure people are aware of relevant policies and can easily lay their hands on them.
I’d recommend you host policies on your Intranet, if you have one, and create them in the form of web pages rather than PDFs. It’s good practice to include hyperlinks to and from topic-specific guidance notes, so people can easily navigate to find more about a specific topic. This helps you to keep the parent policies short and concise – easy to digest.
When you carry out data protection training, remind people where to find related policies. In fact throughout the year use near-misses, news stories and other events to reinforce key messages and point to your policies.
Well-crafted, easy-to-digest data protection policies will go a long way to guide staff on how you expect them to handle and keep personal data secure in their day-to-day roles. But as always, proportionality is key: a smaller business handling fairly insensitive data wouldn’t be expected to have multiple policies.
The recent spate of serious data breaches, not least the awful case involving the Police Service of Northern Ireland (PSNI), left me wondering: who’s really to blame? We’re used to hearing about human error, but is it too easy to point the finger?
Is it really the fault of the person who pressed the send button? An old adage comes to mind, ‘success has a thousand fathers, failure is an orphan.’
Of course, people make mistakes. Training, technology and procedures can easily fail if ignored, either wilfully or otherwise. Yes, people are part of the equation. But that’s what it is. An equation. There are usually other factors at play.
In the PSNI case – one involving safety-critical data – I’d argue any system allowing such unredacted material to enter an FOIA environment in the first place is flawed.
Nobody is immune from human error. About nine years ago, on my second day in a new compliance role, I left my rucksack on the train. Doh! Luckily, there was no personal data relating to my new employer inside. I lost my workplace starter pack and had to cancel my debit card. I recall the sinking feeling as my new boss said, ‘well, that’s a bit embarrassing for someone in your job’. It was. But I knew it could have been so much worse.
Approximately 80% of data breaches are classified by the Information Commissioner’s Office as being caused by human error. Common mistakes include:
However, sometimes I hear about human error breaches and don’t think ‘how did someone accidentally do that?’ Instead, I wonder…
I could go on.
Rather than human error, should we be blaming a lack of appropriate technical and organisational measures (TOMs) to protect personal data? A fundamental data protection requirement.
We all know robust procedures and security measures can mitigate the risk of human error. A simple example – I know employees who receive an alert if they’re about to send an attachment containing personal data without a password.
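Here’s a minimal sketch of that kind of pre-send check – hypothetical logic in Python; real data loss prevention tools hook into the mail client and use far richer detection:

```python
import re

# Hypothetical pre-send check: warn if an outgoing attachment looks like it
# contains personal data (here, crudely, email addresses or dates of birth)
# but hasn't been password-protected.
PERSONAL_DATA = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+|\b\d{2}/\d{2}/\d{4}\b")

def should_warn(attachment_text: str, password_protected: bool) -> bool:
    """Alert when likely personal data is present and no password is set."""
    return bool(PERSONAL_DATA.search(attachment_text)) and not password_protected

if should_warn("Name: Jane Doe, DOB 01/02/1990, jane@example.com", password_protected=False):
    print("Warning: this attachment may contain personal data. Add a password?")
```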
Alongside this, data protection training is a must, but it should never be a ‘tick box’ exercise. It shouldn’t be a case of annual online training module completed; no further action required! We need to make sure training is relevant and effective and delivers key learning points and messages. Training should be reinforced with regular awareness campaigns. Using mistakes (big or small) as case studies is a good way to keep people alert to the risks. This is another reason why post-event investigation is so important as a lesson-learning exercise.
Rather than being a liability, if we arm people with enough knowledge they can become our greatest asset in preventing data breaches.
Chatting with my husband about this, he mentioned a boss once asking him to provide some highly sensitive information on a spreadsheet. Despite the seniority and insistence of the individual, my husband refused. He offered an alternative solution, with protecting people’s data at heart. Armed with enough knowledge, he knew what he had been asked to do was foolhardy.
It’s too early to say precisely what led to these recent breaches:
However, we can learn from previous breaches and the findings of previous ICO investigations.
You may recall the case of Heathrow Airport’s lost unencrypted memory stick. Although ostensibly a case of human error, the ICO established the Airport failed not only ‘to ensure that the personal data held on its network was properly secured’, but also failed to provide sufficient training in relation to data protection and information security. The person blamed for the breach was unaware the memory stick should have been encrypted in the first place.
Then there was the Cabinet Office breach in which people’s home addresses were published in the New Year’s Honours list. The person who actually published the list must’ve had a nightmare when they realised what had happened. But the ICO findings revealed a new IT system had been rushed in and set up incorrectly. The procedure given for people to follow was incorrect. A tight deadline meant short-cuts were taken. The Cabinet Office was found to have been complacent.
The lesson here? Data breaches aren’t always solely the fault of the person pressing the ‘send’ button. Too often, systems and procedures have already failed. Data protection is a mindset. A culture. Not an add-on. As the PSNI has sadly discovered, in the most awful of circumstances.
The impact breaches can have on employees, customers, victims of crime, patients and so on, can be devastating. Just the knowledge that their data is ‘out there’ can cause distress and worry.
Data protection law doesn’t spell out precisely what businesses must do. To know where data protection risks lie, we need to know what personal data we have across the business and what it’s being used for. Risks need to be assessed and managed. And the measures put in place need to be proportionate to the risk.