Google Sandbox launch delayed – what now?

July 2021

A stay of execution for third-party cookies…

When Google originally announced, in late 2019, that it would stop supporting third-party cookies and would instead introduce alternative means of targeting audiences through its Privacy Sandbox, specifically using FLoC (Federated Learning of Cohorts), there were gasps of horror and surprise in the advertising community.

Browsers such as Mozilla's Firefox had already stopped supporting third-party cookies, but they represented a relatively small proportion of all browser users. With Google Chrome accounting for around 65% of users, this was a game-changer and spelt the end of behavioural targeting as we knew it. What would advertisers do without their precious third-party cookies?

What was FLoC?

In a nutshell, FLoC meant that instead of third-party trackers profiling users, the browser itself would do the profiling, creating behavioural segments (cohorts) that could then be shared with websites and advertisers.

The obvious downside to this project was that Google would gain even greater control over the deployment of targeting tools. Plenty of people believed this was anti-competitive, and many said it was an exercise to tighten Google's control over the advertising market even further. There was also a fear that cohort-based targeting could be discriminatory and result in predatory targeting.

Since then, there has been a flurry of activity with publishers and advertisers trying to figure out what alternatives could be used to effectively target new customers in a more privacy-friendly way.

Privacy Sandbox delayed

In late June, Google announced that it had pushed back the deadline for deprecating third-party cookies. Instead of late 2021, the new self-imposed deadline is late 2023.

They also announced they will end the testing of FLoC on July 13th, which suggests they are going back to the drawing board to develop a privacy-friendly targeting solution that could replace FLoC.

The main questions are: why the delay, and what should advertisers do now?

1. The Sandbox solution isn’t ready and doesn’t work
One obvious answer is that the new targeting solution being developed in the Sandbox isn't ready. Whatever has been built hasn't been tested in anger, and there is no guarantee it will work as well as third-party cookies.

2. Google’s FLoC has been blocked
Big players such as Amazon have blocked FLoC. This would have become a serious concern if other large retailers and publishers had followed suit and rendered the solution unusable.
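
For publishers, opting out was done with a single HTTP response header rather than anything exotic. As a rough illustration (using Flask purely as an example server, not something named in the announcements), a site could exclude itself from FLoC cohort calculation like this:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def opt_out_of_floc(response):
    # Ask Chrome to exclude visits to this site from FLoC cohort calculation
    response.headers["Permissions-Policy"] = "interest-cohort=()"
    return response

@app.route("/")
def index():
    return "Hello, world"

if __name__ == "__main__":
    app.run()
```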

3. Google are under pressure from regulators
The most likely answer. Google has allowed itself at least six months to talk to the UK's Competition and Markets Authority (CMA), and is also the subject of other antitrust investigations across Europe.

There is general concern amongst advertisers and ad tech providers that whatever is being planned could be anti-competitive; they believe that removing third-party cookies would undermine publishers and ad tech providers. In addition, whatever is developed has to be privacy-friendly and considered compliant by the ICO.

The new timeline for third-party cookie deprecation starts with the first stage in late 2022 when Chrome will test Privacy Sandbox features and monitor for industry adoption. That stage is expected to last nine months. This is also when the CMA will evaluate Chrome’s cookie changes.

If the Privacy Sandbox features are adopted by publishers and developers, and Google gets the go-ahead from the CMA, then Chrome will move to Stage 2: a three-month period when the browser will phase out third-party cookies.

What should advertisers be doing in the meantime?

This delay is a golden opportunity for advertisers to develop their own strategies for replacing third-party cookies. There has been significant innovation in this area over the last two to three years, although adoption has been relatively slow, with many companies not making much progress. An 18-month to two-year breather will be most welcome.

What are the main options open to advertisers?

1. Build out a larger body of first-party (and zero-party) data
The best and most accurate data is that which you capture directly from your customers. It is freely given and is likely to deliver better response rates and conversions. This could include transaction data and registration data. The recently coined term 'zero-party data' is effectively a subset of first-party data, defined as data that has been freely given by the individual through channels such as preference centres, surveys and other data collection methods. Building a first-party database isn't always easy for advertisers and can take a long time, but there is a clear motivation now to get on with it.

2. Share second-party data
A growing area is the development of pseudonymised data pools, or clean rooms. In this context, advertisers sign a data-sharing agreement with another data owner to share compliantly collected, pseudonymised data to help enhance their own records.
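
As a minimal sketch of what the pseudonymisation step might look like (the secret and field names below are purely illustrative), both parties could derive a stable, keyed pseudonym from a shared identifier such as an email address before any records are matched in the clean room:

```python
import hashlib
import hmac

# Illustrative only: in practice the secret would be agreed and exchanged
# securely between the two data owners under their data-sharing agreement.
SHARED_SECRET = b"replace-with-securely-exchanged-secret"

def pseudonymise(email: str) -> str:
    """Derive a stable pseudonym from an email address using HMAC-SHA256."""
    normalised = email.strip().lower().encode("utf-8")
    return hmac.new(SHARED_SECRET, normalised, hashlib.sha256).hexdigest()

# Both parties apply the same function, so matching customers share a pseudonym
# without either side ever exposing the raw email address to the other.
print(pseudonymise("Jane.Doe@example.com"))
```

Records can then be joined on the pseudonym inside the clean room, while the raw identifiers never leave either organisation.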

3. Adopt contextual advertising
Contextual advertising relies on using your compliantly collected first-party data to create segments and profiles. These can then be used to target new prospects using context, rather than behaviour, as the basis for targeting. These solutions often rely on data remaining on the individual's device until the point when they start to consume relevant content, an approach known as edge computing.

Summary

In summary, a lot is going on in advertising, and the delay in the delivery of the Privacy Sandbox will be most welcome to advertisers. It gives them time to build out their first-party data as well as investigate the wide variety of new cookieless targeting solutions that are springing up. No doubt Google will deliver a compliant solution, but in the meantime advertisers have breathing space to sort out their data strategies.

 

Struggling with data protection? Ease the strain with our no-nonsense advice and support via our flexible Privacy Manager Service. Find out how our experienced team can help you. CONTACT US.

Where next with international data transfers?

April 2021

In July 2020, the Court of Justice of the European Union (CJEU) declared the EU-US Privacy Shield invalid. This was on account of the invasive US surveillance programmes in place, which meant that transfers of personal data made on the basis of the Privacy Shield Decision could no longer be lawful.

At the same time, the Court stipulated stricter requirements for transfers of personal data based on Standard Contractual Clauses (SCCs).

It stated that both controllers and processors must ensure the data subject is granted a level of protection equivalent to that guaranteed by the GDPR and the EU Charter of Fundamental Rights. If this isn't possible, the transfer of personal data should cease.

This came as quite a shock to many organisations. In particular, anyone using software-as-a-service (SaaS) technology solutions had a big problem.

Many of these suppliers are US-based, and the entreaties from Max Schrems and co to buy from the EU didn't really cut much ice. Where were the European equivalents of the most successful SaaS suppliers? Nowhere to be found!

What are SCCs?

The ICO definition is pretty snappy:

SCCs are standard sets of contractual terms and conditions which the sender and the receiver of the personal data both sign up to. They include contractual obligations which help to protect personal data when it leaves the EEA and the protection of GDPR.

These need to be used when you are exporting data to any third country, such as the USA. You do not need to use them if the country has an adequacy agreement with the EU.

What to do after the ruling?

The initial advice was that anyone relying on Privacy Shield should be prepared to sign SCCs. However, signing SCCs isn't entirely plain sailing. The court didn't rule that they were automatically invalid but, instead, ruled that their use needs to be assessed on a case-by-case basis and that it might be necessary to put in place “supplementary measures” to protect the data subject.

What do “supplementary measures” look like?

The main challenge with the US, where the federal government has significant power, was the fear of government surveillance.

Can data be further encrypted? Can data be stored in EU data centres and kept separate from the US data centres? Are these measures sufficient?

The CNIL in France seemed to think so when they ruled that a Covid vaccination booking site (Doctolib) based in France could host its service with the US company Amazon Web Services (AWS) in Luxembourg.

AWS were deemed to have introduced sufficient “supplementary measures” to protect personal data by creating a data silo in Europe which is kept separate from their service in the US.

The new SCCs – what do they look like?

Soon after the court ruling, the EU published its draft version of the updated SCCs, which had been in the pipeline for some time.

This was a happy coincidence, although it's likely these were rushed out once the CJEU judgement was handed down.

The old SCCs were out of date and inflexible, with no provision for processors, so everyone welcomed the fact that more useful SCCs were on their way.

What are the differences?

  • The SCCs are now modular, meaning they can accommodate a number of different scenarios; you pick the parts that relate to your particular situation.
  • The SCCs cover four different transfer scenarios, including processor scenarios:
    • Controller to controller
    • Controller to processor
    • Processor to controller
    • Processor to processor
  • More than two parties can accede to the SCCs, meaning additional controllers and processors can be added through the lifetime of the contract. This potentially reduces the administrative burden.

Once adopted, the new SCCs need to be phased in within 12 months. For large organisations with many contracts, this may be difficult to complete on time.

What about “supplementary measures”?

At the heart of the Schrems II decision was the opinion that the US surveillance regime had excessive powers to access data and therefore presented a risk for data subjects. It was suggested companies need to consider the introduction of “supplementary measures” to protect data subjects:

  • The definition of supplementary measures is covered in guidance from the European Data Protection Board (EDPB), meaning you have to read those recommendations as well as the SCCs themselves.
  • The draft SCCs include the need for the data exporter and the data subject to be notified if a legally binding request has been made to access personal data.
  • The draft SCCs suggest a risk-based assessment of whether such data requests have been made in the past and the likelihood of them happening in the future. This contradicts the EDPB, which does not believe any subjective assessment of risk should be included.

The bottom line is that any data exporter should consider what additional security arrangements need to be made before transferring data to a third country, and that determining those arrangements will, to a large extent, depend on the data protection regime in the recipient country.

How does Brexit affect all of this?

Any country with an adequacy agreement in place with the EU does not need to worry about SCCs. The fact that the UK has been issued with a draft EU adequacy decision is extremely promising news; if adopted, it means a contract with an EU company will not need to include SCCs.

However, there remains the challenge of updating all SCCs for any transfers outside the EU (notably to the US) within the 12-month period once the new SCCs have been adopted. (And UK-based companies are of course still subject to international transfer rules under the UK GDPR.)

What could you do now?

Until the new SCCs and the UK adequacy decision are finalised, companies are in a state of limbo. Having said that, there is plenty that can be done to reduce the risk:

  • Make sure you’ve mapped all the possible data transfers from UK to EU and other third countries
  • Evaluate which data is exported and ask yourself whether it needs to be exported
  • Consider which contracts already have SCCs in place and whether they will need to be updated
  • Ensure your contract due diligence is in place with a detailed questionnaire for potential suppliers
  • Pay particular attention to which jurisdiction data will be stored in and consider the level of risk – has your supplier created data silos?
  • Review whether it’s possible to introduce supplementary measures to protect data. For instance encrypting data to protect it from surveillance
  • Investigate whether there are credible alternatives to US technology partners in the EU
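
To make the encryption point in the list above concrete, here is a minimal sketch (using the Python cryptography library, with an illustrative record) of encrypting personal data before it is exported, so the overseas processor only ever handles ciphertext while the key stays with the EU/UK exporter:

```python
from cryptography.fernet import Fernet

# Illustrative only: the key is generated and held by the data exporter,
# and is never shared with the overseas hosting provider.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"name": "Jane Doe", "email": "jane.doe@example.com"}'

# Encrypt before the data leaves the exporter's environment; the importer
# stores and processes ciphertext only.
ciphertext = fernet.encrypt(record)

# Only the key holder can recover the personal data.
assert fernet.decrypt(ciphertext) == record
```

Whether measures like this are sufficient will still depend on the recipient country's regime and the EDPB recommendations, but they illustrate the sort of technical step a data exporter can take.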

 

Need some advice about handling your business's international transfers, or any other data protection matter? Get in touch – Contact Us

Data Protection Officers – should we appoint a DPO?

August 2020

I’m still regularly asked the question, ‘Do you think we should appoint a DPO?’.

GDPR introduced a requirement for certain organisations to appoint a Data Protection Officer. Their role is to advise the business on data protection requirements and obligations, monitor compliance and act as a contact point for individuals and data protection regulators (such as the ICO).

The role of a DPO has been well documented elsewhere, but what we find seems to cause most confusion are these questions:

  • Does my organisation need to appoint a DPO?
  • If not, would we be well-advised to appoint one anyway?
  • Can I outsource this role?

So, let’s work through these questions.

Which organisations need to appoint a DPO?

The requirements for organisations to appoint a DPO are clearly laid out on the ICO website and by other Supervisory Authorities in other jurisdictions, so we won’t repeat their advice here. But let’s pick out a few key points.

Firstly, you NEED to appoint a DPO if:

  • you are a public authority or body (except for courts acting in their judicial capacity); or
  • your core activities require large scale, regular and systematic monitoring of individuals (for example, online behaviour tracking); or
  • your core activities consist of large-scale processing of special categories of data or data relating to criminal convictions and offences.

You should remember the requirements apply to both controllers and processors.

And if you don’t need to appoint DPO?

Even if your organisation is not obliged to appoint a DPO, you still need to make sure you have ‘sufficient staff and resources to meet the organisation’s obligations under the GDPR’. So, you need to give thought to how you will achieve this. For example who will…

  • train your staff on data protection & privacy?
  • advise your business functions on data protection obligations and good privacy practices?
  • ensure appropriate people (e.g. function heads) are held accountable for the processing conducted by their teams?
  • make sure any new processing, or changes to processing, are properly assessed?
  • create & maintain your Records of Processing Activities (RoPA)?
  • monitor compliance?
  • act as the liaison point for your staff, customers and others whose data you process, and for any queries / complaints from Regulators?

The conclusion some organisations have come to is to appoint a DPO, whether it's strictly necessary or not. Whilst many organisations chose to create a new role, others chose to appoint an existing employee to the role. It's important to take care that their other duties don't conflict with their obligations under the DPO role.

Others have chosen not to appoint a DPO but have made sure there is a person or team in the business responsible for data protection compliance, for example a Privacy Manager.

It is worth noting that if you do appoint a DPO, this is a unique role. The GDPR sets out specific tasks a DPO is responsible for, and the organisation has a duty to support the DPO to help them fulfil these responsibilities (see GDPR Articles 37-39).

For example, the DPO must be independent, an expert in data protection, be adequately resourced and report to the highest management level.

How about an outsourced DPO?

It’s become quite popular to outsource the DPO role to an external supplier – particularly for businesses which are subject to budget pressures, perhaps the result to COVID-19.

Outsourcing can be an efficient lower cost option, particularly for small to medium sized businesses, enabling them to bring in specialist resources on a retainer without the difficulties and expense of recruiting a permanent employee.

There are other benefits of outsourcing this role:

  • Clearly defined job function & boundaries
  • A qualified and experienced person (hopefully!) with access to other supporting resources
  • Independence – prevents the risk of conflict of interests which can plague internal resources, e.g. having to juggle DPO obligations alongside other duties.
  • Support is ‘on-tap’ when you receive a Subject Access Request or if you suffer a data breach
  • Access to third party templates, policies and processes
  • Onsite or remote working.

The best option will very much depend on the size and nature of your business and the types of data you’re processing.

Facial Recognition – should we be concerned?

Facial recognition is clever technology. From selfies to surveillance, its use is becoming far more widespread. Should we be worried? Are the right checks and balances being put in place?

The ICO has just announced a joint investigation with the Australian Information Commissioner into the activities of a company called Clearview. Its facial recognition app allows users to upload a photo of someone and match it to photos collected from the internet. It's reported the app uses a database of more than 3 billion images that have been scraped from various social media platforms and other websites.

Facial recognition technologies are also being trialled by several UK police forces and used in other countries for police and security purposes, such as at airports, other major transport hubs and at large events. For example, the Danish supervisory authority has approved its use to identify football fans who’ve been banned from stadiums.

In some areas it’s definitely being seen as a step too far. Both the French and Swedish supervisory authorities have declared its use when trialled in schools as highly intrusive and out of kilter with data protection law.

So are the right checks and balances in place to protect people whose images are captured?

What’s clear is the accuracy of the algorithms behind this technology are being questioned, as are the privacy implications. Where facial recognition is to be used, there needs to be a strong justification for its use and robust safeguards in place to protect people.

What is facial recognition?

Facial Recognition (FR) is a broad term used to describe technologies that identify, catalogue and track human faces. The technology can be used in many ways, but probably the biggest topic of debate relates to the use of facial images, captured via CCTV or photographs, which are processed to extract biometric identifiers.

These identifiers typically include the unique ratios between an individual's facial features, such as their eyes, nose and mouth. They are matched against an existing database of images and biometric data to identify and track specific individuals.
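
As a rough illustration of that matching step (using the open-source face_recognition Python library and hypothetical image files, not any specific vendor's system), a probe image is reduced to a numeric encoding and compared against encodings of known faces, with a tolerance threshold deciding what counts as a match:

```python
import face_recognition

# Encode the probe image (e.g. a frame captured from CCTV)
probe_image = face_recognition.load_image_file("probe.jpg")
probe_encodings = face_recognition.face_encodings(probe_image)

# Encode a known reference image from a watchlist database
reference_image = face_recognition.load_image_file("watchlist_entry.jpg")
reference_encoding = face_recognition.face_encodings(reference_image)[0]

if probe_encodings:
    # A lower tolerance means fewer false positives but more missed matches
    matches = face_recognition.compare_faces(
        [reference_encoding], probe_encodings[0], tolerance=0.5
    )
    distance = face_recognition.face_distance(
        [reference_encoding], probe_encodings[0]
    )[0]
    print(f"Match: {matches[0]}, distance: {distance:.2f}")
```

The tolerance setting matters: tighten it and you reduce false positives at the cost of missed matches, which is exactly the trade-off discussed below.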

The risk of false positives

The use of facial recognition in real-time, known as Live Facial Recognition (LFR), raises significant privacy and human rights concerns, such as when it is used to match faces to a database for policing and security purposes.

A study of facial recognition technology in the US by the National Institute of Standards and Technology (NIST) discovered that systems were far worse at identifying people of colour than white people. Whilst results were dependent on the algorithms used, NIST found that some facial recognition software produced far higher rates of false positives for Black and Asian people than for white people, by a factor of 10 to 100.

NIST also found the algorithms were worse at identifying women than men. Clearly there are huge concerns to be addressed, brought into sharp focus now with the Black Lives Matter movement.

Interestingly, there was no such dramatic difference in false positives in one-to-one matching between Asian and white faces for algorithms developed in Asia.

How did we get here?

Early in 2018, Amazon began selling a facial recognition AI product to US police departments. “Amazon Rekognition” attracted the condemnation of human rights groups and AI experts, who criticised the product’s high error rate and propensity for mistaking black US Congresspeople for known criminals.

In July 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.

A later article in The Independent in May 2020 revealed that “Rekognition” had incorrectly matched more than 100 photos of politicians in the UK and US to criminals who had previously been arrested by police.

In May 2019, San Francisco became the first US city to ban the use of facial recognition by transport and law enforcement agencies, recognising the threats to civil liberties. Very recently, Boston has followed suit.

However other US cities and other countries around the world have been trialling similar technology. Notably police and security forces are testing live facial recognition systems as a way of identifying criminals and terrorists at major transport hubs, such as airports and stations and at events and other public gatherings.

In June 2020, following ongoing criticism, Amazon announced it would stop supplying police with the technology for a year. Microsoft has said it will stop selling facial recognition technology to police departments until there is more regulation in place. IBM also said it will no longer offer its facial recognition software for “mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values”.

Some have speculated other reasons are at play for the apparent change of heart. If people are going to be wearing face masks for the foreseeable future, facial recognition models could be rendered temporarily useless!

UK policing and security applications

In the UK, the Metropolitan Police, Manchester, Leicester and South Wales Police forces have been trialling LFR over recent months. The Met announced they are using LFR to find suspects on watchlists for serious and violent crime and also to help find children and vulnerable people.

In July 2019, a study by Professor Pete Fussey of the University of Essex, mentioned in the New Statesman, found that matches in the London Metropolitan Police trials were wrong 80% of the time, potentially leading to serious miscarriages of justice.

At the time of writing this article, a High Court ruling that the South Wales Police’s use of facial recognition technology is lawful is being challenged in the Court of Appeal.

Mr Ed Bridges from Cardiff launched a legal challenge to South Wales Police's use of facial recognition after his photo was taken while he was out shopping. Backed by the human rights group Liberty, he argues that the technology breaches human rights laws and is discriminatory. Mr Bridges commented:

“This technology is an intrusive and discriminatory mass surveillance tool and I’m optimistic that the court will agree that it clearly threatens our rights.”

Hang on… how are the reference databases of known facial images compiled?

Can you check if your face is on the database? How can you exercise your right to object? All valid questions which we’ve struggled to find answers to. The findings of the joint UK-Australian investigation into Clearview will be interesting. Clearview have provided FR services to law enforcement agencies around the world, including the FBI and the US Department of Homeland Security.

Privacy concerns over the use of biometric data

Any facial recognition technology capable of uniquely identifying an individual is likely to be processing biometric data (i.e. data which relates to the physical, physiological or behavioural characteristics of a person).

Biometric data falls under the definition of 'special category' data and is subject to strict rules. Therefore, to process it compliantly in the European Union, a lawful basis must be identified AND a condition must also be found in GDPR Article 9 to justify the processing. In the absence of explicit consent from the individual, however, which is not practical in most FR applications, this may be tricky to achieve.

The EDPB guidelines

The European Data Protection Board (EDPB) issued draft guidelines (3/2019) for public consultation on processing of personal data through video devices. The draft guidelines specifically address the use of facial recognition technology. The following threats to privacy were identified:

  • Lack of transparency – an intrusion into the private lives of members of the public who have not consented to, or are not aware of, the collection of their images or the purposes for which they are collected and stored.
  • Misuse – images retrieved may be used for purposes other than those notified or consented to.
  • Accuracy – inherent technological bias within the technology may result in false positive matches or discrimination.
  • Automated decision-making – decisions which may significantly affect individuals may be based solely on the facial recognition software.

So where does all this leave us?

It's clear LFR represents a step change from the previous generation of CCTV. Police forces and other organisations using this technology need to properly assess their compliance with data protection law and guidance.

This includes how police watchlists are compiled, which images are used and for what purpose, what confidence level for recognition of an individual should be applied, and how the potential for false positives will be addressed. The controller must be ready to demonstrate their compliance.

I for one will be watching out intently for the Court of Appeal verdict on Ed Bridges vs South Wales Police and the findings from the UK-Australian investigations.

Update: Use of automated facial recognition by South Wales Police ruled unlawful – August 2020