Interfering with the fundamental rights of practically the entire European population

In the Digital Rights Ireland case, the Grand Chamber of the CJEU has this week declared invalid the 2006 Directive which provides for the mass retention – and disclosure to policing and security authorities – of individuals’ online traffic data. It found this regime to be a disproportionate interference with privacy rights. Depending on your perspective, this is a major step forward for digital privacy, or a major step backwards in countering terrorism and serious crime. It probably introduces even more uncertainty in terms of the wider project of data protection reform at the EU level. Here is my synopsis of this week’s Grand Chamber judgment.

Digital privacy vs national security: a brief history

There is an overlapping mesh of rights under European law which aims to protect citizens’ rights with respect to their personal data – an increasingly important strand of the broader right to privacy. The Data Protection Directive (95/46/EC) was passed in 1995, when the internet was in its infancy. It provides that personal data must be processed (obtained, held, used, disclosed) fairly and lawfully, securely, for legitimate purposes and so on.

Then, as the web began to mature into a fundamental aspect of everyday life, a supplementary Directive was passed in 2002 (2002/58/EC) on privacy and electronic communications. It is about privacy, confidentiality and the free movement of electronic personal data in particular.

In the first decade of the 21st century, however, security objectives became increasingly urgent. Following the London bombings of 2005 in particular, the monitoring of would-be criminals’ web activity was felt to be vital to effective counter-terrorism and law enforcement. The digital confidentiality agenda needed to make space for a measure of state surveillance.

This is how Directive 2006/24 came to be. In a nutshell, it provides for traffic and location data (rather than content-related information) about individuals’ online activity to be retained by communications providers and made available to policing and security bodies. This data was to be held for a minimum of six months and a maximum of 24 months.

That Directive – like all others – is however subject to the EU’s Charter of Fundamental Rights. Article 7 of that Charter enshrines the right to respect for one’s private and family life, home and communications. Article 8 is about the right to the protection and fair processing of one’s personal data.

Privacy and Digital Rights Ireland prevail

Digital Rights Ireland took the view that the 2006 Directive was not compatible with those fundamental rights. It asked the Irish Courts to refer the question to the CJEU. Similar references were made in separate litigation before the Austrian Courts.

The CJEU gave its answer this week. In Digital Rights Ireland Ltd v Minister for Communications, Marine and Natural Resources and Others (C‑293/12) joined with Kärntner Landesregierung and Others (C‑594/12), the Grand Chamber held the 2006 Directive to be invalid on the grounds of its incompatibility with fundamental privacy rights.

The Grand Chamber accepted that, while privacy rights were interfered with, this was in pursuit of compelling social objectives (the combatting of terrorism and serious crime). The question was one of proportionality. Given that fundamental rights were being interfered with, the Courts would allow the European legislature little leeway: anxious scrutiny would be applied.

Here, in no particular order, are some of the reasons why the 2006 Directive failed its anxious scrutiny test (quotations are all from the Grand Chamber’s judgment). Unsurprisingly, this reads rather like a privacy impact assessment which data controllers are habitually called upon to conduct.

The seriousness of the privacy impact

First, consider the nature of the data which, under Articles 3 and 5 of the 2006 Directive, must be retained and made available. “Those data make it possible, in particular, to know the identity of the person with whom a subscriber or registered user has communicated and by what means, and to identify the time of the communication as well as the place from which that communication took place. They also make it possible to know the frequency of the communications of the subscriber or registered user with certain persons during a given period.”

This makes for a serious incursion into privacy: “Those data, taken as a whole, may allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained, such as the habits of everyday life, permanent or temporary places of residence, daily or other movements, the activities carried out, the social relationships of those persons and the social environments frequented by them.”

Second, consider the volume of data gathered and the number of people affected. Given the ubiquity of internet communications, the 2006 Directive “entails an interference with the fundamental rights of practically the entire European population”.

Admittedly, the 2006 regime does not undermine “the essence” of data protection rights (because it is confined to traffic data – the contents of communications are not retained), and is still subject to data security rules (see the seventh data protection principle under the UK’s DPA 1998).

Nonetheless, this is a serious interference with privacy rights. It has objective and subjective impact: “it is wide-ranging, and it must be considered to be particularly serious… the fact that data are retained and subsequently used without the subscriber or registered user being informed is likely to generate in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance.”

Such a law, said the Grand Chamber, can only be proportionate if it lays down clear and precise rules governing the scope of the measures and providing minimum safeguards for individual rights. The 2006 Directive fell short of those tests.

Inadequate rules, boundaries and safeguards

The regime has no boundaries, in terms of affected individuals: it “applies even to persons for whom there is no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with serious crime”.

It also makes no exception for “persons whose communications are subject, according to rules of national law, to the obligation of professional secrecy”.

There are no sufficiently specific limits on the circumstances in which the retained data can be accessed by security bodies, on the purposes to which that data can be put by those bodies, or on the persons with whom those bodies may share the data.

There are no adequate procedural safeguards: no court or administrative authority is required to sign off the transfers.

There are also no objective criteria for justifying the retention period of 6-24 months.

The Grand Chamber’s conclusion

In summary, the Grand Chamber found that “in the first place, Article 7 of Directive 2006/24 does not lay down rules which are specific and adapted to (i) the vast quantity of data whose retention is required by that directive, (ii) the sensitive nature of that data and (iii) the risk of unlawful access to that data, rules which would serve, in particular, to govern the protection and security of the data in question in a clear and strict manner in order to ensure their full integrity and confidentiality. Furthermore, a specific obligation on Member States to establish such rules has also not been laid down…”

There was also an international transfer aspect to its concern: “in the second place, it should be added that that directive does not require the data in question to be retained within the European Union…”

This last point is of course highly relevant to another of the stand-offs between digital privacy and national security which looms in UK litigation, namely the post-Snowden litigation against security bodies.

Robin Hopkins @hopkinsrobin

Steinmetz and Others v Global Witness: latest developments

Panopticon devotees will have noted that important DPA litigation is afoot between a group of businessmen (Beny Steinmetz and others) and the NGO Global Witness. The Economist has recently reported on the latest developments in the case: see here.

I particularly like the article’s subtitle: “Libel laws have become laxer. Try invoking data protection instead”. This is an observation I (and others) have made in the past: see here for example. The point appears to be gathering momentum.

Robin Hopkins @hopkinsrobin

ICO cannot have a second go

Okay, the following points are mainly about procedure, but they are nonetheless quite important for those involved in FOIA litigation before the Tribunals. These points come from a pair of recent Upper Tribunal decisions, both arising out of requests from the same requester.

One is IC v Bell [2014] UKUT 0106 (AAC): Bell UT s58. Question: suppose the First-tier Tribunal thinks the ICO got it wrong in its decision notice. Can it remit the matter to the ICO for him to think again and issue another decision notice on the same complaint? Answer: no, it can’t; it must dispose of the appeal itself. There are some exceptions, but that is the general rule with which parties should approach Tribunal litigation.

That Bell decision also comments on the importance, in relevant circumstances, of the Tribunal ensuring that it gets the input of the public authority and not just of the ICO, as there will be cases where only the public authority can really provide the answers to questions that arise at the Tribunal stage.

That same Bell decision also explores this point, for those with an interest in FOIA and statutory construction (surely there are some of you?): under s. 58 of FOIA, unless the Tribunal is going to dismiss an appeal, it must “allow the appeal or substitute such other notice as could have been served by the Commissioner” (my emphasis). That is curious. Quite often, Tribunals do both of those things at the same time. What to make of this? Judge Jacobs explains in the Bell decision.

There was also a second Bell appeal decided on the same day: Bell UT s14. Same Bell, different public authority and a separate case: IC and MOD v Bell (GIA/1384/2013). This was about s. 14 of FOIA (vexatious requests). The public authority had provided a good deal of detail about the background to the series of requests in order to make good its case under s. 14. But the appeal was decided on the papers rather than at an oral hearing; the Tribunal appears to have overlooked some of that detail, and it found that s. 14 had been improperly applied.

Judge Jacobs overturned that decision. One reason was this: when a binding and decisive new judgment (here, Dransfield) appears between the date of a hearing and the date of the Tribunal’s final deliberations, justice requires that the parties be given an opportunity to make submissions on the application of that judgment.

Another was that the Tribunal had failed properly to engage with the documentary evidence before it. “That is why the papers were provided: to be read. A tribunal is not entitled to rely on the parties to point to the passages that it should read and to look at nothing else” (my emphasis). This point is obviously of general application to Tribunal litigation.

Robin Hopkins @hopkinsrobin

Data protection and compensation: the “irreversible march” towards revolutionary change

At 11KBW’s Information Law conference this past Tuesday, I talked a bit about the progress of the draft EU Data Protection Regulation. I omitted to mention last week’s development (my reason: I was on holiday in Venice, where data protection seemed less pressing). In a plenary session on 12 March, the European Parliament voted overwhelmingly in support of the Commission’s current draft of the Regulation. This is all explained in this Memo from the European Commission. Here are some key points.

One is the apparently “irreversible” progress towards getting the Regulation onto the EU statute books: “The position of the Parliament is now set in stone and will not change even if the composition of the Parliament changes following the European elections in May.” As a reminder, the remaining stage is for the Council of Ministers to agree to the proposal. The ministers are meeting again in early June. So far, they have been broadly supportive.

Another point is about business size and data protection risk: SMEs will not need to notify (so where will the ICO get its funding?), nor will they need to appoint data protection officers or carry out privacy impact assessments as a default rule. “We want to make sure that obligations are not imposed except where they are necessary to protect personal data: the baker on the corner will not be subject to the same rules as a (multinational) data processing specialist.”

A third point has great consequences for international transfers: “Non-European companies, when offering services to European consumers, will have to apply the same rules and adhere to the same levels of protection of personal data. The reasoning is simple: if companies outside Europe want to take advantage of the European market with more than 500 million potential customers, then they have to play by the European rules”.

Fourth, the “right to be forgotten” is still very much on the agenda. “If an individual no longer wants his or her personal data to be processed or stored by a data controller, and if there is no legitimate reason for keeping it, the data should be removed from their system” (subject to freedom of expression). This “citizen in the driving seat” principle, like the consistency aim (the same rules applied the same way across the whole EU) and the “one-stop shop” regulatory model, has been part of the reform package from the outset.

A final point is that the Parliament wants regulators to be able to impose big fines: “It has proposed strengthening the Commission’s proposal by making sure that fines can go up to 5% of the annual worldwide turnover of a company (up from 2% in the Commission’s proposal)”. Monetary penalties will not be mandatory, but they will potentially be huge.
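To illustrate the scale of that proposal with purely hypothetical figures: a multinational with an annual worldwide turnover of €2 billion would face a maximum fine of €40 million under the Commission’s 2% cap, but up to €100 million (€2 billion × 5%) under the Parliament’s version. The numbers are illustrative only, but the difference in potential exposure is stark.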

On this last point about money: as under the current law, a regulatory fine is one thing and the individual’s right to be compensated another. At our seminar on Tuesday, we discussed whether there would soon be a sweeping away (see for example the Vidal-Hall v Google litigation) of the long-established Johnson v MDU principle that, in order to be compensated for distress under section 13 of the DPA, you need first to prove that you suffered financial loss. That may well be so for the DPA, in which case the short- and medium-term consequences for data protection litigation in the UK will be huge.

But it is important to be clear about the longer term: this is going to happen anyway, regardless of any case-law development in UK jurisprudence. Article 77 of the current draft of the Regulation begins like this: “Any person who has suffered damage, including non-pecuniary damage, as a result of an unlawful processing operation or of an action incompatible with this Regulation shall have the right to claim compensation from the controller or the processor for the damage suffered”.

If we are indeed irreversibly on track towards a new Regulation, then data protection litigation – notably, though not only about compensating data subjects – is guaranteed to be revolutionised.

Robin Hopkins @hopkinsrobin

A history and overview of the FOIA/EIR veto

The ‘veto’ (ministerial certificate) provision under s. 53 of FOIA (imported also into the EIRs) has been much discussed – on this blog and elsewhere – of late. Here is another excellent resource on the subject which is worth drawing to the attention of readers who want to understand this issue in more detail. Earlier this week, the House of Commons library published this note by Oonagh Gay and Ed Potton on the veto, its use to date, and comparative jurisdictions (Australia, New Zealand, Ireland).

Robin Hopkins @hopkinsrobin

FOIA disclosures: ‘motive blindness’ and risks to mental health

Some FOIA ‘mantras’ frustrate requesters, such as judging matters as at the time of the request/refusal, regardless of subsequent events. Others tend to frustrate public authorities, such as ‘motive blindness’. A recent Tribunal decision discusses and illustrates both principles – in the context of the distress (including a danger to mental health) likely to arise from disclosure.

The background is that a certain pupil referral unit (PRU) in County Durham was the subject of complaints; 13 of its 60 staff had been suspended. An independent investigation team reported in November 2012. Later in that same month, the Council received a FOIA request for a copy of the investigators’ report. At that time, disciplinary proceedings were pending against each of the suspended members of staff. Those proceedings were to be conducted on a confidential basis.

The Council refused the request, relying on section 31 (prejudice to the exercise of functions for the purpose of ascertaining improper conduct), section 40 (personal data) and section 38 (health and safety). The ICO agreed, and so did the Tribunal, dismissing the requester’s appeal in Hepple v IC and Durham County Council (EA/2013/0168).

The Tribunal confirmed that, notwithstanding the appellant’s practical arguments to the contrary, it had to judge matters as they stood at the time of the Council’s refusal of the request (paras 4-7).

Section 31 was engaged: “We are satisfied, having read the Report in full, that disclosure in full would have given rise to a perception of unfairness and pre-judgement that would have prejudiced the disciplinary proceedings. Those deciding the complaint might have avoided being prejudiced but the perception of a disinterested third party would have been that the staff member’s right to a fair hearing had been undermined, particularly if publication had attracted media comment” (para 14). The public interest favoured maintaining the exemption.

Reliance on section 40(2) was upheld: the unwarranted interference with the data subjects’ rights prevailed over the public interest arguments for disclosure. The comparative balance may have shifted slightly since the date of the refusal, but later developments were not relevant for the purposes of the appeal.

Reliance on section 38 was also upheld. This exemption for health and safety (here, danger to mental health) seldom surfaces in FOIA caselaw. Here it was upheld, largely because the requester himself had sent certain text messages (for which he was later apologetic) to some of the individuals involved. The Tribunal “drew the clear impression that the texts had been transmitted with the purpose of menacing those whose addresses the Appellant had acquired” (para 37).

Those text messages were sent after the refusal of the request, but the Tribunal was satisfied that they evidenced a state of mind likely to have existed at the relevant time. As to ‘motive blindness’, the Tribunal said that “assessing an information request on this “motive blind” basis ought not to prevent us from considering the potential risk to safety posed by the requester him/herself”.

‘Motive blindness’ may be something of a mantra in FOIA cases, but – as with vexatious request cases – it is a principle which should be applied with appropriate nuance.

Robin Hopkins @hopkinsrobin