‘Vilified’ doctor cannot publish patient’s private information

In the Matter of C (A Child) (Application by Dr X and Y) [2015] EWFC 79 involved, in the words of Munby J, an unusual and indeed unprecedented application. It pitted the right to defend one’s reputation against the privacy and confidentiality rights of others. In this case, the latter won.
Dr X had treated C and C’s mother; he had also been an expert witness in the family court care proceedings concerning C. C’s mother was unhappy about the treatment given by Dr X. She complained about him to the GMC, whose Fitness to Practise panel in due course found the allegations against Dr X to be unproven. C’s mother also criticised Dr X publicly in the media.
Dr X felt that his “otherwise unblemished reputation … has been cataclysmically damaged … through inaccurate reporting and internet postings” and that he has been “unfairly and unjustly pilloried by the mother and, through her, by the press” (his skeleton argument, cited at para 10 of Munby J’s judgment).
Dr X wanted to be able to put his side of the story, and to have the original source documents – from the family court proceedings and the Fitness to Practise proceedings – available to quote from (while respecting anonymity) if his public statements were challenged. He sought disclosure of documents from those proceedings.
One difficulty he faced was that the law restricts the use to which documents from family proceedings can be put. The court had a discretion to allow disclosure, but generally subject to restrictions on the subsequent use of the documents.
A further major difficulty was that he was bound by doctor-patient confidentiality, as a matter of both legal and professional duty. That duty permits of exceptions – for example, to allow a doctor who is being unfairly vilified by a patient to defend himself – but even then any departure from confidentiality obligations must be proportionate.
The same applies to interference with patients’ privacy under Article 8 ECHR; privacy rights were particularly acute here, because what was sought (for disclosure, and for deployment in public statements) was “a mass of medical materials relating to the mother’s mental health” (Munby J at paragraph 42). Disclosure of those materials, even in redacted form, would have major implications for the privacy of the child, C.
Those difficulties were fatal to the application. Munby J said that “the remedy being sought by Dr X – permission to put the mother’s medical records and related documents into the public domain, at a time and in circumstances of his own choosing and without any of the safeguards usually imposed – is wholly disproportionate to anything which he can legitimately or reasonably demand”.
In relation to the documents filed in the Fitness to Practise proceedings but which were not part of the documentation filed in the care proceedings, the court had no jurisdiction to grant an application for disclosure. In any event, disclosure of the confidential material Dr X sought for deployment in the public domain would again be wholly disproportionate.
Heather Emmerson of 11KBW appeared for the GMC.
Robin Hopkins @hopkinsrobin

Disclosing child protection information: make sure you ask the right questions first

High-profile revelations in recent years illustrate the importance of public authorities sharing information on individuals who are of concern in relation to child protection matters. When inaccurate information is shared, however, the consequences for the individual can be calamitous.

AB v Chief Constable of Hampshire Constabulary [2015] EWHC 1238 (Admin) is a recent High Court judgment (Jeremy Baker J) which explores the implications of such inaccurate disclosures. The case is not only about inaccuracies per se, but about why those inaccuracies were not picked up before the disclosure was made.

Perhaps the most notable point from the judgment is this: before making such a disclosure, the data controller must take care to ask reasonable questions about the information, check it against other obvious sources, and make the necessary enquiries before the disclosure takes place.

In other words, failure to ask the right questions can lead to the wrong course of action in privacy terms. Here is how that principle played out in the AB case.

Background

In 2010, AB was summarily dismissed from his job as a science teacher for inappropriate comments and conduct with potential sexual undertones, as well as a failure to maintain an appropriately professional boundary with students. His appeal against dismissal failed. The Independent Safeguarding Authority, however, decided not to include AB on its barred lists. The General Teaching Council also investigated AB, but it did not find that the allegations of improper conduct were made out.

AB’s dismissal, however, came to the attention of a member of the child abuse investigation public protection unit of the Hampshire Constabulary. Enquiries were made of the college, and certain email correspondence and records were generated and retained on police systems.

Later the following year, AB was offered a teaching job elsewhere. This came to the police’s attention in 2013. There was internal discussion within the police about this. One officer said in an email, among other things, that (i) AB had also been dismissed from another school, and (ii) AB’s 2010 dismissal had involved inappropriate touching between himself and pupils. There was no evidence that either of those points was true. That email concluded: “From What I’ve been told he should be nowhere near female students. I will put an intel report in on [AB]”.

The above information was passed to the Local Authority Designated Officer (‘LADO’) and in turn to the school, who terminated AB’s employment. He then made a subject access request under the DPA, by which he learnt of the above communication, and also the source of that information, which was said to be a notebook containing a police officer’s notes from 2010 (which did not in fact record either (i) or (ii) above). AB complained of the disclosure and also of the relevant officer’s failures to follow the requisite safeguarding procedures. The police dismissed his complaint.

The Court’s judgment

AB sought judicial review both of the disclosure of the inaccurate information in the email, and of the dismissal of his complaint about the police officer’s conduct in his reporting of the matter.

The Court (Jeremy Baker J) granted the application on both issues. I focus here on the first, namely the lawfulness of the disclosure in terms of Article 8 ECHR.

Was the disclosure “in accordance with the law” for Article 8 purposes?

The Court considered the key authorities in this – by now quite well-developed – area of law (Article 8 in the context of disclosures by the police), notably:

MM v United Kingdom [2010] ECHR 1588 (the retention and disclosure of information relating to an individual by a public authority engages Article 8, and must therefore be justified under Article 8(2));

Tysiac v Poland (2007) 45 EHRR 42, where the ECtHR stressed the importance of procedural safeguards to protecting individuals’ Article 8 rights from unlawful interference by public bodies;

R v Chief Constable of North Wales Ex parte Thorpe [1999] QB 396: a decision about whether or not to disclose the identity of paedophiles to members of the public is a highly sensitive one; “disclosure should only be made when there is a pressing need for that disclosure”;

R (L) v Commissioner of Police for the Metropolis [2010] 1 AC 410: such cases are essentially about proportionality;

R (A) v Chief Constable of Kent [2013] EWCA Civ 1706: such a disclosure is often “in practice the end of any opportunity for the individual to be employed in an area for which an [Enhanced Criminal Record Certificate] is required. Balancing the risks of non-disclosure to the interests of the members of the vulnerable group against the right of the individual concerned to respect for his or her private life is a particularly sensitive and difficult exercise where the allegations have not been substantiated and are strongly denied”;

R (T) v Chief Constable of Greater Manchester Police & others [2015] AC 49 and R (Catt) v ACPO [2015] 2 WLR 664 on whether disclosures by police were in accordance with the law and proportionate.

The Court concluded that, in light of the above authorities, the disclosure made in AB’s case was “in accordance with the law”. It was made under the disclosure regime made up of: Part V of the Police Act 1997, the Home Office’s Statutory Disclosure Guidance on enhanced criminal records certificates, section 10 of the Children Act 2004 and the Data Protection Act 1998.

See Jeremy Baker J’s conclusion – and notes of caution – at [73]-[75]:

“73. In these circumstances it seems to me that not only does the common law empower the police to disclose relevant information to relevant parties, where it is necessary for one of these police purposes, but that the DPA 1998, together with the relevant statutory and administrative codes, provide a sufficiently clear, accessible and consistent set of rules, so as to prevent arbitrary or abusive interference with an individual’s Article 8 rights; such that the disclosure will be in accordance with law.

74. However, it will clearly be necessary in any case, and in particular in relation to a decision to disclose information to a third party, for the decision-maker to examine with care the context in which his/her decision is being made.

75. In the present case, although the disclosure of the information by the police was to a LADO in circumstances involving the safeguarding of children, it also took place in the context of the claimant’s employment. The relevance of this being, as DC Pain was clearly aware from the contents of his e-mail to PS Bennett dated 10th June 2013, that the disclosure of the information had the potential to adversely affect the continuation of the claimant’s employment at the school….”

Was the disclosure proportionate?

While the disclosure decision was in accordance with the law, this did not remove the need for the police carefully to consider whether disclosure was necessary and proportionate, particularly in light of the serious consequences of disclosure for AB’s employment.

The Court held that the disclosure failed these tests. The crucial factor was this: if the information about AB had been well founded, it would have been contained in his Enhanced Criminal Record Certificate – and the fact that it was not should have prompted enquiries about the cogency of the information (why, if it was correct, was such serious information omitted from the ECRC?), which would reasonably have been pursued to get to the bottom of the matter before the disclosure was made. Those questions had not been asked in this case. See [80]-[81]:

“… In these circumstances, it was in my judgment, a necessary procedural step for DC Pain to ascertain from the DBS unit as to, whether, and if so, what information it had already disclosed on any enhanced criminal record certificate, as clearly if the unit had already disclosed the information which DC Pain believed had been provided to him by the college, then it would not have been necessary for him to have made any further disclosure of that information.

81. If either DC Pain or PS Bennett had taken this basic procedural step, then not only would it have been immediately obvious that this information had not been provided to the school, but more importantly, in the context of this case, it would also have been obvious that further enquiries were required to be made: firstly as to why no such disclosure had been made by the DBS unit; and secondly, once it had been ascertained that the only information which was in the possession of the DBS unit was the exchange of e-mails on the defendant’s management system, as to the accuracy of the information with which DC Pain believed he had been provided by the college.”

Judicial reviews of disclosure decisions concerning personal data: the DPA as an alternative remedy?

Finally, the Court dealt with a submission that judicial review should not be granted as this case focused on what was essentially a data protection complaint, which could have been taken up with the ICO under the DPA (as was suggested in Lord Sumption’s comments in Catt). That submission was dismissed: AB had not simply ignored or overlooked that prospect, but had rather opted to pursue an alternative course of complaint; the DPA did not really help with the police conduct complaint, and the case raised important issues.

Robin Hopkins @hopkinsrobin

Above and below the waterline: IPT finds that Prism and Tempora are lawful

The now famous revelations by US whistleblower Edward Snowden focused on US government programmes under which vast amounts of data about individuals’ internet usage and communications were said to have been gathered. The allegations extended beyond the US: the UK government and security agencies, for example, were also said to be involved in such activity.

Unsurprisingly, concerns were raised about the privacy implications of such activity – in particular, whether it complied with individuals’ rights under the European Convention on Human Rights (privacy under Article 8; freedom of expression under Article 10).

The litigation before the Investigatory Powers Tribunal

Litigation was commenced in the UK by Privacy International, Liberty, Amnesty International and others. The cases were heard by a five-member panel of the Investigatory Powers Tribunal (presided over by Mr Justice Burton) in July of this year. The IPT gave judgment ([2014] UKIPTrib 13_77-H) today.

In a nutshell, it found that the particular information-gathering activities it considered – carried out in particular by GCHQ and the Security Service – are lawful.

Note the tense: they are lawful. The IPT has not determined whether or not they were lawful in the past. The key difference is this: an essential element of lawfulness is whether the applicable legal regime under which such activity is conducted is sufficiently accessible (i.e. is it available and understandable to people?). That turns in part on what the public is told about how the regime operates. During the course of this litigation, the public has been given (by means of the IPT’s open judgment) considerably more detail in this regard. This, says the IPT, certainly makes the regime lawful on a prospective basis. The IPT has not determined whether, prior to these supplementary explanations, the ‘in accordance with the law’ requirement was satisfied.

With its forward-looking, self-referential approach, this judgment is unusual. It is also unusual in that it proceeded to test the legality of the regimes largely by reference to assumed rather than established facts about the Prism and Tempora activities. This is because not much about those activities has been publicly confirmed, owing to the ‘neither confirm nor deny’ principle which is intrinsic to intelligence and security activity.

Prism

The first issue assessed by reference to assumed facts was called the “Prism” issue: this was about the collection/interception by US authorities of data about individuals’ internet communications and the assumed sharing of such data with UK authorities, who could then retain and use it. Would this arrangement be lawful under Article 8(2) ECHR? In particular, was it “in accordance with the law”, which in essence means did it have a basis in law and was it sufficiently accessible and foreseeable to the potentially affected individuals? (These are the so-called Weber requirements, from Weber and Saravia v Germany [2008] 46 EHRR SE5).

When it comes to intelligence, accessibility and foreseeability are difficult to achieve without giving the game away to a self-defeating extent. The IPT recognised that the Weber principles need tweaking in this context. The following ‘nearly-Weber’ principles were applied as the decisive tests for ‘in accordance with the law’ in this context:

“(i) there must not be an unfettered discretion for executive action. There must be controls on the arbitrariness of that action.

(ii) the nature of the rules must be clear and the ambit of them must be in the public domain so far as possible, an “adequate indication” given (Malone v UK [1985] 7 EHRR 14 at paragraph 67), so that the existence of interference with privacy may in general terms be foreseeable.”

Those tests will be met if:

“(i) Appropriate rules or arrangements exist and are publicly known and confirmed to exist, with their content sufficiently signposted, such as to give an adequate indication of it.

(ii) They are subject to proper oversight.”

On the Prism issue, the IPT found that those tests are met. The basis in law comes from the Security Service Act 1989, the Intelligence Services Act 1994 and the Counter-Terrorism Act 2008. Additionally, the Data Protection Act 1998, the Official Secrets Act 1989 and the Human Rights Act 1998 restrain the use of data of the sort at issue here. Taken together, there are sufficient and specific statutory limits on the information that each of the Intelligence Services can obtain, and on the information that each can disclose.

In practical terms, there are adequate arrangements in place to safeguard against arbitrary or unfettered use of individuals’ data. These include the “arrangements below the waterline” (i.e. those which are not publicly explained), which the Tribunal was asked to – and did – take into account.

Oversight of this regime comes through Parliament’s Intelligence and Security Committee and the Interception of Communications Commissioner.

Further, these arrangements are “sufficiently signposted by virtue of the statutory framework … and the statements of the ISC and the Commissioner… and as now, after the two closed hearings that we have held, publicly disclosed by the Respondents and recorded in this judgment”.

Thus, in part thanks to closed evidence of the “below the waterline” arrangements and open disclosure of more detail about those arrangements, the Prism programme (on the assumed facts before the IPT) is lawful, i.e. it is a justified intrusion into Article 8 ECHR rights.

The alleged Tempora interception operation

Unlike the Prism programme, the second matter scrutinised by the IPT – the alleged Tempora programme – involved the interception of communications by UK authorities. Here, in contrast to Prism (where the interception is done by someone else), the Regulation of Investigatory Powers Act 2000 is pivotal.

This works on a system of warrants for interception. The warrants are issued under section 8 of RIPA (supplemented by sections 15 and 16) by the Secretary of State, rather than by a member of the judiciary. The regime is governed by the Interception of Communications Code of Practice.

The issue for the IPT was: is this warrant system (specifically, the section 8(4) provision for ‘certified’ warrants) in accordance with the law, for ECHR purposes?

This has previously been considered by the IPT in the British Irish Rights Watch case in 2004. Its answer was that the regime was in accordance with the law. The IPT in the present cases re-examined the issue and took the same view. It rejected a number of criticisms of the certified warrant regime, including:

The absence of a tightly focused, ‘targeting’ approach at the initial stages of information-gathering is acceptable and inevitable.

There is no call “for search words to be included in an application for a warrant or in the warrant itself. It seems to us that this would unnecessarily undermine and limit the operation of the warrant and be in any event entirely unrealistic”.

There is also “no basis for objection by virtue of the absence for judicial pre-authorisation of a warrant. The United Kingdom system is for the approval by the highest level of government, namely by the Secretary of State”.

Further, “it is not necessary that the precise details of all the safeguards should be published, or contained in legislation, delegated or otherwise”.

The overall assessment was very similar to that for Prism: in light of the statutory regime, the oversight mechanisms, the open and closed evidence of the arrangements (above and below the “waterline”) and additional disclosures by the Respondents, the regime for gathering, retaining and using intercepted data was in accordance with the law – as regards both Article 8 and Article 10 ECHR.

Conclusion

This judgment is good news for the UK Government and the security bodies, who will no doubt welcome the IPT’s sympathetic approach to the practical exigencies of effective intelligence operations in the digital age. These paragraphs encapsulate the complaints and the IPT’s views:

“158. Technology in the surveillance field appears to be advancing at break-neck speed. This has given rise to submissions that the UK legislation has failed to keep abreast of the consequences of these advances, and is ill fitted to do so; and that in any event Parliament has failed to provide safeguards adequate to meet these developments. All this inevitably creates considerable tension between the competing interests, and the ‘Snowden revelations’ in particular have led to the impression voiced in some quarters that the law in some way permits the Intelligence Services carte blanche to do what they will. We are satisfied that this is not the case.

159. We can be satisfied that, as addressed and disclosed in this judgment, in this sensitive field of national security, in relation to the areas addressed in this case, the law gives individuals an adequate indication as to the circumstances in which and the conditions upon which the Intelligence Services are entitled to resort to interception, or to make use of intercept.”

11KBW’s Ben Hooper and Julian Milford appeared for the Respondents.

Robin Hopkins @hopkinsrobin

Some results may have been removed under data protection law in Europe. Learn more.

This is the message that now regularly greets those using Google to search for information on named individuals. It relates, of course, to the CJEU’s troublesome Google Spain judgment of 13 May 2014.

I certainly wish to learn more.

So I take Google up on its educational offer and click through to its FAQ page, where the folks at Google tell me inter alia that “Since this ruling was published on 13 May 2014, we’ve been working around the clock to comply. This is a complicated process because we need to assess each individual request and balance the rights of the individual to control his or her personal data with the public’s right to know and distribute information”.

The same page also leads me to the form on which I can ask Google to remove from its search results certain URLs about me. I need to fill in gaps like this: “This URL is about me because… This page should not be included as a search result because…” 

This is indeed helpful in terms of process, but I want to understand more about the substance of decision-making. How does (and/or should) Google determine whether or not to accede to my request? Perhaps understandably (as Google remarks, this is a complicated business on which the dust is yet to settle), Google doesn’t tell me much about that just yet.

So I look to the obvious source – the CJEU’s judgment itself – for guidance. Here I learn that I can in principle ask that “inadequate, irrelevant or no longer relevant” information about me not be returned through a Google search. I also get some broad – and quite startling – rules of thumb, for example at paragraph 81, which tells me this:

“In the light of the potential seriousness of that interference, it is clear that it cannot be justified by merely the economic interest which the operator of such an engine has in that processing. However, inasmuch as the removal of links from the list of results could, depending on the information at issue, have effects upon the legitimate interest of internet users potentially interested in having access to that information, in situations such as that at issue in the main proceedings a fair balance should be sought in particular between that interest and the data subject’s fundamental rights under Articles 7 and 8 of the Charter. Whilst it is true that the data subject’s rights protected by those articles also override, as a general rule, that interest of internet users, that balance may however depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information, an interest which may vary, in particular, according to the role played by the data subject in public life.”

So it seems that, in general (and subject to the sensitivity of the information and my prominence in public life), my privacy rights trump Google’s economic rights and other people’s rights to find information about me in this way. So the CJEU has provided some firm steers on points of principle.

But still I wish to learn more about how these principles will play out in practice. Media reports in recent weeks have told us about the volume of ‘right to be forgotten’ requests received by Google.

The picture this week has moved on from volumes to particulars. In the past few days, we have begun to learn how Google’s decisions filter back to journalists responsible for the content on some of the URLs which objectors pasted into the forms they sent to Google. We learn that journalists and media organisations, for example, are now being sent messages like this:

“Notice of removal from Google Search: we regret to inform you that we are no longer able to show the following pages from your website in response to certain searches on European versions of Google.”

Unsurprisingly, some of those journalists find this puzzling and/or objectionable. Concerns have been ventilated in the last day or two, most notably by the BBC’s Robert Peston (who feels that, through teething problems with the new procedures, he has been ‘cast into oblivion’) and The Guardian’s James Ball (who neatly illustrates some of the oddities of the new regime). See also The Washington Post’s roundup of UK media coverage.

That coverage suggests that the Google Spain ruling – which made no overt mention of free expression rights under Article 10 ECHR – has started to bite into the media’s freedom. The Guardian’s Chris Moran, however, has today posted an invaluable piece clarifying some misconceptions about the right to be forgotten. Academic commentators such as Paul Bernal have also offered shrewd insights into the fallout from Google Spain.

So, by following the trail from Google’s pithy new message, I am able to learn a fair amount about the tenor of this post-Google Spain world.

Inevitably, however, given my line of work, I am interested in the harder edges of enforcement and litigation: in particular, if someone objects to the outcome of a ‘please forget me’ request to Google, what exactly can they do about it?

On such questions, it is too early to tell. Google says on its FAQ page that “we look forward to working closely with data protection authorities and others over the coming months as we refine our approach”. For its part, the ICO tells us that it and its EU counterparts are working hard on figuring this out. Its newsletter from today says for example that:

“The ICO and its European counterparts on the Article 29 Working Party are working on guidelines to help data protection authorities respond to complaints about the removal of personal information from search engine results… The recommendations aim to ensure a consistent approach by European data protection authorities in response to complaints when takedown requests are refused by the search engine provider.”

So for the moment, there remain lots of unanswered questions. For example, the tone of the CJEU’s judgment is that DPA rights will generally defeat economic rights and the public’s information rights. But what about a contest between two individuals’ DPA rights?

Suppose, for example, that I am an investigative journalist with substantial reputational and career investment in articles about a particular individual, and that he then persuades Google to ensure that my articles do not surface in EU Google searches for his name. Those articles also contain my name, work and opinions, i.e. they also contain my personal data. In acceding to the ‘please forget me’ request without seeking my input, could Google be said to have processed my personal data unfairly, whittling away my online personal and professional output (at least to the extent that the relevant EU Google searches are curtailed)? Could this be said to cause me damage or distress? If so, can I plausibly issue a notice under s. 10 of the DPA, seek damages under s. 13, or ask the ICO to take enforcement action under s. 40?

The same questions could arise, for example, if my personal backstory is heavily entwined with that of another person who persuades Google to remove from its EU search results articles discussing both of us – that may be beneficial for the requester, but detrimental to me in terms of the adequacy of personal data about me which Google makes available to the interested searcher.

So: some results may have been removed under data protection law in Europe, and I do indeed wish to learn more. But I will have to wait.

Robin Hopkins @hopkinsrobin

GCHQ’s internet surveillance – privacy and free expression join forces

A year ago, I blogged about Privacy International’s legal challenge – alongside Liberty – against GCHQ, the Security Services and others concerning the Prism/Tempora programmes which came to public attention following Edward Snowden’s whistleblowing. That case is now before the Investigatory Powers Tribunal. It will be heard for 5 days, commencing on 14 July.

Privacy International has also brought a second claim against GCHQ: in May 2014, it issued proceedings concerning the use of ‘hacking’ tools and software by intelligence services.

It has been announced this week that Privacy International is party to a third challenge which has been filed with the Investigatory Powers Tribunal. This time, the claim is being brought alongside seven internet service providers: GreenNet (UK), Chaos Computer Club (Germany), Greenhost (Netherlands), Jinbonet (Korea), Mango (Zimbabwe), May First/People Link (US) and Riseup (US).

The claim is interesting on a number of fronts. One is the interplay between global reach (see the diversity of the claimants’ homes) and this specific legal jurisdiction (the target is GCHQ and the jurisdiction is the UK – as opposed, for example, to bringing claims in the US). Another is that it sees private companies – and therefore Article 1 Protocol 1 ECHR issues about property, business goodwill and the like – surfacing in the UK’s internet surveillance debate.

Also, the privacy rights not only of ‘ordinary’ citizens (network users) but also specifically those of the claimants’ employees are being raised.

Finally, this claim sees the right to free expression under Article 10 ECHR – conspicuously absent, for example, in the Google Spain judgment – flexing its muscle in the surveillance context. Privacy and free expression rights are so often in tension, but here they make common cause.

The claims are as follows (quoting from the claimants’ press releases):

(1) By interfering with network assets and computers belonging to the network providers, GCHQ has contravened the UK Computer Misuse Act and Article 1 of the First Additional Protocol (A1AP) of the European Convention of Human Rights (ECHR), which guarantees the individual’s peaceful enjoyment of their possessions

(2) Conducting surveillance of the network providers’ employees is in contravention of Article 8 ECHR (the right to privacy) and Article 10 ECHR (freedom of expression)

(3) Surveillance of the network providers’ users that is made possible by exploitation of their internet infrastructure, is in contravention of Arts. 8 and 10 ECHR; and

(4) By diluting the network providers’ goodwill and relationship with their users, GCHQ has contravened A1AP ECHR.

Robin Hopkins @hopkinsrobin

Fingerprints requirement for passport does not infringe data protection rights

Mr Schwarz applied to his regional authority, the city of Bochum, for a passport. He was required to submit a photograph and fingerprints. He did not like the fingerprint part. He considered it unduly invasive. He refused. So Bochum refused to give him a passport. He asked the court to order it to give him one. The court referred questions to the Court of Justice of the European Union about whether the requirement to submit fingerprints in addition to photographs complied with the Data Protection Directive 95/46/EC.

Last week, the Fourth Chamber of the CJEU gave its judgment: the requirement is data protection-compliant.

The requirement had a legal basis, namely Article 1(2) of Council Regulation 2252/2004, which set down minimum security standards for identity-confirmation purposes in passports.

This pursued a legitimate aim, namely preventing illegal entry into the EU.

Moreover, while the requirements entailed the processing of personal data and an interference with privacy rights, the ‘minimum security standards’ rules continued to “respect the essence” of the individual’s right to privacy.

The fingerprint requirement was proportionate because while the underlying technology is not 100% successful in fraud-detection terms, it works well enough. The only real alternative as an identity-verifier is an iris scan, which is no less intrusive and is technologically less robust. The taking of fingerprints is not very intrusive or intimate – it is comparable to having a photograph taken for official purposes, which people don’t tend to complain about when it comes to passports.

Importantly, the underlying Regulation provided that the fingerprints could only be used for identity-verification purposes and that there would be no central database of fingerprints (instead, each set is stored only in the passport).

This is all common-sense stuff in terms of data protection compliance. Data controllers take heart!

Robin Hopkins