Facebook, child protection and outsourced monitoring

Facebook is no stranger to complaints about the content of posts. Usually, one user complains to Facebook about what other users’ posts say about him. By making the offending posts available, Facebook is processing the complainant’s personal data, and must do so in compliance with data protection law.

More unusually, a user could also complain about their own Facebook posts. Surely a complainant cannot make data protection criticisms about information they deliberately posted about themselves? After all, Facebook processes those posts with the author’s consent, doesn’t it?

Generally, yes – but that will not necessarily be true in every instance, especially when it comes to Facebook posts by children. This is the nature of the complaint in striking litigation currently afoot before the High Court in Northern Ireland.

The case is HL v Facebook Inc, Facebook Ireland Ltd, the Northern Health & Social Care Trust and DCMS [2015] NIQB 61. It is currently only in its preliminary stages, but it raises very interesting and important issues about Facebook’s procedures for preventing underage users from using the social network. Those issues are illuminated in the recent judgment of Stephens J, who is no stranger to claims against Facebook – he heard the recent case of CG v Facebook [2015] NIQB 11, concerning posts about a convicted paedophile.

From the age of 11 onwards, HL maintained a Facebook page on which she made posts of an inappropriate sexual nature. She was exposed to responses from sexual predators. She says that Facebook is liable for its failure to prevent her from making these posts. She alleges that Facebook (i) unlawfully processed her sensitive personal data, (ii) facilitated her harassment by others, and (iii) was negligent in failing to have proper systems in place to minimise the risks of children setting up Facebook accounts by lying about their age.

The data protection claim raises a number of issues of great importance to the business of Facebook and others with comparable business models. One is the extent to which a child can validly consent to the processing of their personal data – especially sensitive personal data. Minors are (legitimately or not) increasingly active online, and consent is a cornerstone of online business. The consent issue is one of wide application beyond the HL litigation.

A second issue is whether, in its processing of personal data, Facebook does enough to stop minors using their own personal data in ways which could harm them. In her claim, for example, HL refers to evidence given to a committee of the Australian Parliament – apparently by a senior privacy advisor to Facebook (though Facebook was unable to tell Stephens J who he was). That evidence apparently said that Facebook removes 20,000 underage user profiles a day.

Stephens J was also referred to comments apparently made by a US Senator to Mark Zuckerberg about the vulnerability of underage Facebook users.

Another element of HL’s case concerns Facebook’s use of an outsourcing company called oDesk, operating for example from Morocco, to moderate complaints about Facebook posts. She calls into question the adequacy of these oversight measures: ‘where then is the oversight body for these underpaid global police?’ (to quote from a Telegraph article referred to in the recent HL judgment). Facebook says that – given its number of users in multiple languages across the globe – effective policing is a tall order (an argument Stephens J summed up at paragraph 22 as ‘the needle in a haystack argument, there is just too much to monitor, the task of dealing with underage users is impossible’).

In short, HL says that Facebook seems to be aware of the scale and seriousness of the problem of underage use of its network and has not done enough to tackle that problem.

Again, the issue is one of wider import for online multinationals for whom personal data is stock-in-trade.

The same goes for the third important data protection issue surfacing in the HL litigation. This concerns jurisdiction, cross-border data controllers and section 5 of the Data Protection Act 1998. For example, is Facebook Ireland established in the UK by having an office, branch or agency, and does it process the personal data in Facebook posts in the context of that establishment?

These issues are all still to be decided. Stephens J’s recent judgment in HL was not about the substantive issues, but about HL’s applications for specific discovery and interrogatories. He granted those applications. In addition to details of HL’s Facebook account usage, he ordered the Facebook defendants to disclose agreements between them and Facebook (UK) Ltd and between them and oDesk (to whom some moderating processes were outsourced). He has also ordered the Facebook defendants to answer interrogatory questions about their procedures for preventing underage Facebook use.

In short, the HL litigation has – thus far – raised difficult data protection and privacy issues which are fundamental to Facebook’s business, and it has required Facebook to lay bare internal details of its safeguarding practices. The case is only just beginning. The substantive hearing, which is listed for next term, could be groundbreaking.

Robin Hopkins @hopkinsrobin

DRIPA 2014 declared unlawful

In a judgment of the Divisional Court handed down this morning, Bean LJ and Collins J have declared section 1 of the Data Retention and Investigatory Powers Act 2014 (DRIPA) to be unlawful.

For the background to that legislation, see our posts on Digital Rights Ireland and then on the UK’s response, i.e. passing DRIPA in an attempt to preserve data retention powers.

That attempt has today suffered a serious setback via the successful challenges brought by the MPs David Davis and Tom Watson, as well as Messrs Brice and Lewis. The Divisional Court did, however, suspend the effect of its order until after 31 March 2016, so as to give Parliament time to consider how to put things right.

Analysis to follow in due course, but for now, here is the judgment: Davis Watson Judgment.

Robin Hopkins @hopkinsrobin

Google and the ordinary person’s right to be forgotten

The Guardian has reported today on data emerging from Google about how it has implemented the Google Spain ‘right to be forgotten’ principle over the past year or so: see this very interesting article by Julia Powles.

While the data is rough-and-ready, it appears to indicate that the vast majority of RTBF requests actioned by Google have concerned ‘ordinary people’. By that I mean people who are neither famous nor infamous, and who seek not to have high-public-interest stories erased from history, but to have low-public-interest personal information removed from the fingertips of anyone who cares to Google their name. Okay, that explanation is itself rough-and-ready, but you get the point: most RTBF requests come not from Max Mosley types, but from Mario Costeja González types (he being the man who brought the Google Spain complaint in the first place).

As Julia Powles points out, today’s rough-and-ready data is thus far the best we have to go on in terms of understanding how the RTBF is actually working in practice. There is very little transparency on this. Blame for that opaqueness cannot fairly be levelled only at Google and its ilk – though, as the Powles article argues, they may have a vested interest in maintaining that opaqueness. Opaqueness was inevitable following a judgment like Google Spain, and European regulators have, perhaps forgivably, not yet produced detailed guidance at a European level on how the public can expect such requests to be dealt with. In the UK, the ICO has given guidance (see here) and initiated a complaints process (see here).

Today’s data suggests to me that a further reason for this opaqueness is the ‘ordinary person’ factor: the Max Mosleys of the world tend to litigate (and then settle) when they are dissatisfied, but the ordinary person tends not to (Mr González being an exception). We remain largely in the dark about how this web-shaping issue works.

So: the ordinary person is most in need of transparent RTBF rules, but least equipped to fight for them.

How might that be resolved? Options seem to me to include some combination of (a) clear regulatory guidance, tested in the courts, (b) litigation by a Max Mosley-type figure which runs its course, (c) litigation by more Mr González figures (i.e. ordinary individuals), (d) litigation by groups of ordinary people (as in Vidal Hall, for example) – or perhaps (e) litigation by members of the media who object to their stories disappearing from Google searches.

The RTBF is still in its infancy. Google may be its own judge for now, but one imagines not for long.

Robin Hopkins @hopkinsrobin

Austria will not host Europe vs Facebook showdown

As illustrated by Anya Proops’ recent post on a Hungarian case currently before the CJEU, the territorial jurisdiction of European data protection law can raise difficult questions.

Such questions have bitten hard in the Europe vs Facebook litigation. Max Schrems, an Austrian law graduate, is spearheading a massive class action in which some 25,000 Facebook users allege numerous data protection violations by the social media giant. Those include: unlawful obtaining of personal data (including via plug-ins and “like” buttons); invalid consent to Facebook’s processing of users’ personal data; use of personal data for impermissible purposes, including the unlawful analysing of data/profiling of users (“the Defendant analyses the data available on every user and tries to explore users’ interests, preferences and circumstances…”); unlawful sharing of personal data with third parties and third-party applications. The details of the claim are here.

Importantly, however, the claim is against Facebook Ireland Ltd, a subsidiary of the California-based Facebook Inc. The class action has been brought in Austria.

Facebook challenged the Austrian court’s jurisdiction. Last week, it received a judgment in its favour from the Viennese Regional Civil Court. The Court held that it lacked jurisdiction in part because Mr Schrems is not deemed to be a ‘consumer’ of Facebook’s services, and in part because Austria is not the right place to bring the claim. Facebook argued that the claim should be brought either in Ireland or in California, and the Court agreed.

Mr Schrems has announced his intention to appeal. In the meantime, the Austrian decision will continue to raise both eyebrows and questions, particularly given that a number of other judgments in recent years have seen European courts accepting jurisdiction to hear claims against social media companies based elsewhere (such as Google: see Vidal-Hall, for example).

The Austrian decision also highlights the difficulties of the ‘one-stop shop’ principle which remains part of the draft Data Protection Regulation (albeit in a more nuanced and complicated formulation than had earlier been proposed). In short, why should an Austrian user have to sue in Ireland?

Panopticon will report on any developments in this case in due course. It will also report on the other strand of Mr Schrems’ privacy campaign, namely his challenge to the lawfulness of the Safe Harbour regime for the transferring of personal data to the USA. That challenge has been heard by the CJEU, and the Advocate General’s opinion is imminent. The case will have major implications for those whose business involves transatlantic data transfers.

Robin Hopkins @hopkinsrobin

Disclosing child protection information: make sure you ask the right questions first

High-profile revelations in recent years illustrate the importance of public authorities sharing information on individuals who are of concern in relation to child protection matters. When inaccurate information is shared, however, the consequences for the individual can be calamitous.

AB v Chief Constable of Hampshire Constabulary [2015] EWHC 1238 (Admin) is a recent High Court judgment (Jeremy Baker J) which explores the implications of such inaccurate disclosures. The case is not only about inaccuracies per se, but about why those inaccuracies were not picked up before the disclosure was made.

Perhaps the most notable point from the judgment is this: if such a disclosure is to be justified as necessary, the data controller must take care to ask reasonable questions about the information, check it against other obvious sources, and make the necessary enquiries before disclosure takes place.

In other words, failure to ask the right questions can lead to the wrong course of action in privacy terms. Here is how that principle played out in the AB case.

Background

In 2010, AB was summarily dismissed from his job as a science teacher for inappropriate comments and conduct with potential sexual undertones, as well as a failure to maintain an appropriately professional boundary with students. His appeal against dismissal failed. The Independent Safeguarding Authority, however, decided not to include AB on its barred lists. The General Teaching Council also investigated AB, but it did not find that the allegations of improper conduct were made out.

AB’s dismissal, however, came to the attention of a member of the child abuse investigation public protection unit of the Hampshire Constabulary. Enquiries were made of the college, and certain email correspondence and records were generated and retained on police systems.

Later the following year, AB was offered a teaching job elsewhere. This came to the police’s attention in 2013. There was internal discussion within the police about this. One officer said in an email that, among other things, (i) AB had also been dismissed from another school, and (ii) AB’s 2010 dismissal had involved inappropriate touching between him and pupils. There was no evidence that either of those points was true. The email concluded: “From What I’ve been told he should be nowhere near female students. I will put an intel report in on [AB]”.

The above information was passed to the Local Authority Designated Officer (‘LADO’) and in turn to the school, who terminated AB’s employment. He then made a subject access request under the DPA, by which he learnt of the above communication, and also the source of that information, which was said to be a notebook containing a police officer’s notes from 2010 (which did not in fact record either (i) or (ii) above). AB complained of the disclosure and also of the relevant officer’s failures to follow the requisite safeguarding procedures. The police dismissed his complaint.

The Court’s judgment

AB sought judicial review of both the disclosure of the inaccurate information in the email, and of the dismissal of his complaint about the police officer’s conduct in his reporting of the matter.

The Court (Jeremy Baker J) granted the application on both issues. I focus here on the first, namely the lawfulness of the disclosure in terms of Article 8 ECHR.

Was the disclosure “in accordance with the law” for Article 8 purposes?

The Court considered the key authorities in this – by now quite well-developed – area of law (Article 8 in the context of disclosures by the police), notably:

MM v United Kingdom [2010] ECHR 1588 (the retention and disclosure of information relating to an individual by a public authority engages Article 8, and must therefore be justified under Article 8(2));

Tysiac v Poland (2007) 45 EHRR 42, where the ECtHR stressed the importance of procedural safeguards to protecting individuals’ Article 8 rights from unlawful interference by public bodies;

R v Chief Constable of North Wales Ex parte Thorpe [1999] QB 396 (a decision about whether or not to disclose the identity of paedophiles to members of the public is a highly sensitive one: “Disclosure should only be made when there is a pressing need for that disclosure”);

R (L) v Commissioner of Police for the Metropolis [2010] 1 AC 410: such cases are essentially about proportionality;

R (A) v Chief Constable of Kent [2013] EWCA Civ 1706: such a disclosure is often “in practice the end of any opportunity for the individual to be employed in an area for which an [Enhanced Criminal Record Certificate] is required. Balancing the risks of non-disclosure to the interests of the members of the vulnerable group against the right of the individual concerned to respect for his or her private life is a particularly sensitive and difficult exercise where the allegations have not been substantiated and are strongly denied”;

R (T) v Chief Constable of Greater Manchester Police & others [2015] AC 49 and R (Catt) v ACPO [2015] 2 WLR 664 on whether disclosures by police were in accordance with the law and proportionate.

The Court concluded that, in light of the above authorities, the disclosure made in AB’s case was “in accordance with the law”. It was made under the disclosure regime made up of Part V of the Police Act 1997, the Home Office’s Statutory Disclosure Guidance on enhanced criminal record certificates, section 10 of the Children Act 2004 and the Data Protection Act 1998.

See Jeremy Baker J’s conclusion – and notes of caution – at [73]-[75]:

“73. In these circumstances it seems to me that not only does the common law empower the police to disclose relevant information to relevant parties, where it is necessary for one of these police purposes, but that the DPA 1998, together with the relevant statutory and administrative codes, provide a sufficiently clear, accessible and consistent set of rules, so as to prevent arbitrary or abusive interference with an individual’s Article 8 rights; such that the disclosure will be in accordance with law.

74. However, it will clearly be necessary in any case, and in particular in relation to a decision to disclose information to a third party, for the decision-maker to examine with care the context in which his/her decision is being made.

75. In the present case, although the disclosure of the information by the police was to a LADO in circumstances involving the safeguarding of children, it also took place in the context of the claimant’s employment. The relevance of this being, as DC Pain was clearly aware from the contents of his e-mail to PS Bennett dated 10th June 2013, that the disclosure of the information had the potential to adversely affect the continuation of the claimant’s employment at the school….”

Was the disclosure proportionate?

While the disclosure decision was in accordance with the law, this did not remove the need for the police carefully to consider whether disclosure was necessary and proportionate, particularly in light of the serious consequences of disclosure for AB’s employment.

The Court held that the disclosure failed these tests. The crucial factor was that if such information about AB was well founded, it would have been contained in his Enhanced Criminal Record Certificate – and the fact that it was not would have prompted enquiries about the cogency of the information (why, if it was correct, was such serious information omitted from the ECRC?) which would reasonably have been pursued to bottom the matter out before the disclosure was made. These questions had not been asked in this case. See [80]-[81]:

“… In these circumstances, it was in my judgment, a necessary procedural step for DC Pain to ascertain from the DBS unit as to, whether, and if so, what information it had already disclosed on any enhanced criminal record certificate, as clearly if the unit had already disclosed the information which DC Pain believed had been provided to him by the college, then it would not have been necessary for him to have made any further disclosure of that information.

81. If either DC Pain or PS Bennett had taken this basic procedural step, then not only would it have been immediately obvious that this information had not been provided to the school, but more importantly, in the context of this case, it would also have been obvious that further enquiries were required to be made: firstly as to why no such disclosure had been made by the DBS unit; and secondly, once it had been ascertained that the only information which was in the possession of the DBS unit was the exchange of e-mails on the defendant’s management system, as to the accuracy of the information with which DC Pain believed he had been provided by the college.”

Judicial reviews of disclosure decisions concerning personal data: the DPA as an alternative remedy?

Finally, the Court dealt with a submission that judicial review should not be granted as this case focused on what was essentially a data protection complaint, which could have been taken up with the ICO under the DPA (as was suggested in Lord Sumption’s comments in Catt). That submission was dismissed: AB had not simply ignored or overlooked that prospect, but had rather opted to pursue an alternative course of complaint; the DPA did not really help with the police conduct complaint, and the case raised important issues.

Robin Hopkins @hopkinsrobin

Google and the DPA – RIP section 13(2)

Well, isn’t this an exciting week (and I don’t mean Zayn leaving One Direction)? First, Evans and now Vidal-Hall. We only need Dransfield to appear before Easter and there will be a full red bus analogy. Robin opened yesterday’s analysis of Evans by remarking on the sexiness of FOIA. If there is one thing you learn quickly as an information law practitioner, it is not to engage in a sexiness battle with Robin Hopkins. But high-profile though Evans is, the judgment in Vidal-Hall will be of far wider significance to anyone having to actually work in the field, rather than simply tuning in every now and then to see the Supreme Court say something constitutional against a FOIA background. Vidal-Hall might not be the immediate head-turner, but it is probably going to be the life-changer for most of us. So, while still in the ‘friend zone’ with the Court of Appeal, before it all gets serious, it is important to explain what Vidal-Hall v Google [2015] EWCA Civ 311 does.

The Context

The claims concern the collection by Google of information about the internet usage of Apple Safari users, by means of cookies. This is known as “browser generated information” or “BGI”. Not surprisingly, it is used by Google to target advertising at the user more effectively. Anyone who has experienced this sort of thing will know how bizarre it can sometimes get – the sudden appearance of adverts for maternity clothes on my computer followed eerily quickly from my having to research pregnancy information for a discrimination case I was doing. Apple Safari users had not given their consent to the collection of BGI. The Claimants brought claims for misuse of private information, breach of confidence and breach of the DPA, seeking damages under section 13. There has yet to be a full trial; the current proceedings arise because of the need to serve out of the jurisdiction on Google.
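For the non-technical reader, it may help to sketch how this kind of cookie-based tracking works in general terms. The following Python snippet is a minimal, hypothetical illustration – the names and data structures are my own assumptions, not anything drawn from Google’s actual systems – of how a tracking server can build up a browsing profile keyed to a random cookie ID rather than to a named person:

```python
# Minimal, hypothetical sketch of cookie-based BGI collection.
# Nothing here reflects Google's actual code; names are illustrative.
import uuid
from collections import defaultdict

# Profile store: cookie ID -> list of visited URLs
profiles = defaultdict(list)

def handle_request(cookies: dict, url: str) -> dict:
    """Record a page visit against the browser's tracking ID.

    The ID singles out a browser, not a person: the server can
    profile the same browser across sites without knowing a name.
    """
    tracking_id = cookies.get("tid")
    if tracking_id is None:
        # First visit: mint a new random ID and set it as a cookie
        tracking_id = uuid.uuid4().hex
        cookies = {**cookies, "tid": tracking_id}
    profiles[tracking_id].append(url)
    return cookies  # the 'browser' keeps the cookie for next time

# One browser visiting two pages accrues to a single profile
jar = handle_request({}, "https://example.com/maternity-clothes")
jar = handle_request(jar, "https://example.org/discrimination-law")
print(dict(profiles))
```

The point to hold on to is that the resulting profile is rich but nameless – which is precisely what makes Issue (3) below (whether BGI is personal data) interesting.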

The Issues

These were helpfully set out in the joint judgment of Lord Dyson MR and Sharp LJ (with whom McFarlane LJ agreed) at [13]: (1) whether misuse of private information is a tort; (2) whether damages are recoverable under the DPA for mere distress; (3) whether there was a serious issue to be tried that the browser generated data was personal data; and (4) whether permission to serve out should have been refused on Jameel principles (i.e. whether there was a real and substantial cause of action).

Issues (1) and (4) are less important to readers of this blog, and need only be mentioned briefly (#spoilers!). Following a lengthy recitation of the development of the case law, the Court held that the time had come to talk not of cabbages and kings, but of the tort of misuse of private information, rather than an equitable action for breach of confidence: at [43], [50]-[51]. This allowed service out under the tort gateway in PD6B. The comment of the Court on issue (4) is worth noting, because it held that although claims for breach of the DPA would involve “relatively modest” sums in damages, that did not mean the claim was not worth the candle. On the contrary, “the damages may be small, but the issues of principle are large”: at [139].

Damages under Section 13 DPA

Issue (2) is the fun stuff for DP lawyers. As we all know, Johnson v MDU [2007] EWCA Civ 262 has long cast a baleful glare over the argument that one can recover section 13 damages for distress alone. The Court of Appeal have held such comments to be obiter and not binding on them: at [68]. The word ‘damage’ in Art 23 of the Directive had to be given an autonomous EU law meaning: at [72]. It also had to be construed widely having regard to the underlying aims of the legislation: the legislation was primarily designed to protect privacy not economic rights and it would be strange if data subjects could not recover compensation for an invasion of their privacy rights merely because they had not suffered pecuniary loss, especially given Article 8 ECHR does not impose such a bar: at [76]-[79]. However, it is not necessary to establish whether there has also been a breach of Article 8; the Directive is not so restricted (although something which does not breach Article 8 is unlikely to be serious enough to have caused distress): at [82].

What then to do about section 13(2) which squarely bars recovery for distress alone and is incompatible with that reading of Article 23? The Court held it could not be ‘read down’ under the Marleasing principle; Parliament had intended section 13(2) to impose this higher test, although there was nothing to suggest why it had done so: at [90]-[93]. The alternative was striking it down on the basis that it conflicted with Articles 7 and 8 of the EU Charter of Fundamental Rights, which the Court of Appeal accepted. In this case, privacy and DP rights were enshrined as fundamental rights in the Charter; breach of DP rights meant that EU law rights were engaged; Article 47 of the Charter requires an effective remedy in respect of the breach; Article 47 itself had horizontal direct effect (as per the court’s conclusion in Benkharbouche v Embassy of Sudan [2015] EWCA Civ 33); the Court was compelled to disapply any domestic provision which offended against the relevant EU law requirement (in this case Article 23); and there could be no objections to any such disapplication in the present case e.g. on the ground that the Court was effectively recalibrating the legislative scheme: at [95]-[98], [105].

And thus, section 13(2) was no more. May it rest in peace. It has run down the curtain and joined the bleedin’ choir invisible.

What this means, of course, is a potential flood of DP litigation. All of a sudden, it will be worth bringing a claim for ‘mere’ distress even without pecuniary loss, and there can be no doubt many will do so. Every breach of the DPA now risks an affected data subject seeking damages. Those sums will invariably be small (there is no suggestion from the Court of Appeal that Article 23 requires a lot of money), and perhaps not every case will involve distress, but it will almost always be worth a try for the data subject. Legal costs defending such claims will increase. Any data controllers who were waiting for the new Regulation with its mega-fines before putting their house in order had better change their plans…

Was BGI Personal Data

For the DP geeks, much fun was still to be had with Issue (3). Google cannot identify a particular user by name; it only identifies particular browsers. If I search for nasal hair clippers on my Safari browser, Google wouldn’t recognise me walking down the street, no matter how hirsute my proboscis. The Court of Appeal did not need to determine the issue; it held only that there was a serious issue to be tried. Two main arguments were run: first, whether the BGI looked at in isolation was personal data (under limb (a) of the definition in section 1(1) DPA); and secondly, whether the BGI was personal data when taken together with Gmail account data held by Google (an application of limb (b)).

On the first limb, the Court held that it was clearly arguable that the BGI was personal data. This was supported by the terms of the Directive, an Article 29 WP Opinion and the CJEU’s judgment in Lindqvist. The fact that the BGI data does not name the individual is immaterial: it clearly singles them out, individuates them and therefore directly identifies them: at [115] (see more detail at [116]-[121]).

On the second limb, it was also clearly arguable that the BGI was personal data. Google had argued that in practice it had no intention of amalgamating the BGI with the account data, and that there was therefore no prospect of identification. The Court rejected this argument both on linguistic grounds (having regard to the wording of the definition of personal data, which does not require identification to actually occur) and on purposive grounds (having regard to the underlying purpose of the legislation): at [122]-[125].
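To make the limb (b) point concrete, here is a minimal, hypothetical sketch – again, my own illustrative assumption rather than anything drawn from Google’s systems – of how nameless BGI becomes identifying once the controller also holds account data keyed to the same browser:

```python
# Hypothetical illustration of limb (b): data naming no one becomes
# personal data where the controller holds other information from
# which an individual can be identified. Purely illustrative.

# BGI held against an opaque browser ID - it singles out a browser
bgi_profiles = {
    "a3f9c2": ["example.com/maternity-clothes", "example.org/nasal-clippers"],
}

# Separately held account data, keyed to the same browser ID at login
account_records = {
    "a3f9c2": {"name": "A. User", "email": "a.user@example.com"},
}

def identify(browser_id: str):
    """Join the two datasets. The definition asks whether this join
    is possible, not whether the controller actually performs it."""
    profile = bgi_profiles.get(browser_id)
    account = account_records.get(browser_id)
    if profile is not None and account is not None:
        return account["name"], profile
    return None

print(identify("a3f9c2"))  # the nameless browsing history acquires a name
```

As the Court’s linguistic point reflects, the definition bites on what the data controller could do with the information in its possession, not on what it says it intends to do.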

A third route of identification was also canvassed: individual users could be identified by third parties who access the user’s device and then learn something about the user by virtue of the targeted advertising displayed. The Court concluded that this was a difficult question, that the judge had not been plainly wrong on it, and that it should be left for trial: at [126]-[133].

It will be interesting to see whether the trial happens. If it does, there could be some valuable judicial discussion on the nature of the identification question. For now, much is left as arguable.

Conclusion

The Court of Appeal’s judgment in Vidal-Hall is going to have massive consequences for DP in the UK. The disapplication of section 13(2) is probably the most important practical development since Durant, and arguably more important than Durant itself. Google are proposing to seek permission to appeal to the Supreme Court, and given the nature of the issues they may well get it on Issues (1) and (2) at least. In the meantime, the Court’s judgment will repay careful reading. And data controllers should start looking very anxiously over their shoulders. The death of their main shield in section 13(2) leaves them vulnerable, exposed and liable to death by a thousand small claims.

Anya Proops and Julian Milford appeared for the ICO, intervening in the Court of Appeal.

Christopher Knight

PS No judicial exclamation marks to be found in Vidal-Hall. Very restrained.