
Policing Online Comments in Europe: New Human Rights Case Law in the Real World

April 11, 2016

This is the second of four posts on real-world consequences of the European Court of Human Rights’ (ECHR) rulings in Delfi v. Estonia and MTE v. Hungary. Both cases arose from national court rulings that effectively required online news portals to monitor users’ speech in comment forums. The first case, Delfi, condoned a monitoring requirement in a case involving threats and hate speech. The second, MTE, held that a nearly identical requirement in a trade defamation case violated free expression guarantees in the European Convention on Human Rights (Convention).

The two rulings are explained in more detail in Post 1. This post considers how – or whether – the two cases will affect daily operations for Internet intermediaries.

The ECHR rulings do not consider every law governing the parties – only the fundamental rights guarantees of the Convention. Real-world Internet platforms operate under a more complex set of rules, including national implementations of the eCommerce Directive in the EU. For platforms secure in their existing national law protections, the ECHR rulings don’t change anything. Hosts that qualify for immunity under Article 14 of the eCommerce Directive still cannot be subject to general monitoring obligations, because Article 15 prohibits member states from imposing them. All the Delfi and MTE rulings tell us is that countries could make news platforms liable for hate speech and threats in user comments, in circumstances like those in Delfi, without violating the European Convention on Human Rights.

But for a platform uncertain how it would fare under the eCommerce Directive, or operating in countries outside the EU without comparable black-letter law, Delfi is bad news. And, unfortunately, MTE doesn’t really change that. There is nothing in MTE to alter a hosting platform’s calculus in deciding whether to monitor user expression, and the ECHR’s new ruling won’t cause many platforms to leave controversial content online when they find it.

The first problem, as several commentators have observed, is that monitoring is an all-or-nothing proposition. It doesn’t matter that only worst-of-the-worst content, like hate speech, can trigger a monitoring duty. Intermediaries by definition don’t know whether they have hate speech on their sites, so the duty is always there for platforms in Delfi’s situation.

Second, employees looking for comments that constitute Delfi-level threats and hatred will inevitably come across other user expression that might be MTE-level defamation. The ECHR says intermediaries can’t be required to look for those comments. But once employees see them, those comments will presumably be removed, too. That’s what the eCommerce Directive and many tort laws require when an intermediary gains “knowledge” of unlawful content, whether through notice or other means. Because platform employees can’t assess the truth of disputed facts, are hard-pressed to make legal judgments that are difficult even for courts, and risk liability by leaving comments up, they have little incentive to stand up for speech that could get them sued. Empirical studies tell us to expect over-removal of “gray area” content in the face of such uncertainty.

Thus, even for the kinds of controversial speech at issue in the MTE case itself, the ruling does not get intermediaries out of the monitoring business. Under a Delfi/MTE rule, tech platforms would still go looking for hate speech, find other potentially unlawful content, and presumably remove it – with precisely the “foreseeable negative consequences on the comment environment of an Internet portal” and “chilling effect on the freedom of expression on the Internet” that the Court identified and tried to avoid. (P. 86)

The Delfi ruling adds one other odd element for platforms trying to operationalize these rulings – and for practitioners trying to understand the Court’s logic. Delfi held that the news portal did not have to monitor user content before it appeared on its website, but instead had to act “without delay after publication” to remove hate speech. This is a pretty meaningless distinction for anyone running a hosting service – a matter of deleting content the second after it flickers onto the screen, instead of the second before.

The Court thinks this sequencing matters for free expression. It says that “regard to the freedom to impart information as enshrined in Article 10” drives its conclusion that the Estonian ruling requires only post-publication removal, and that the Grand Chamber “[c]onsequently” finds no “disproportionate interference with its freedom of expression.” (My emphasis. You can try parsing this language for yourself in P. 153 of Delfi.) This may be about avoiding a nominal prior restraint. Or it may mean the Court thinks it would be technically difficult for platforms to hold comments for pre-publication review, and wants to give at least some temporal window for compliance. For most platforms, I expect, the distinction doesn’t matter much. In either case, it adds a bottleneck of human review for every comment posted by users.

* * *

Tomorrow's post will look at how Delfi and MTE may affect current and future litigation in cases involving platform liability.

This article was originally published at the CIS Blog as “Policing Online Comments in Europe: New Human Rights Case Law in the Real World.”