Out-Law Analysis
17 Apr 2019
Further thought is needed on the government's proposals to avoid the creation of a new regulatory framework that raises more problems than it solves.
Before getting into the detail of what is being proposed, it is worth taking a step back and thinking about this in a wider context.
Nearly 20 years ago the EU's E-Commerce Directive marked a general international consensus, also reflected in the Digital Millennium Copyright Act in the US, that internet intermediaries should enjoy a high degree of protection from liability in relation to the use of their services by third parties. This protection was put into effect by way of 'safe harbours' written into law.
The overarching motivation for legislating in this way was to encourage investment and innovation in online technology, the benefits of which we now take for granted. Those old enough to remember what technology was like at the turn of the millennium will recall how, for example, websites were static in content before the introduction of Web 2.0, and messaging was limited to email and text messaging from phones which now look archaic.
Since then, some of the concern about stifling innovation has fallen away as a result of the successes that many tech companies have enjoyed, and the general view now is that tech companies do not need to be protected from risks to their revenues.
The way that technology has developed allows for much more user interaction, meaning everybody can produce content, both written and audio-visual, which can become widely accessible. This is both liberating and risky: liberating because it means the ability to share information and to debate online is no longer limited to the few; risky because bad actors have capitalised on the opportunity to manipulate the way in which information can be shared.
Examples include the controversy over interference in elections, use of click farms, and the growth of hate speech on social media platforms. Those examples are based on scale. Some online harms are more localised, and can be dangerous in a different way. The case of Molly Russell, whose death prompted profound concern over the availability online of content relating to suicide and self-harm, is well-known, and was one of the catalysing factors for the proposals set out in the white paper.
How to respond to these kinds of misuse of internet companies’ services is not straightforward. In addition to the government’s internet safety strategy and digital charter, of which the white paper is the latest step, there have in recent years been numerous other developments which touch, to a greater or lesser extent, on these important issues.
In addition, various papers have been published by NGOs, charities and other interest groups, and a number of prominent campaigns have been pursued, including the NSPCC’s '#WildWestWeb' campaign and The Telegraph’s 'Duty of Care' campaign. There was also effective lobbying from the Carnegie UK Trust, on whose behalf Professor Lorna Woods and William Perrin submitted evidence to the House of Lords Communications Committee in May 2018. Their submissions contained many of the proposals which feature in the white paper, including the duty of care.
Amid this widespread push towards regulation, the courts have of course continued to deal with many of these, and related, issues in the way they always have, namely by applying and developing the law incrementally. But this government has very clearly reached the view that neither self-regulation by the internet companies nor existing laws have provided sufficient protection, and that something therefore needs to change.
The white paper proposes a new statutory framework for internet companies. It proposes that a duty of care should be established “to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services”, and that compliance with the duty will be overseen and enforced by an independent regulator. The regulator would set out in codes of practice how companies can fulfil their new legal duty.
Central to the proposed new regulatory framework will be developing a “culture of transparency, trust and accountability”, with companies likely to be required to provide annual transparency reports outlining the prevalence of harmful content on their platforms and the measures taken to address it. These reports will be published online by the regulator, allowing users and parents to make informed decisions about internet use.
The regulator’s proposed enforcement powers will include the issuing of fines, and the government is consulting on what powers should be available in the most serious cases. Proposals include imposing liability on individual members of companies’ senior management, and even blocking non-compliant services in this jurisdiction, which the paper describes as “an enforcement action of last resort”. These are broad proposals, and it is important that further thought is given to the effects they may have on the way in which internet use develops.
There appears to be little resistance from the big tech companies to the idea of being regulated. In fact, for some time now there seems to have been a recognition that regulation is likely to be both necessary and useful. That has already been seen in the US, where the big tech companies have been lobbying Congress for a national privacy statute. And here, in anticipation of the white paper, those companies wrote to ministers in February to set out their thoughts on how regulation in this jurisdiction might work.
In a recent article published in the Washington Post, Mark Zuckerberg stated that the internet needs new rules, specifically in the areas of harmful content, election integrity, privacy and data portability, and he argued for “a more active role for governments and regulators”. In relation to harmful content, he stated that Facebook has created an independent body to enable people to appeal Facebook’s decisions. Acknowledging that “internet companies should be accountable for enforcing standards on harmful content”, he also proposed that a third party body should “set standards governing the distribution of harmful content and to measure companies against those standards”. This chimes with the approach that the government is proposing.
Some representatives of smaller tech companies believe it is no surprise that the large tech companies have come down in favour of regulation since, they say, it is only the large tech companies that have the resources to comply with the new regime, leaving the smaller companies most vulnerable to enforcement action. Yet that concern is something which the big tech companies themselves raised in their letter, and it is also a point which the white paper meets head on. It states: “To ensure a proportionate approach and avoid being overly burdensome, the application of the regulatory requirements and the duty of care model will reflect the diversity of organisations in scope, their capacities, and what is technically possible in terms of proactive measures”.
If some form of regulation seems to be largely uncontroversial, the proposed duty of care is more complicated. The creation of a specific statutory duty of care is a highly unusual thing in itself and is not a step which has been, or should be, taken lightly.
In general, the law of negligence has developed incrementally through case law. There are some limited examples of a duty of care being imposed on a statutory basis, for example in relation to occupiers’ liability, but these were created to address specific anomalies rather than to forge new legal frontiers. The duty of care being proposed is far less precisely defined: it relates to all harms addressed in the white paper, i.e. content which is illegal and content which is legal but harmful, with more stringent requirements applying to the former.
What appears to be proposed is that a company will be in breach of the duty of care if it fails to comply with the codes of practice which the regulator will publish. Those codes will set out “the systems, procedures, technologies and investment, including in staffing, training and support of human moderators, that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users”. So it is apparently envisaged that a systemic failure by a company to meet the requirements set out in the codes of practice will amount to a failure to fulfil the duty of care.
However, it is not clear whether or how this really alters the position for an individual who wishes to bring a claim in the courts based on content on an internet company’s service. The duty does not seem to give rise to a new cause of action enabling individuals to bring claims in negligence against internet companies for hosting content to which they object, although the white paper does envisage that “the regulatory model will provide evidence and set standards which may increase the effectiveness of individuals’ existing legal remedies”.
This section of the white paper refers specifically to currently available negligence and breach of contract claims, but then states that “if the regulator has found a breach of the statutory duty of care, that decision and the evidence that has led to it will be available to the individual to use in any private action”. It is unclear whether the government has really thought this aspect of the proposals through.
The interaction between the new regime and the existing safe harbours is also unclear. It is notable that the white paper states that “the new regulatory framework will increase the responsibility of online services in a way that is compatible with the EU’s e-Commerce Directive”, though it does appear to anticipate “mandating specific monitoring that targets where there is a threat to national security or the physical safety of children”.
Other implications of an expansion in the law also need to be thought through. For example, a finding that a company is in breach of the statutory duty of care may well be seized upon by claimant law firms and funders keen to get class actions off the ground, with their own motives in mind.
There have also been some concerns raised about the effect that the proposals could have on freedom of speech, with some suggesting that, by requiring the removal of legal content, even if that content may be harmful to some, the proposals are tantamount to censorship. Some might argue that this is a very good reason why the courts, which frequently grapple with issues around Article 10 of the European Convention on Human Rights and its interaction with other rights, would provide the best forum for developing the law in this area, despite legitimate concerns about the pace of change brought about that way.
There is also surely a question as to whether the regulation does anything to address the root cause of the harms. To state the obvious, there is an inevitable limit to what regulating the internet companies can really achieve, since even the most optimistic advocates of the proposed new regime would presumably accept that all the fines in the world, and even blocking specific platforms, won’t stop people finding ways to publish or broadcast harmful content online.
There is an argument that developments in the law which enhance individuals’ ability, or the authorities’ powers, to take action against the actual perpetrators of online harm should be the focus for tackling that harm. It is unclear whether the current proposals will assist that objective at all.
Finally, there is a big question around the extent to which the government intends private communications to be caught by the proposals. The paper talks of a “differentiated approach” for such communications in order to reflect the importance of privacy, but it is consulting on “appropriate definitions and what regulatory requirements can and should apply to private communication services”. Conceptually, it is difficult to see why a WhatsApp group - i.e. a private communication - with, say, thirty participants, should be treated differently from a closed group on Facebook with the same number of members. Harmful content could plainly appear in either.
The idea of regulating internet companies seems to be favoured by government, the public and the internet companies themselves, or at least the larger ones. There remain, however, a number of issues with the proposed regime, including in relation to scope and efficacy, and it remains to be seen whether the government is able to iron out these issues following the consultation.
More thinking is needed if we are to avoid establishing a regulatory framework which creates more problems than it solves.
David Barker and Alex Keenlyside are experts in media law at Pinsent Masons, the law firm behind Out-Law.com.