Blog · 8th April 2019

How behavioural insights can help us regulate tech companies and keep people safe online

New White Paper from the UK government aims to tackle online harms

Mark Zuckerberg, Facebook’s Chief Executive, published this call to action in the Washington Post last week:

I believe we need a more active role for governments and regulators. By updating the rules for the Internet, we can preserve what’s best about it — the freedom for people to express themselves and for entrepreneurs to build new things — while also protecting society from broader harms.

While his op-ed was met with a healthy dose of scepticism about Facebook’s underlying commercial incentives, Zuckerberg made several sensible proposals – such as transparency reports that detail what companies are doing to identify and remove harmful content – that merit public debate and discussion.

Today, with the same intention of balancing innovation, freedom of expression and protection, the UK Government published a White Paper setting out its approach to tackling serious and clearly defined online harms, such as terrorist activity and child sexual exploitation and abuse (CSEA). Its central proposal is to create a new independent regulator to enforce a statutory duty of care on companies to keep their users safe online.

We think this is a well-considered and sensible approach, one that responds to real and legitimate public concerns about the power and conduct of tech companies, and about how technology is shaping the way we behave and interact with one another.

The merits of a duty of care, and the resulting sanctions for companies and individuals, will be widely debated over the coming weeks and months. So here, we offer reflections on how embedding behavioural insights into the design of the regulatory regime can shift how users behave online, as well as help incentivise Facebook and other companies to make their platforms as safe as possible.

The new regulator should use its enforcement powers in a way that targets brand and reputation. The White Paper proposes that the regulator have powers to gather information on what companies are doing, for example to identify and remove disinformation on their platforms. It is just as important, however, to think carefully about how and when to reflect that information back to companies and users in order to shift behaviour.

For example, rather than simply publishing transparency reports on the regulator's website, that information should be used to rate and rank companies: which websites are safe for children, which are more appropriate for adults, and which are ones where many adults might feel uncomfortable or want to exercise greater caution? The information could also be given directly to advertisers, so that they can make choices about the volume and type of harmful content their brand is being seen alongside. For Facebook and other tech companies, public brand and reputation are amongst their most precious assets. The regulator should use the information it gathers to target these, creating more direct commercial incentives for companies to address online harms.

The new regulator should use behavioural insights to help citizens understand and engage with the duty of care. While we agree with the ambition of 'citizens who understand the risks of online activity, challenge unacceptable behaviours and know how to access help', it's important to note that this level of consumer engagement hasn't happened organically in other markets (see, for example, the Loyalty Penalty in essential services markets). Realising this vision requires attention and careful design on the part of the regulator; simply explaining the duty of care in terms and conditions is unlikely to cut it. Companies could, for example, be required to prompt users with small chunks of information about the duty of care at relevant trigger points. It's also essential that complaint mechanisms are genuinely low-friction and consistent across platforms, and that companies are judged not just on response time but on how satisfactorily they resolve complaints.

The new regulator, in partnership with companies, should create a place for people to negotiate with each other and reach a collective view on what constitutes harmful content and behaviour, and what to do about it. We are encouraged to see that the regulator will have a responsibility to consider the views of users. Working out exactly how to engage users more deeply in deliberation about the grey areas of online harms, especially those that aren't clearly defined, is not a trivial matter. But it is important that citizens are able to play a part in shaping these definitions and rules: it is their lives and relationships that are under discussion.

Realistically, people cannot spend their lives debating the rules of our online world. But it is possible to build in deliberative mechanisms that take representative samples of users, immerse them in the issues for a couple of days, and ask them to give a view on behalf of the community.

There is more that can be done alongside regulation to change the behaviour of companies and users. We’ve been working on a new report on the behavioural science of online harm and manipulation, which explores the nudges and behavioural solutions that can, and should, sit alongside the regulatory powers set out in the White Paper.


Join the discussion – sign up to be the first to receive our upcoming paper on online harms.