The reality of the Online Safety Bill should terrify us all

Steve Topple

The Online Safety Bill has hit the headlines this week. This is because the government published its full response to its online harms consultation, including plans for a new bill in 2021.

But while much of the establishment media ran with the angle of social media companies facing hefty fines, the reality of what the government is proposing is much scarier. Because it poses a grave threat to many of our freedoms and rights.

Policing the internet

The Evening Standard reported that:

In the Online Safety Bill, to be brought forward next year, Ofcom will be given the power to fine companies up to £18 million or 10 per cent of global turnover, whichever is higher, for failing to abide by a duty of care to their users – particularly children and the vulnerable.

It also noted how Ofcom will have:

the power to block non-compliant services from being accessed in the UK

And that:

the proposals will see Ofcom require firms to use highly accurate targeting technology to monitor, identify and remove illegal material such as that linked to child sexual exploitation and abuse.

On the face of it, the government policing the internet to protect vulnerable people does not seem like a bad thing. But there are numerous problems with this. The first concerns news organisations.

A free press?

The government claims it is:

committed to defending the invaluable role of a free media and is clear that online safety measures must do this

It has said that news organisations’ own sites will not fall under the bill. But it was vaguer about what happens when people share this content on social media:

In order to protect media freedom, legislation will include robust protections for journalistic content shared on in-scope services. The government will continue to engage with a wide range of stakeholders to develop proposals that protect the invaluable role of a free media

This must be viewed in tandem with several factors.

Clamping down on uncomfortable news?

Firstly, by the government’s own admission, social media companies already crack down on news content. For example, in October, Twitter limited the sharing of an article about Joe Biden’s son, published by the Rupert Murdoch-owned New York Post. Whatever your opinion on Murdoch, the right-wing Post, or the article – limiting the reach of a major news outlet is a fairly bold move from Twitter.

And then there’s the scandal of Facebook intentionally restricting the reach and visibility of left-wing news outlets via an algorithm. The point is that social media companies already restrict the reach of news – sometimes regardless of whether the source is a corporate giant or an independent outlet.

But the Online Safety Bill will also allow Ofcom to fine social media companies up to 10% of their annual global turnover or £18m, whichever is greater, for breaches of their “duty of care”. So, threats of fines will now hang over social media companies’ heads. Executive director of the Society of Editors Ian Murray told the Press Gazette:

The digital platforms when faced with huge fines for non-adherence to the new regulations may resort to the use of sweeping algorithms to remove content deemed as harmful.

Could it be that the Online Safety Bill will inadvertently cause even more censorship across social media? If the current state of the algorithms is anything to go by, then yes.

Out-of-control algorithms

As The Canary previously reported, Twitter’s algorithms are already subject to controversy. People have alleged that vexatious complaints by others have caused Twitter to suspend high-profile socialist accounts. These included The Canary‘s editor-at-large Kerry-Anne Mendoza. On many of these occasions, it was Twitter’s algorithms working without context. For example, it suspended Mendoza for sharing her own email address. She believes it was after people put in complaints about her.

But this is not just a UK phenomenon, as The EastAfrican reported. Ruth Omondi wrote about Zimbabwean activists falling foul of Twitter’s algorithms, noting that it:

highlights the issues surrounding the use of algorithms and the inherent biases therein. We are perhaps getting to that point that some scholars in 2012 called “automation bias running rampant”. While social media platforms like Facebook, YouTube, and Twitter are increasingly banking on artificial intelligence technology to flag and stop the spread of hate speech, disinformation and other abusive and offence content, studies are showing that algorithms that flag hate speech and disinformation online are biased against a certain category of people. Instead of filtering out disinformation and hate speech online, the algorithms trained to identify these may instead amplify the biases.

So, dodgy algorithms coupled with inherent bias and the threat of fines may well equal even more censorship. But the Online Safety Bill wants to take this even further. It intends to invade our private messages and internet viewing, too.

Big Brother will be watching you

TechCrunch wrote that:

The online safety “duty of care” rules are intended to cover not just social media giants like Facebook but a wide range of internet services — from dating apps and search engines to online marketplaces, video sharing platforms and instant messaging tools, as well as consumer cloud storage and even video games that allow relevant user interaction.

P2P [peer-to-peer] services, online forums and pornography websites will also fall under the scope of the laws, as will quasi-private messaging services

It said:

That raises troubling questions about whether the legal requirements could put pressure on companies not to use end-to-end encryption (i.e. if they face being penalized for not being able to monitor robustly encrypted content for illegal material).

In other words, the government aims to extend its reach into almost every aspect of our internet life. Now, some people may claim that ‘if you’re not doing anything wrong, then you have nothing to hide’. Sadly, it’s never quite as simple as that.

WhatsApp? Facebook?

So far, the government hasn’t specified which platforms will fall under the bill’s remit. But as it states:

The regulatory framework will apply to public communication channels and services where users expect a greater degree of privacy – for example online instant messaging services and closed social media groups.

So, instant messaging services like WhatsApp may be included. The government hasn’t finalised how it will regulate these. It claims it’s doing this in the context of child abuse. But delve deeper into the government’s proposals, and this changes. Under the area of “safety by design”, where it discusses how tech is built, the government states that:

a user journey that allows the user to forward messages to an endless number of people risks limiting the user’s ability to critically assess content, and leaves them more vulnerable to engaging with misinformation and disinformation online.

This would seemingly apply to WhatsApp’s group messaging facility, where you can add your contacts to a WhatsApp group and then post messages to all of them. It also allows people who aren’t contacts to communicate: groups can have up to 256 members, and you may only have one of them as a contact. Technically, you can also attempt to add any mobile number to your WhatsApp contacts. From personal experience, I know a lot of activists who use this to communicate.

It’s a similar story with Facebook “secret conversations”, where you can message anyone who uses Facebook with end-to-end (E2E) encryption. Non-E2E services like Instagram private messaging (PM) have a similar facility – letting you contact anyone who uses the platform. And across all these services, you can forward a message to “endless” people – depending on what the government’s definition of “endless” is.

But the government trying to clamp down on platforms like WhatsApp is not new.

The truth becomes clear

Privacy International wrote in 2019 that:

Make no mistake, government agencies are ramping up efforts to access secure end-to-end encrypted communications. Earlier this year, government representatives from Australia, Canada, New Zealand, the UK, and the US (“The 5 Eyes”) issued a statement calling on tech companies to “include mechanisms in the design of their encrypted products and services whereby governments, acting with appropriate legal authority, can obtain access to data in a readable and usable format”.

It now seems that the Online Safety Bill, with its “safety by design” elements, will go hand-in-hand with this snooping – coupled with the contentious Investigatory Powers Act 2016.

But if the government has designed the bill to protect people from harmful content like child sexual abuse imagery and self-harm, then what’s the issue? The issue is precisely how “harmful content” will be defined.

Definitions to suit agendas?

The obvious question is who will define what harmful content is. The answer? The government will. As its response states:

The legislation will set out a general definition of harmful content and activity. A limited number of priority categories of harmful content, posing the greatest risk to users, will be set out in secondary legislation. This will provide legal certainty for companies and users.

Secondary pieces of legislation are also called statutory instruments (SIs). Parliament’s website notes that:

Secondary legislation is used to add information or make changes to an existing Act of Parliament.

Sometimes MPs can vote on secondary legislation. But not always. So-called negative procedure SIs can be passed without a vote. So, the government could push through its definition of “harmful content and activity” with little scrutiny. And here is where the Online Safety Bill will fit perfectly into the Tories’ personal agendas.

The bigger picture

We’ve already seen the police branding organised activist movements like Extinction Rebellion “domestic extremists”. The government has repeatedly called the Kurdish People’s Protection Units (YPG) ‘terrorists’, and the Crown Prosecution Service (CPS) has already tried to criminalise people supporting them. Historically, senior Green Party members have been subject to domestic extremism monitoring. And in the independent media, official government inquiries have threatened to target outlets like The Canary under the guise of antisemitism.

As if censorship on the internet wasn’t bad enough already, the Online Safety Bill will just entrench and further it. It arms both the government and social media companies with extra tools. They could use these to crack down even more actively on dissent, legitimate protest, and opposition.

The Online Safety Bill appears to aim to reduce the harm online content can cause to people. But when viewed in tandem with various other pieces of legislation, and the government’s own distinctly authoritarian agenda, this may well end up not being the case. It represents another potential and terrifying assault on our freedom and privacy; one the government is ushering in under the guise of acting in our best interests.

Featured image via TheDigitalArtist – pixabay
