The British Government has released plans for a comprehensive overhaul of how tech companies monitor user content. In an effort to combat a rise in digital crime and the spread of harmful content, the Department for Digital, Culture, Media and Sport released a draft Online Safety Bill in the middle of 2021. Since its release, the Bill has undergone a radical redrafting in the hope that Britain can become the “safest place in the world to go online”.
While the Bill will be subject to parliamentary scrutiny and likely amendment, the key aspects of the current draft of the Bill are:
Tech companies will have a duty of care to protect users from harmful content by:
- preventing illegal content and activity online (like terrorism and fraud);
- ensuring children are not exposed to inappropriate content; and
- for ‘Category 1’ companies only (such as Twitter and Facebook), monitoring and removing legal but harmful content. These companies will be required to set out in their terms of service how such content will be dealt with. The Government has also signalled that it will provide additional guidance on this via further legislation.
There is support for increased responsibilities, but the requirement to remove content that is legal but harmful has been met with significant concern. The Government has tried to address this concern by introducing a right of appeal for users who feel their content has been removed unfairly.
The legislation will also create three new online offences:
- Posting or sending a message that conveys a threat of serious harm. This is intended to better capture online threats to kill or cause serious harm and will carry a sentence of up to five years’ imprisonment.
- Sending a communication that is intended to cause psychological harm. This offence will carry a prison sentence of up to two years and is aimed at criminalising social media “pile-ons” (where online hate is directed at an individual).
- Deliberately sending false messages with the intention of causing harm (such as bomb hoaxes). This will carry a prison sentence of up to 51 weeks.
Online platform providers will be expected to do more to protect users from fraudulent adverts and scams. Some providers will also be required to carry out age checks on restricted content – this fulfils a long-standing British Government commitment to restrict the viewing of certain content.
It is intended that Ofcom, Britain’s communications regulator, will be responsible for policing the regulatory requirements and will have the power to fine a breaching technology company up to 10% of its global turnover. Ofcom will also have the ability to prosecute company executives who fail to comply with regulatory requests. If found liable, executives could face a penalty of up to two years’ imprisonment.
The proposals have received criticism in Britain, with one Government MP labelling the Bill a “censor’s charter”. However, the Bill is yet another recent example of global attempts to impose new regulations on technology companies. Late last year, the Australian Government announced its intention to introduce legislation that would force social media companies operating in Australia to collect the personal details of all users. Companies would also be required to have an established complaints process through which users can ask for content to be taken down if they consider it defamatory – if a post is not removed, the social media company can be compelled by court order to reveal the poster’s identity. More recently, the US President, Joe Biden, has urged Congress to introduce legislation that would strengthen privacy protections and prohibit companies from collecting children’s personal information.
We have not seen any similar proposed regulatory changes in New Zealand, but we would expect that as other like-minded countries impose restrictions, we will see similar actions taken here. We will keep you updated on any developments.