
Proposed overhaul: Social media giants could face fines for failing to take down illegal content

Facebook, Twitter and YouTube could face $200,000 fines for failing to take down “objectionable” content, under a major proposal for dealing with digital harm in the wake of the March 15 terrorist attacks in Christchurch.

A civil fine is one aspect of a new bill, introduced today by Internal Affairs Minister Tracey Martin, that would make social media platforms more responsible for the content they host in New Zealand.

Someone livestreaming content like the footage of the terror attack would also face criminal punishment.

But questions remain over how enforceable a fine would be, given that many of these online platforms are based offshore.

Currently the likes of Facebook and Twitter, under the Harmful Digital Communications Act (HDCA), are not considered traditional publishers and are protected from legal liability under the “safe harbour” provisions of that law.

But the new Films, Videos, and Publications Classification Amendment Bill would create a new position – “Inspector of Publications” – who could issue take-down notices for objectionable content.


Examples of such content, deemed “injurious” to the public good, include depictions of torture, sexual violence, child sexual abuse, or terrorism.

Failure to comply as soon as “reasonably practicable” could lead to a civil fine of up to $200,000, as determined by the District Court.

This is not as severe as similar laws proposed overseas, some of which allow for larger fines (Australia) or set specific compliance deadlines, such as 24 hours (Germany).

The new criminal offence for livestreaming objectionable content would be punishable by 14 years’ jail for an individual or a fine of up to $200,000 for a company.

However, a social media platform would not be criminally liable simply for hosting the livestreamed content, and the livestreamer could only be charged if they were in New Zealand.

In an acknowledgement of how quickly harmful content can go viral – the March 15 footage was uploaded 1.5 million times within 24 hours – the bill would also provide a way for the Chief Censor to quickly declare a publication illegal.

Internal Affairs Minister Tracey Martin said that the new bill was part of a wider Government programme to tackle violent extremism. Photo / Mark Mitchell

“This is about protecting New Zealanders from harmful content they can be exposed to on their everyday social media feeds,” Martin said in a statement.

What’s wrong with the current law

• There is no specific offence for livestreaming illegal material
• There is no specific legal power to issue take-down notices to social media giants
• There is no mechanism for the Chief Censor to quickly declare content illegal
• The likes of Facebook and Twitter are protected from legal liability under “safe harbour” provisions

What is proposed

• Criminal offence to livestream illegal material
• A new inspector position could issue take-down notices; failure to comply could lead to a $200,000 fine
• Such notices would override the “safe harbour” protections
• The Chief Censor could quickly declare content illegal

Why is this needed

• It took the Censor’s office three days to declare the March 15 footage illegal
• The current rules around digital harm are made up of a patchwork of laws and are no longer fit for purpose
• Overseas countries are moving to make social media platforms more responsible for the content they host

The bill has been anticipated since Prime Minister Jacinda Ardern, on her way back from the Christchurch Call summit in Paris a year ago, said that domestic laws around digital harm were not fit for purpose.

The March 15 attacks laid bare the inadequacies of the current laws. It took three days for the Censor’s Office to classify the gunman’s livestream as illegal, objectionable content.

Currently it is mostly left to social media platforms to take down violent content voluntarily, in line with their own guidelines, rather than being compelled to by New Zealand law.

There is no specific legal power to order the likes of Facebook to remove content, nor is livestreaming a specific offence.

Social media giants are also currently free from legal liability if they comply with the “safe harbour provisions” in the HDCA.

The new bill would take precedence over those provisions, making online content providers liable for content that was the subject of a take-down notice.

PM Jacinda Ardern, pictured here with Facebook 2IC Sheryl Sandberg in New York, said a year ago that New Zealand laws around digital harm needed to change. Photo / Supplied

The bill would also set up a framework for a government-backed web filter to block clearly defined illegal content.

Such a filter, a voluntary one, already exists to block child sexual exploitation material, but any new filter would face scrutiny because of its potential impact on freedom of expression.

Some of the industry concerns about the bill, highlighted in a Regulatory Impact Assessment (RIA), include issues around censorship and unduly limiting freedom of expression.

The possibility that take-down notices might be ineffective was also raised.

“Many online content hosts are based overseas without a New Zealand footprint, which means they cannot be found liable under New Zealand law,” the RIA said.

“We can partially mitigate this risk by imposing civil pecuniary penalties that can be enforced either in New Zealand, or by other countries where mutual agreements support this.”

The RIA said that smaller companies were more likely to fail to comply and may “actively seek to pervert classification decisions by harnessing violent extremist sympathisers and incite further attacks”.

“As large online content hosts are often reluctant to run the reputational risk of openly flouting the laws of the countries in which they operate, and the obligations on them are quite clear, there is little reason to assume that non-compliance will be their default position.”

Martin said the bill was part of a wider government programme to address violent extremism.

The bill, if it passes a first reading, will go through the scrutiny of a select committee process. If it eventually becomes law, it is expected to come into effect in the middle of next year.
