Christchurch shooting live stream didn’t have ‘enough gore’ for system to catch it

This Stuff story reveals that the NZ media understand two important things:
1. Blood splatter is an important element artificial intelligence uses to distinguish reality from fiction: the video didn’t have ‘enough gore’ for the Facebook system to censor it.
2. Philip Arps was jailed for sharing a video that didn’t have ‘enough gore’ and was not the live stream, but one of the many later edits.


A top Facebook director specialising in counter-terrorism told a US congressional committee that the company’s system failed to detect the livestream of the Christchurch shooting because it was not “gruesome” enough, The Daily Beast reports.

Brian Fishman told members of the US House Homeland Security Committee in a closed-door briefing on March 27 that there was “not enough gore” in the video for artificial intelligence to catch it, it reported.

The American website quoted an anonymous staffer who was in the room as saying there was significant backlash to Fishman’s claim.

One member of Congress reportedly said the video was so violent that it looked like a graphic video game.

Facebook’s powerful AI software can scrub images and videos that breach community guidelines. Earlier this year it announced new software that could automatically detect and remove non-consensual intimate images – often called revenge porn – before they are even reported.

In the past, Facebook had relied on users to report content that breached its guidelines, then used image matching software to prevent the same material from reappearing elsewhere on the site.
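
To make that pipeline concrete, here is a minimal sketch of report-then-block matching. It is an illustration only: the function names are invented, and real systems use perceptual hashes that survive re-encoding, whereas the exact byte-for-byte hash below is a deliberate simplification.

```python
# A toy report-then-block pipeline. Real matching systems use perceptual
# fingerprints that tolerate re-encoding; SHA-256 here is a stand-in.
import hashlib

blocked_hashes = set()  # fingerprints of content reported by users

def fingerprint(data: bytes) -> str:
    """Return a fingerprint of the uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def report_content(data: bytes) -> None:
    """A user report adds the file's fingerprint to the block list."""
    blocked_hashes.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Reject any upload whose fingerprint matches reported content."""
    return fingerprint(data) not in blocked_hashes

original = b"...video bytes..."
report_content(original)
assert not allow_upload(original)        # an exact copy is blocked
assert allow_upload(original + b"\x00")  # any edit defeats an exact hash
```

The final line also shows why the edited versions mentioned later in this story were so effective at defeating detection.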

GEORGE HEARD/STUFF
300,000 copies of the graphic video were shared publicly on Facebook in the first 24 hours after the Christchurch attack.

Facebook said that the same image matching software is used to take down terrorist videos, though in the case of the March Christchurch shooting, in which 50 people died, it ran into some trouble.

Chris Sonderby, Facebook VP and Deputy General Counsel, said in a blog post that “some variants such as screen recordings were more difficult to detect, so we expanded to additional detection systems including the use of audio technology”.
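
The mention of audio technology hints at why it helps: a screen recording alters the picture (scaling, cropping, re-compression) but largely preserves the soundtrack. The toy sketch below, with invented names and a deliberately crude loudness-envelope comparison, illustrates the idea of matching visually different files on audio; it is an assumption-laden illustration, not Facebook’s actual system.

```python
# Toy audio matching: compare coarse loudness envelopes of two tracks.
# Real systems use robust audio fingerprints; this is for illustration.
import numpy as np

def loudness_envelope(samples: np.ndarray, frame: int = 2048) -> np.ndarray:
    """Coarse per-frame energy profile of an audio track."""
    n = len(samples) // frame * frame
    frames = samples[:n].reshape(-1, frame)
    env = np.sqrt((frames ** 2).mean(axis=1))
    return env / env.max()

def audio_match(a: np.ndarray, b: np.ndarray, threshold: float = 0.9) -> bool:
    """Treat two tracks as the same content if their envelopes correlate."""
    ea, eb = loudness_envelope(a), loudness_envelope(b)
    m = min(len(ea), len(eb))
    return np.corrcoef(ea[:m], eb[:m])[0, 1] >= threshold

rng = np.random.default_rng(0)
track = rng.standard_normal(44100 * 5)  # 5 s of stand-in audio
variant = 0.8 * track + 0.05 * rng.standard_normal(len(track))  # "re-recorded"
print(audio_match(track, variant))  # True: the soundtrack still matches
```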

The Christchurch video was viewed under 200 times during the live broadcast and about 4000 times total before it was taken down. The first user report didn’t come until 12 minutes after the livestream ended.

Facebook said that reports it received for live videos were “prioritised for accelerated review.”

TVNZ
Facebook said the original live video was viewed less than 200 times.

Guy Rosen, Facebook VP of Product Management, said “we saw a core community of bad actors working together to continually re-upload edited versions of this video in ways designed to defeat our detection.”

In total, Facebook said it removed about 1.5 million videos of the attack in the first 24 hours: 1.2 million uploads were blocked immediately, while the other 300,000 made it through before being taken down.

Source: Stuff