Departing chief censor David Shanks has called for a rethink of the role and powers of his office, saying his ability to respond to a surging tide of online extremism has been crude and necessarily limited.
In the past few decades his Classification Office – formed from a 1994 merger of boards responsible for vetting indecent books, films and video cassettes – has been overtaken by both technological trends and extremist events, most notably the 2019 Christchurch mosque shootings.
“For the harmful content that we’re talking about, I’ve only got a hammer. And that’s the objectionable hammer, basically,” says Shanks. “And actually, that doesn’t work.
“Really what we need is a sophisticated regulator with a broader toolkit and community engagement,” he says of the need to tackle growing issues of disinformation, extremism and pornography.
He says that given New Zealand is a fraction of 1 per cent of global markets, we are unable to dictate terms to technology giants. However, we can support and piggy-back on reforms in the likes of the European Union.
Shanks, 55, who grew up in Waipukurau wanting to be a zoologist, took over as chief censor in 2017 after a series of senior legal and management roles in the public service with the likes of Inland Revenue and the State Services Commission.
During his tenure his office became less of a taste-tester for the motion picture industry, and instead found itself as a small but critical piece of national security infrastructure as – gradually with the conflict with Isis, then suddenly after the Christchurch shootings – its workload became increasingly dominated by assessing criminal and extremist content.
In 2016 his office was classifying 20 pieces of commercial media content for every referral from police or security services. By 2021, as the cinema industry retreated to streaming and police and security agencies focused on clamping down on extremist content, that ratio had shifted to 3:2, with the trend showing no signs of slowing.
He was forced to break the alarm glass and deploy his hammer in a hurry after March 15, 2019. In a matter of days he banned both a livestream and manifesto produced to propagandise the worst terrorist attack in New Zealand’s history.
Processes and policies designed for a deliberate pre-publication review – such as publishers or distributors seeking approval prior to a book or film launch – were plainly inadequate for material produced and published online that then propagated at an exponential rate on social media.
“A submission by a law enforcement agency would typically take weeks to classify, to work through our processes. But that [the terrorist classification] was very quick, probably the quickest classification until that time – we’ve been faster since then.”
The immediate aftermath of the terrorist attack was shocking, both for the violence on display – Shanks says “I watched [the livestream] quietly, but inside I was screaming” – and for the shockwave it sent through social media.
“The algorithms do what they’re trying to do, which is go ‘Oh, I’ve got high-engagement content here, I’m just going to promote and recommend and send this to as many people as possible’. The big takeaway of it all is that the system went out of control and people were harmed, and this material was spread all throughout the internet and remains on it, in various places and various forms, to this day.”
And he says he was surprised at how amenable the big tech companies – who were grappling with global-level economic and regulatory issues – were to his approaches from the far South Pacific.
“They appear to be more open than you might initially think or assume. What we found is that once there’s some clarity about what the expectations are, these global networks and providers will engage very practically and in a good faith way. And we’ve made quite a lot of progress quite quickly.”
Shanks says that pre-2019, social media was clearly an “unsafe system” open to exploitation, and while the big tech companies have taken steps to prevent a recurrence of the live-streamed massacre, he is worried that other weak points could give terrorist propaganda a mass audience again.
“We’re looking in the rear vision mirror, about a recurrence of a particular set of circumstances, whereas we don’t know what’s coming next in terms of an exploitation of weaknesses in the online architecture,” he says.
Shanks says he doesn’t have firm plans for his own future, despite being out of a job, but laughs at a suggestion that he might pull a Nick Clegg – the UK Liberal Democrats leader whose criticism of Facebook morphed into a job working for the social media giant once he left office.
“Well, never say never, I suppose. But that’s not my current plan.”
He is aware of the bubbling free-speech debate – a banner that draws both members of Parliament and anti-vaccination rioters – and is disappointed at the lack of nuance or recognition that online spaces largely remain a wild west.
He cites the wild – and false – rumours that spread rapidly across Facebook last October, claiming the Delta outbreak was caused by a 501 deportee sneaking his girlfriend into managed isolation.
“You had a racist, wrong, harmful rumour getting significant traction and virality in the New Zealand social media ecosystem. On the face of it that’s not unlawful content, but it presented a whole range of harms,” he says.
Present tools to mitigate such harms were, in this case, “mostly ineffective and impossible to access in any meaningful way”.
“Freedom of speech is a critically important right and a critically important freedom, and a core pillar of democratic systems. So any approach, anything that you’re trying to do in this space has got to have that as a foundation,” he says.
“But in a post-March 15 world, in an increasingly polarised world, I think where that gets you to is that you need to have a clear line about what is lawful and what is not.”