The gateway drug of misinfo & the Christchurch Call algorithmic work
How mis/disinformation is claimed to be a gateway drug to radicalisation, which leads to extremism.
This post is a tangent I wrestled with posting or not - clearly I decided to, but it's not strictly Covid history. Some of this has been covered by Substacker A Halfling's View in far greater depth. I've included a smidge more Covid-related OIAs, but I mainly wanted to talk about the algorithmic work of the Christchurch Call, which I'm not sure has been talked about much. I didn't know any of this before looking into Covid misinfo and reading fellow Substackers, so in that vein perhaps it helps someone else like me…
In May, three months after Jacinda Ardern resigned as Prime Minister with "no longer have enough in the tank" as her reason, her three (yes, three) upcoming fellowships at Harvard were announced.
Ardern is the first Knight Tech Governance Leadership Fellow at the Berkman Klein Center for Internet & Society. When it was announced, Ardern said the fellowship would help her "advance" the work of the Christchurch Call, of which she was made special envoy until the end of the year.
The Berkman Klein Center cropped up in Twitter Files reporting, which found a law instructor had reached out directly to Twitter to ask that accounts such as LibsofTikTok be banned.
How Covid amplified the belief that mis/disinformation is a gateway drug to extremism
Ardern publicly stated that the only reason anyone would not get fully vaccinated was misinformation - and that's why vaccination mandates were introduced.
The official New Zealand government Covid Facebook account banned terms from 'jabcinda' (a nickname for Ardern that grew over the pandemic) to 'vitamin d' to 'mandate' to 'human rights'.
This labelling of anything that questioned government policy - even obliquely, like the term 'human rights' - as mis/disinformation was justified by government on the grounds that, if they did not act, it would lead to the annihilation of society.
I’m only partly kidding.
An October 2020 briefing from the Department of the Prime Minister and Cabinet (DPMC) to Ardern responded to her query as to whether QAnon should be designated a terrorist entity (if I respond to that I'll never be able to finish this post, so I'll stoically move on). The briefing quickly associated QAnon beliefs with anti-mask and anti-lockdown narratives linked to the US Constitution.
DPMC told a November 2021 meeting of government Chief Executives involved in the Covid-19 response that mis/disinformation would, "…normalise and entrench far-right ideologies, including, but not limited to, ideas about gun control, anti-Māori sentiment, anti-LGBTQIA+, conservative ideals around family structure, misogyny, and anti-immigration."
An early 2022 briefing from DPMC to Ardern explored how they could further control Covid related mis/disinformation and claimed its presence could lead to “radicalisation of at-risk individuals” and “incitement of criminal or violent extremist activity.”
A 2022 community update from the Christchurch Call, which is administered by DPMC, has similar language, “There is also increasing recognition that terrorists and violent extremists use disinformation to extend their reach and build support for hateful ideologies, and that disinformation can also act as a gateway to these ideologies.”
Sad Sanjana of The Disinformation Project has also repeatedly said that questioning Covid policy results in growing radicalisation towards extremism.
All of which was echoed last week when second gentleman Douglas Emhoff met for a roundtable with Ardern on the Christchurch Call and said there was a "global epidemic of hate" and that, "It's accelerated by online radicalisation, misinformation and disinformation."
Lemme take a step back. Within New Zealand, NZSIS reports that are available appear to attribute extremist activity to lone actors, and the national threat level moving to low in November 2022 (it was raised to medium after the Christchurch attack) is surely indicative of that. Announcing the change, then Director-General of Security Rebecca Kitteridge took pains to add that “…the National Terrorism Threat Level does not reflect levels of hate speech or violent rhetoric.”
If I take all these claims and Sad Sanjana (if I keep saying it, it'll catch on) at face value, here's something that has been bugging me in the New Zealand context. The above statements hinge on mis/disinformation leading to other problematic rhetoric, on radicalisation growing, and on us all needing to be terrified because it results in extremism. So wouldn't that raise the threat level? Because the whole point of that argument is that it leads….to actions.
But the NZSIS had previously pointed out in a response to journalist David Fisher that words alone don’t mean someone is going to commit a violent act. It gets confusing.
On the fourth anniversary of the Call Ardern said, “I was determined from the earliest days following the attack to do what I could to prevent something like this happening again.”
"Something like this" are the key words, as they helped me understand it's not about eliminating extremism per se - before I read that, I was confused about how the Christchurch Call would have prevented a horrible extremist attack on the Ugandan border last month.
No, it's only about preventing a realistic possibility - the definition of the low threat level - that lone-actor extremist acts could occur, mostly in Western nations, on the assumption that they now happen because someone doubts mask efficacy online, which leads them to anti-immigration rhetoric online, which leads them to be radicalised towards acts of extremism.
I can’t help but feel….it’s more complicated than that?
But remember how Ardern got so carried away when she called mis/disinformation a weapon of war in her September 2022 speech to the United Nations General Assembly? She even ended the UN speech with, “Because for every new weapon we face, there is a new tool to overcome it.”
The tool she refers to is the work to make it easier to vanish troublesome content when it appears - a belief which seems to assume that not seeing something means it doesn't exist ('human rights' on Facebook pages, anyone?). This must be the work of strengthening legislation and regulation, which is part of the Christchurch Call.
Content that is considered potentially harmful but not objectionable, such as mis/disinformation, sits in empty space - the Chief Censor is powerless, and it can only be referred back to the platform, which may not agree it violates their policies.
In June Ardern asked Stanford University researchers what worries them most - one raised how Twitter had blown apart its content moderation policies, and another spoke of the balancing act that regulation must strike with free speech but concluded, "It needs to be done."
I can start to see how and why the Department of Internal Affairs launched a consultation on setting up a regulator of online content in New Zealand.
Another of the tools Ardern refers to could be the work on algorithms.
The Christchurch Call’s algorithmic work
One of Ardern's self-listed accomplishments in four years of the New Zealand government-funded Christchurch Call was making the Global Internet Forum to Counter Terrorism (GIFCT) an independent NGO. A bold claim for success when the GIFCT is solely governed by the biggest tech companies in the world: it was founded in 2017 by YouTube, Twitter, Meta and Microsoft, and the Chair rotates among them. Civil society representatives and governments, including a New Zealand government rep from DPMC, are relegated to the advisory board.
These players are key to the algorithmic work. This work is the Christchurch Call beyond the photo ops - trying to crack open social media platforms to see how they work to then understand how algorithms ‘funnel’ users to extremist content.
In April 2022 DPMC provided Ardern an update on the algorithmic work. The briefing noted the difficulties of trying to design algorithms across platforms - due to regulatory issues (privacy is a big deal, as to see how an algorithm works they will also need user data) and proprietary issues (AI and algorithms are commercially sensitive to each platform), as well as the inherent problems of trying to control for the ever-changing, self-learning nature that underpins existing algorithms.
Ironically, the briefing also drew out that the issue with extremist content is that it's not supposed to be there in the first place, so platform policies will generally remove it before algorithms that search for its reach and impact can be set upon it.
The algorithmic working group, co-led by the European Commission, Meta and the Institute for Strategic Dialogue with New Zealand, considered that they could instead follow an avenue they credited to Meta head Mark Zuckerberg - concentrating on content that is 'close to the policy line', which they believe drives people towards extremist content.
This is also called grey-zone content. It's neither illegal nor in violation of any platform policy - so maybe it can be whatever you want it to be - like mis/disinformation.
In September 2022, the Call announced a partnership with Microsoft at a cost of US$2 million, split between Microsoft, Twitter and the New Zealand and US governments. The New Zealand share of $917,000 (due to the exchange rate) was paid out of the Prime Minister's Emerging Priorities Fund - not the dedicated Christchurch Call funding that DPMC receives.
The partnership will produce a proof of concept of a tool by OpenMined involving one or multiple real social media data sets, so that governments and researchers will be able to "…remotely study data and algorithms distributed across multiple secure sites" and so overcome the legal, bureaucratic and cost barriers. Some questions:
What are the limits to this work - are there checks and balances in place that mean it cannot be expanded beyond what is deemed violent extremism?
How does this work clearly contribute to preventing violent extremism, and where (it seems uniquely Western, but I could be wrong)?
Most social media platforms - especially the ones you've heard of that extremism might lurk on - are not part of the Christchurch Call. So what is the scope of the algorithmic work across platforms?
And what about Telegram, which Sad Sanjana (that's my last attempt - the nickname lives or dies with you now) is obsessed with? Telegram doesn't suggest content to users through algorithms. You have to find and sign up to specific channels - you don't get scrolling screens of suggested content through an algorithm, like on your Instagram home screen.
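For what it's worth, here is a toy sketch of the remote-study idea behind the OpenMined proof of concept - entirely my own illustration, not the actual tool or its API. The premise: raw posts stay inside a "secure site", and a researcher only ever receives noisy aggregate answers, drawn against a finite privacy budget (a rough nod to differential privacy). All class and method names here are hypothetical.

```python
import random
from dataclasses import dataclass, field

@dataclass
class SecureSite:
    """Hypothetical 'secure site': raw posts never leave this object.
    Researchers may only submit aggregate queries, each of which spends
    some of a finite privacy budget."""
    _posts: list                      # private raw data, never returned
    budget: float = 3.0               # total privacy budget remaining
    rng: random.Random = field(default_factory=lambda: random.Random(0))

    def count_matching(self, keyword: str, epsilon: float = 1.0) -> float:
        """Return a noisy count of posts containing `keyword`."""
        if epsilon > self.budget:
            raise PermissionError("privacy budget exhausted")
        self.budget -= epsilon
        true_count = sum(keyword in p for p in self._posts)
        # Noise scaled by 1/epsilon (gaussian stand-in for Laplace noise)
        return true_count + self.rng.gauss(0, 1.0 / epsilon)

# A researcher queries several sites remotely; only noisy aggregates return.
sites = [
    SecureSite(["mask efficacy doubts", "cat pics"]),
    SecureSite(["mask efficacy doubts", "mask efficacy doubts again"]),
]
total = sum(site.count_matching("mask efficacy") for site in sites)
print(round(total))  # close to the true total of 3; the raw posts never leave
```

The design point this tries to capture is why such a tool could "overcome the legal, bureaucratic and cost barriers": the data never moves, only queries and noisy answers do - which is also exactly why the scope questions above matter, since nothing in the mechanism itself limits what counts as a legitimate query.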
Ardern soon starts her fellowships at Harvard. It'll be interesting to see what updates on this work are announced in the next few months - whether it's feasible, what it does, and whether the outcomes are clearly defined.
Interested in the topic? Read a full history of Covid mis/disinformation in NZ on this Stack.