Key Reflections

* The Christchurch Call was established in the aftermath of the 2019 Christchurch terrorist attack in New Zealand. It is a collaboration between governments, tech companies, and civil society to ensure that terrorists and violent extremists cannot use social media to amplify their attacks. 

* The social media amplification of the Christchurch shooting was unprecedented, going from 200 viewers of the livestream to millions being exposed to reuploads of the video after the event. 

* The Christchurch Call demonstrates the importance of a whole-of-society approach to countering terrorism and violent extremism online. Neither governments, tech companies, nor civil society would be able to solve this issue by themselves.

* NATO members and partner nations were pivotal in the establishment of the Christchurch Call, with the initiative launched at the Tech for Good Summit hosted in France eight weeks after the attack.

* Leaders of the Christchurch Call community do not want to be static and focus solely on past events; instead, they are looking at ways to innovate and respond to the ever-changing nature of the online environment.

* Whilst the Christchurch Call focuses primarily on countering terrorism and violent extremism online, there is a connective tissue between this issue and other adjacent ones, all of which involve the exploitation of online platforms. 

Transcript:

SG: Dr. Sajjan Gohel

PA: Paul Ash

SG: Welcome to the NATO DEEP Dive podcast, I’m your host Dr. Sajjan Gohel and in this episode I speak with Paul Ash, the New Zealand Special Representative on Cyber and Digital as well as the Christchurch Call and Cyber Coordinator in the Department of the Prime Minister and Cabinet. In our discussion we talk about the Christchurch Call which was established in the aftermath of the 2019 Christchurch terrorist attack in New Zealand. It represented a pioneering collaborative effort between governments, tech companies, and civil society to try and prevent terrorists from using social media to amplify their attacks.

Paul Ash, a warm welcome to NATO DEEP Dive.

PA: Thanks, Sajjan, great to be having the opportunity to talk with you and to have a conversation about the Christchurch Call on this day.

SG: We’re very much looking forward to having that conversation. I think perhaps an interesting way to start this podcast, in many ways demonstrating the interoperability of cultural ties, is with when we actually first met, which was in Australia in November 2022, during U.S. Thanksgiving. I think it’s fair to say that a mutual appreciation society was formed when we heard each other give presentations at the Five Eyes conference. Your presentation really stood out for me because you spoke not just about your role, but also, with a huge amount of passion, about helping to formulate what became known as the Christchurch Call. What I found staggering is how few people actually know what it’s about, despite the fact that it’s had global ramifications and actually does play a role throughout the world. I guess the starting point is, for our listeners: what is the Christchurch Call, and how did you come to be connected with it?

PA: Thanks, Sajjan. The Christchurch Call had its genesis in the terrorist attack in Christchurch on 15 March 2019. On that day, a Friday, a terrorist walked into two mosques at lunchtime, as the congregants were at prayer, murdered 51 people in those two mosques, severely injured another 48 people, and livestreamed the whole thing on Facebook, an attack that went on for 17 minutes. And as soon as it was livestreamed, it was copied, pushed out to some of the recesses of the mainstream internet, and then pushed back onto platforms quite relentlessly over the next 24 to 48 hours, as major social media platforms grappled with identifying the content and taking it down. That continued thereafter, for some time, as platforms and governments started to think very, very carefully about the problem that had been created and really struggled with the technology and linkages and networks to deal with it.

And the harm that caused: the immediate proximate event was obviously a repugnant terrorist act, but the amplification of that online went global, and it did so in a way that has not been seen on the internet before or since. The number of copies that had to be taken down was staggering: Facebook took down 1.5 million copies of the video in that first 24 hours, and over that first 48 hours, the weekend after the attack, YouTube had someone trying to upload the video every second. The scale of that was not something we’d seen before. In a sense, it took an event that was tragic in Christchurch and amplified it globally, and we saw the harm caused to people who had it come through their social media feeds and inadvertently found themselves watching it. Really, it was a turning point for governments and the tech sector and for anyone in civil society in trying to think about how to deal with it. I’m assuming, Sajjan, you may well have seen it come through your feed over that time. I’ve regularly bumped into people who’ve been subjected to what is a really traumatic event.

Off the back of that, we had to think pretty carefully about how we responded. I was working at the time in the Department of the Prime Minister and Cabinet as the director of national security policy, and I have a background in cyber policy and online policy. And we worked with our Prime Minister on what became the Christchurch Call, which was an effort to work collaboratively with the tech sector, with other governments, and with civil society on finding solutions to the problem: first the presenting problem of livestreaming and the technical means to deal with it, and then continuing to go a bit deeper into some of the underlying causes of the kinds of actions we saw in Christchurch, and actually saw subsequently in a number of copycat attempts over the next year or two. So, that’s the genesis: it’s driven out of an event that really was unprecedented in New Zealand, a turning point for us, and something that was unprecedented globally in terms of its impact online.

SG: I’m just reflecting on some of the numbers, the statistics, that you identified about the awful, horrific video that was being livestreamed during the assault that was carried out on the mosques in Christchurch. My understanding is that when it was actually being streamed live, there were fewer than 200 people watching the carnage as it was unfolding, and the video was viewed around 4,000 times before it was actually removed. The challenge, though, was the fact that maybe a limited number of people saw it live, but afterwards effectively billions had access to it, because that’s how quickly things spiral. Talk to me, Paul, about the challenges and the obstacles that you first faced when it came to formulating what would become the Christchurch Call, and maybe the challenges of getting buy-in from social media companies.

PA: The numbers you’ve described are actually quite confronting when you think about how quickly it went from 200 viewers, to 4,000, to millions and millions. What we saw was some fairly careful planning by the terrorist himself, in terms of setting up a network and a grouping of people who could be expected to take that content, pull it off the platforms it was on, and then start to push it back onto mainstream social media platforms. And that MO was, at the time, very, very successful. It took the major companies by surprise; it was well enough organised that there were probably 200 or 300 people actively doing that. And they had, I think, developed a reasonably good understanding of the way the algorithmic processes in the companies worked, and how to actually get around those and ensure that the material could be promulgated widely and go viral very quickly.

That required, I think, from our side that we think in new ways about how to deal with this. And to your question about the challenges we faced, the first one was: what do the appropriate policy responses to something like this look like? They range from regulatory responses, some of which were put in place by New Zealand and other countries after the attacks (we did that more slowly than some others, but that’s a function of government, and it’s quite important), right the way through to voluntary measures that we could perhaps implement more quickly with the tech firms, drawing on their technical knowledge of the issues involved.

And the first thing we ended up in was something of a policy debate, here in New Zealand, about how to respond to this issue. We engaged with a number of the tech firms, who came reasonably quickly wanting to have a conversation, and to their credit, a number of them, once they had worked out what was happening, also worked operationally to ensure that there was good collaboration across our government and a number of others in dealing with some of the immediate impacts and the law-enforcement-related parts of the content being distributed.

Sitting underneath that policy conundrum was the fact that we had a range of tools we could use, but none of them was perfectly suited to what we were grappling with. We have an Office of Film and Literature Classification here in New Zealand, known as the Classification Office, and they were able to very quickly classify the material as objectionable under New Zealand legislation and ensure that it was prohibited to distribute, possess, or copy it. That was a very effective first line of defence, but it wasn’t a long-term solution; it’s actually dealing with the problem after it’s been created. So, as we worked that one through, we talked at length with some of the firms involved, we talked with civil society groups, and we talked with a number of other governments. About two weeks after the event, I engaged with a number of other states, particularly France and Germany, but all of our closest partners as well, almost all of them either NATO members or allies and partners, and worked through how we might pull together a set of commitments that we could work on with the social media firms to try and find ways to solve the problem. The Christchurch Call was the result of that.

Our first conversations with the tech firms, I think, were probably more awkward than any of us would have liked. We weren’t used to working with them in this way, and they weren’t used to working with us. And similarly with civil society groups, who ranged from survivor and victims’ rights groups right the way through to advocates for civil liberties and freedom of expression. And we’ve been very careful to continue to work with that spectrum. I guess the key thing we have discovered over that time is the importance of being even-handed and clear in what we’re trying to achieve, without necessarily being prescriptive about how we’re going to get there, and in that way building trust across the different participants and drawing on the strengths that governments, industry, and civil society can each bring to a conversation about how to address some of these issues.

So, one of the key things you’ve identified, I think, is that question of building trust, and it was something we worked very hard on. We were pleased to see folk from the firms coming in the other direction, trying to do the same thing, and folk from civil society. And sitting at the core of that, I guess, was the sense that nobody wanted to see this sort of content on anybody’s platforms. There are some exceptions to that, some small platforms that I don’t think we will ever be able to engage in a really constructive dialogue with, because they prefer not to. But there was a sense of common cause there and a need to try and find shared solutions. It took a while to get there, but I think one of the great strengths of the Christchurch Call was a real commitment across the Call community to keep working in that way.

SG: So, very much it’s a whole-of-society approach that involves government, the tech industry, and civil society groups as well. You mentioned various countries that helped and collaborated, and one country I was curious about was France, which seemed to be very important in this. Is there a reason why France became so engaged in helping to work with New Zealand on the Christchurch Call?

PA: France has been a really strong and steadfast partner from very early on in this process. Our Prime Minister at the time, Jacinda Ardern, was obviously in close contact with a number of her colleagues and peers in France and Germany, Canada, Australia, the U.K., a whole range of different places. The French government had reached out and said, as many others did, what can we do to help? And when we sat down with the team in Paris, as we did with a number of other teams, it became apparent they were hosting the Tech for Good Summit eight weeks after the attacks in Christchurch, and we looked at that timing and worked with France, on the basis of their invitation, to tee up a meeting to launch the Christchurch Call at that time. That gave us a very, very short lead time to develop the 25 commitments of the Christchurch Call. Our French colleagues worked directly with us, as did colleagues from a number of other countries and from a number of tech firms, and at first we built in a really solid placeholder for civil society.

But we worked very closely with the team in Paris on developing the text; we stationed someone there in the lead-up to the launch on 15 May 2019. And we ran a 24/7 operation between Wellington and Paris, and between the various places that those of us negotiating the text were travelling to at the time. I made my way up to the West Coast a couple of times during that period, working with tech firms, and we worked very closely, virtually, right the way across those different time zones to get the text done and ready by 15 May, ready for the launch.

And again, that was quite a process of building trust between the participants and developing the 25 commitments in the Call over that period of time. France was integral to that process; we worked very, very closely with them, and we’re very appreciative of the role they’ve played. It’s probably the closest working relationship we’ve had with our French colleagues for quite a long time. And it’s been a really extraordinary experience, looking at the different capabilities and the different ways we think about these things, and putting them together, along with those other partners across a whole range of countries and tech firms and civil society groups, to get the best outcome we can for the objectives of the Call around eliminating terrorist and violent extremist content online.

SG: It really does put into perspective just how large and mammoth-sized the challenge was, and the logistical dimensions that perhaps are not always fully appreciated. Where did you first find that, ‘okay, this is working,’ that the Christchurch Call was actually achieving its objectives? Was there a specific moment where you thought, ‘okay, all this hard work is actually paying off, we’re going in the direction that one had envisaged, and now it’s actually happening in practice?’

PA: I think the first step was getting to the launch, and that eight weeks was a reasonably frenetic period of time. But once we settled on the text of the Call, which was completed just a few hours before the launch at the summit, we were able to get it launched and focus in on some key work streams. The first of those where I think we really saw ourselves getting traction quickly was around crisis response: ensuring that, between the companies and the countries involved, we actually had new crisis response protocols that we were working on together, developing and deploying both new technologies and new means of communicating. And over the course of the year, we saw several copycat attack attempts, and we saw those new protocols in place and working.

And perhaps, to fast-forward to last year and the really tragic attack in Buffalo, New York, which was livestreamed: we saw the ability of the tech sector to identify and bring that content down very, very quickly, in a way that hadn’t been possible three years earlier, when the Christchurch attacks happened. The crisis response parts of the Call are, I think, the place where we’ve seen measurable progress that we can evaluate and quantify.

More broadly, we’ve seen real progress in regearing and relaunching an entity called the Global Internet Forum to Counter Terrorism (GIFCT). When Christchurch happened, it already existed, but it was very much a virtual organisation: it didn’t have staff, it didn’t have its own identity. And so, we worked through a process between May 2019 and September of that year, when the Call community met again, to work out how it could be reconstructed, how it could be resourced as an independent, non-governmental, not-for-profit organisation, and how it could expand its work in the areas of research on violent extremism and terrorism online, its crisis response capabilities, and its support to smaller platforms. That relaunch was announced in September 2019, and reasonably shortly thereafter, through the pandemic period, the chairs of the GIFCT (first Facebook, as it was then called, then Microsoft, then Twitter, and subsequently Google) have stewarded the development of the organisation and the growth of its capabilities, which has been a really important step forward.

I guess the last thing we would say, in terms of a sense of measurable progress, was actually that meeting in September 2019. King Abdullah of Jordan was kind enough to allow New Zealand and France to co-chair one of his Aqaba Process meetings, which became a Christchurch Call meeting at the UN. We had 31 more countries join the original 17 at that meeting, we had new tech firms, and we had the announcement of the new protocols and the launch of the restructured GIFCT. And we really had, I think, momentum and lift-off at that point. We had some very frank conversations with civil society groups then as well, where they asked for a greater role in the work of the Call, and I think they were pleasantly surprised when we reciprocated and said, ‘that makes good sense to us as well.’ And we began building a core community at that time. To me, that’s one of the key achievements we still have now: that sense of community, a really coherent, engaged group across those three sectors that continues to work together on this problem.

SG: Coherent, engaged group, indeed. And I’m very glad that you also mentioned Jordan, because it’s a country we’ve worked very closely with—the Jordanian Armed Forces—when it comes to developing CVE strategies as well. So, they’ve been a very important ally on this issue. Could you talk to me more, Paul, about the role of algorithms, in particular when it comes to questions of regulation and oversight in the tech industry? What is the best strategy for combating disinformation and radical content on social media, when the algorithms by design suggest related material to users that could actually lead to them getting radicalised? In many ways, it’s almost a paradoxical challenge.

PA: It really is a paradoxical challenge. The first thing to acknowledge is that when we stood up the Christchurch Call, we were first looking at crisis response as a way of limiting the impact of the sorts of issues we saw at Christchurch. We were then—and this is embedded deeply in the commitments in the Call—focused on looking at some of the contributing causes, the things that led to that kind of activity, and ways that we could grapple with those that were consistent with international human rights law, and a free, open, and secure internet. Algorithmic amplification is one of those signature issues that we’re having to really get our heads around and work on finding solutions to. 

And the third thing we had to do was work out exactly what types of algorithms we’re talking about. There are three broad groupings here. The first is algorithmic processes that identify harmful content, where one of the issues is false positives and false negatives, and there’s work underway on that within the broader Christchurch Call community and with those who work in this area. The other two areas go more to the point you’re describing, which is radicalisation. The first of these is search: the algorithmic processes that identify particular types of content for a user and curate the priority given to particular types of content. We saw some particular challenges with that after some terrorist and violent extremist events, in particular the murder of schoolteacher Samuel Paty in Conflans-Sainte-Honorine, where one of the real challenges was that content being surfaced both by search engines and by search functions that sit inside social media platforms, which recommended the attacker as someone to follow for a number of users, until firms got that under management and control.
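
[Editor’s note: the false positive/false negative trade-off in that first grouping can be made concrete with a small, purely illustrative sketch. The scores, labels, and thresholds below are invented; real content classifiers are far more complex, but the same trade-off applies: lowering the removal threshold takes down more benign material, raising it misses more harmful material.]

```python
# Illustrative only: how one removal threshold trades false positives
# (benign posts removed) against false negatives (harmful posts missed).

def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a removal threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    return fp, fn

# Toy data: model confidence that each item is harmful, plus ground truth.
scores = [0.10, 0.35, 0.55, 0.70, 0.40, 0.60, 0.80, 0.90]
labels = [False, False, False, False, True, True, True, True]

for threshold in (0.3, 0.5, 0.7):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold:.1f}  false positives={fp}  false negatives={fn}")
```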

The other is recommender algorithms, which basically say: if you’re watching the NATO DEEP Dive podcast, or listening to it, why don’t you listen to this next, and we’ll then move you on to something else. And in some instances, there’s evidence suggesting that that does lead to amplification and increasingly harmful content. So, you might go from something that is controversial but benign, and normally within the bounds of freedom of expression, and find that certain users—because it’s as much about the user as it is the algorithm—end up down a rabbit hole, as it’s sometimes described. And there’s a couple of questions there: first, an empirical understanding of what’s actually happening in those situations; and second, thinking very carefully about the sorts of interventions that might positively ensure that users’ journeys don’t take them into that radicalisation rabbit hole. Both of those need quite a bit of research. So, one of our four work streams is exactly in this area: algorithmic outcomes, understanding what the algorithms lead to, and positive interventions. That’s been a very challenging area for governments and tech firms alike to try and understand. And it’s one where civil society’s input is really important, because they can provide insights, and in a sense a multi-stakeholder approach is potentially more stable and robust than any one of those three groups trying to do this on their own.
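
[Editor’s note: the “rabbit hole” dynamic described here can be sketched in a few lines. Everything below (the catalogue, the extremity scores, the user model) is a toy assumption for illustration, not how any real platform’s recommender works; the point is simply that greedy engagement optimisation plus a susceptible user can produce a steady drift toward more extreme content.]

```python
# Toy model: a recommender that always serves the item a simulated user
# is most likely to click, where the user is drawn to content slightly
# more extreme than what they already consume.

CATALOGUE = [i / 19 for i in range(20)]  # extremity scores from 0.0 to 1.0

def click_probability(user_pref, item):
    # Assumed user model: peak attraction sits just beyond current taste.
    return max(0.0, 1.0 - abs(user_pref + 0.1 - item))

def recommend(user_pref):
    # Greedy engagement optimisation: serve the most clickable item.
    return max(CATALOGUE, key=lambda item: click_probability(user_pref, item))

user_pref = 0.10  # starts on controversial-but-benign content
for step in range(10):
    item = recommend(user_pref)
    user_pref = 0.5 * user_pref + 0.5 * item  # consuming content shifts taste
    print(f"step {step}: served extremity {item:.2f}, user drifts to {user_pref:.2f}")
```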

Last year, in September, Prime Minister Ardern announced the launch of the Christchurch Initiative on Algorithmic Outcomes. This was a ground-breaking piece of work between Microsoft and the United States and New Zealand governments, working with a not-for-profit called OpenMined, testing a proof of concept for privacy-enhancing technologies that would enable researchers to work with algorithmic processes, and with user data, remotely in a trusted environment and draw some conclusions about what is happening. We’re partway through that process at the moment; we’re in the first of two phases, where it’s tested on individual platforms, before it then looks at the way a user’s interaction on one platform might lead them to another and how algorithmic processes work in that environment. And to date, the first phase seems to be going well. We’re not too far off being able to transition from phase one to phase two, and we’re looking to build some further platforms into that work and grow the initiative over time.
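
[Editor’s note: one common privacy-enhancing technique in this family is differential privacy, where a platform answers only aggregate queries with calibrated noise added, so researchers can study outcomes remotely without ever seeing raw user data. The sketch below is a generic illustration of that idea under invented data, not the actual tooling used by the initiative or by OpenMined.]

```python
import math
import random

random.seed(42)

# Platform-side data the researcher never sees directly: synthetic
# per-user counts of recommendations that led to flagged content.
private_counts = [random.randint(0, 5) for _ in range(1000)]

def laplace_noise(scale):
    # Inverse-transform sampling of Laplace(0, scale) noise.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_total(counts, epsilon, sensitivity=5):
    # Laplace mechanism: noise scaled to sensitivity/epsilon hides any
    # single user's contribution while keeping the aggregate usable.
    return sum(counts) + laplace_noise(sensitivity / epsilon)

true_total = sum(private_counts)
for epsilon in (0.1, 1.0, 10.0):
    released = noisy_total(private_counts, epsilon)
    print(f"epsilon={epsilon:<4}  released={released:8.1f}  (true {true_total})")
```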

It’s one of a number of similar initiatives, and I would be remiss if I didn’t note that a number of the firms in the Christchurch Call have also established research access programmes in recent times and are looking for ways to increase transparency around the way algorithmic processes work. We see that as a positive outcome. It’s one that, coming back to my earlier point, does require a lot of trust to be built amongst the various partners working on it. It’s also one where there will probably need to be regulatory initiatives, and indeed, in things like the Digital Services Act in the EU, there are now regulatory frameworks for the assessment and audit of risk in algorithmic processes, alongside the kinds of practical tools that have been built in the likes of the Initiative on Algorithmic Outcomes. And, of course, we’re seeing that some key universities can contribute to enabling those assessments to be carried out in a robust and safe way. But it is one of the hardest pieces of the puzzle, both because it needs to be done in a way that doesn’t get to the core of proprietary technologies in a way that would damage that particular interest, while also making sure that they’re deployed safely.

SG: These are all really, really important initiatives. And I think it illustrates, and is a very important reminder of, how technology evolves, that it doesn’t stand still; it’s sophisticated, and its utility is constantly developing. One dynamic that existed even at the time of the Christchurch attack, but seems to be developing as we speak, is video gaming chat groups, where people are communicating, disseminating information, even recruiting and plotting. These are places that, prior to the pandemic, were perhaps not looked at in the same way as social media entities, your Facebooks, YouTubes, Twitters, etcetera. How hard is it, Paul, to keep getting buy-in from different companies that perhaps were not involved at the beginning of the Christchurch Call, but are now actually relevant because of the technology that’s involved? How does one keep adjusting those commitments, so that what was created continues to keep people safe?

PA: That’s a great question, Sajjan. The leaders of the Christchurch Call community do not want the Call to be a static construct that looks at yesterday’s problems and isn’t looking ahead. Indeed, when leaders met last year, they made the point that as we continue to innovate as societies, and as the functionality of those online environments changes, Call leaders really want today’s young people to enjoy the benefits of a global internet without having to deal with, or be confronted by, violent extremist content or threats. And that’s very much about building a positive future online and ensuring that our thriving community contributes to that. Call leaders, at their last summit, agreed to launch a new stream of work on how we can anticipate the adoption of new technologies, understand the challenges they might pose, develop new strategies to address them, and prepare the members of the Call community for managing the exploitation of those new systems by terrorist and violent extremist groups.

There’s a couple of elements in that. One of them is working directly with young people and children to try and understand their experiences, how we can help support them, and the issues they might confront. The other is working with either new companies or new parts of the tech sector, gaming in particular, to think about how we can support them as they build safety into their systems. A good example of this would be the likes of Roblox, which joined the Call last year: very much targeted at gaming, but working with a very young demographic, and they had already seen, for instance, game creators recreating the Christchurch attack online and putting it on their platform in the hope that people would play it. They’ve worked with the Call community on safety issues since joining, and we’ve found them very, very responsive to that conversation.

The same thing has happened with Microsoft, one of the earliest supporters of the Call, along with Google, Twitter, Amazon, and Facebook. The Minecraft product was something that was exploited in this way, again by some users, and they’ve had to do pretty much the same things. So, the gaming sector was one we really focused on as part of a new-technology work stream, because it really is a gateway for many young people into those new immersive environments. The gaming sector is where a bunch of the extended reality or augmented reality environments will first be driven out of, and it’s one where it has been harder to build in safety tools to date. But we haven’t seen a shortage of interest from the companies; they get the problem. And again, a starting point for the Call is that nobody really wants this content on their systems.

SG: Absolutely. You mentioned one of the key words in a lot of the discussion we’re having, which is safety, and in connection to that, child safety online has become a real hot-button issue in Europe. Recently, French President Emmanuel Macron helped create, or launch, the Children’s Online Protection Laboratory to improve safety for minors across the world, and I gather that you were actually pivotal, playing a very important role in helping to establish it. Could you talk more about that, and how it ties in as a partner concept to the Christchurch Call?

PA: Thanks, Sajjan. I think the Call has developed a unique model for coordinating action and bringing together affected communities, civil society, and tech experts, alongside governments, on the key issues of online safety, and one of the key things there is that by harnessing the distinct capabilities of each of those sectors and building that community with a shared ambition, we’ve started to see results. And the success of the Call is, I think, reasonably well recognised, particularly amongst those participating in the work, such that last year a number of members of the community expressed real interest in understanding how the Call model might work on some related issues. That created a little bit of tension for the Call community, because one of the things that helped us make progress in the Call was keeping its scope very, very carefully focused on terrorist and violent extremist content online. So, when France stood up the Children’s Online Protection Laboratory, they used a very similar model to the Call, and we were really supportive of them doing so, through to the launch of the Laboratory at the Paris Peace Forum in November last year. Again, like the Call, it brought together industry, civil society, and governments.

For the launch, I was very fortunate to be able to represent the New Zealand Prime Minister, alongside President Macron, senior ministers, the presidents of Estonia and Argentina, and many of the key industry players also involved in the Christchurch Call. And really, the Online Protection Laboratory is an effort to do something quite similar to the Call’s work, for keeping young people safe online, particularly on issues of cyberbullying and harassment. It’s a useful case study in [how] our multi-stakeholder approach can build effective coalitions to deal with a range of issues. And one of the things leaders looked at, at the last Call summit, was the number of other issues present at the moment, ranging from harassment, abuse, and hatred online, particularly issues affecting youth and gender-based harms online, to toxic issues around disinformation.

And the Call itself will probably stay, I think, reasonably tightly focused on terrorist and violent extremist content online, but it does end up grappling with some issues, or some technological and collaboration challenges, that are common to all of those presenting problems, particularly around data ethics, artificial intelligence, and algorithmic use. And so, there’s a work stream in the Call to look at what we call the adjacent issues, and how the models we’ve built in the Call might best be used to support work in those areas. The key there is not to duplicate where work is already underway and settling well. If I think about an area like child sexual exploitation online, there are already very, very strong collaborative mechanisms—the WeProtect Global Alliance and the Technology Coalition would be two real standouts—already working in that area, and we would not want to stand up things that either duplicate or compete with those; we’d want to make sure they were supported. But there are some other new, emergent areas where I think the Call perhaps points towards a useful model.

SG: That’s, again, something that is going to be so important in the months and years ahead. A final question, Paul: much of what we’ve discussed has been about terrorist groups, about entities that, in many ways, operate in the shadows. When it comes to hostile state actors seeking to misuse social media and spread disinformation, is that a different challenge, a different conversation, when it comes to the role of social media companies, governments, and civil society? Or are there transferable dynamics in relation to this?

PA: It’s a great question. When we’ve looked at the Christchurch Call, we’ve very much focused in on terrorist and violent extremist groups. There is a connective tissue between that issue and the exploitation of online platforms, or online service providers, by state-sponsored actors. Traditionally, perhaps, their activity has been in the more classical areas of cybersecurity, where we’ve seen cyber campaigns, malware, etcetera, distributed. My sense is that, increasingly, we are starting to see some states using information in much the same way. So, moving beyond malware that would affect code or hardware, we’re seeing information that is designed to affect people and communities: the way they behave and the way they think about the institutions or constructs their societies are built upon. That is a very, very challenging presenting problem. And if I think about the Christchurch Call, one of the key thresholds for participation by states is that they are committed to a free, open, and secure internet, and that all of their actions in the work of the Call are consistent with international human rights law. That, I think, helps us differentiate between the states that are able to do that—and all governments are struggling, to a greater or lesser degree, to respond to and think about how they work with a changing online environment—but it helps us work with the states that are constructively engaged in that process.

If I think across to the idea of state-sponsored campaigns, I would probably put those in the category of adjacent issues. These are often campaigns that are well resourced and based on a good understanding of the way online platforms work and ways to exploit them. In that sense, there are some similarities with the way some of the more advanced terrorist or violent extremist groups have exploited the internet. Where the connective tissue kicks in, I think, is probably in some aspects of algorithmic amplification: if you are using disinformation to affect societies, it’s conceivable that that could spill over into violent extremist activity or inspire some users to violent extremism. But my sense is that it probably needs to sit alongside the other presenting issues (terrorist and violent extremist content, child sexual exploitation and abuse, protection of youth online, and a range of others) as a distinct stack of its own, with the acknowledgment that both the community model we’ve built in the Christchurch Call and some of our understanding of the way algorithmic processes work, the way data ethics contributes, and the way artificial intelligence processes work can enable us to transfer some of those lessons, and the things we learn as we go through the work, into that stack on disinformation.

But it’s a very difficult issue to crack. If you think about this as a content challenge, you have a spectrum. At one end is child sexual exploitation material: taxonomically, it’s very easy to identify what that is, with only a small amount of material that is perhaps a little debatable. When you’re in the terrorist and violent extremist content area, you have a much greater grey area, but you still have content that is very clearly terrorist and violent extremist material, and you have a designations process that covers much of it. If you’re in disinformation, you’re in an area where almost all of the material is grey, and it is very, very difficult taxonomically to deal with it in the same way you would those other two categories. I think that makes this presenting issue, the use of the online environment by state actors, a really difficult one to grapple with, particularly for liberal democracies that are committed to the rule of law and international human rights law and try to maintain a free, open, and secure internet. But I think the Call gives us some models that might be useful for aspects of that.

SG: Absolutely, I think the Call gives us many models that can be utilised, and I think you very amply demonstrated the difference between democracies that are based on the rule of law, accountability, transparency, and those that operate in a different way, and present different challenges. Well, Paul, let me thank you, again, so much for spending time on this podcast to talk about the Christchurch Call, and so many of the different dynamics that are connected to the challenges of social media, the exploitation, the disinformation, the connections that terrorist groups wish to exploit and take advantage of. You’ve helped demystify a lot of this for our listeners, and I’m very grateful to you.

PA: Well, thanks so much for your time, Sajjan, it’s been a pleasure. If you have any questions about the Call, just hop on the website and connect with us there: christchurchcall.com

SG: Okay, well, we’ll embed that link into the transcript that we’re doing. So, thank you again, Paul Ash. 

PA: Thanks so much, Sajjan.

Disclaimer: Please note that the views, information, or opinions expressed in the NATO DEEP Dive series are solely those of the individuals involved and do not necessarily represent those of NATO or DEEP.

This transcript has been edited for clarity.