Key Reflections

* Tech Against Terrorism is a public-private partnership that counters terrorist use of the internet in collaboration with the tech sector, international governments, civil society, and academia.

* After the 2019 Christchurch attack in New Zealand, tech companies invested heavily in combating online terrorist activity. The threat also diversified, with terrorists turning to smaller tech apps and tools.

* Terrorist-operated websites pose a threat to society. They bridge the operational and propaganda uses of the internet for terrorists and act as archives of terrorist content.

* Online misogyny has evolved. Individuals who do not fit into conventional misogynistic groups espouse equally damaging and offensive content, and algorithms can be manipulated to promote such material.

* Violent extremists and terrorists use memes, and are experimenting with generative AI tools such as deepfakes, to communicate with each other, building a sense of community and concealing offensive meaning from unknowing outsiders.

* The hybridisation of online threats has meant that different communities have been brought together. This poses a risk as people who may believe in more innocent conspiracy theories are sharing forums and online spaces with terrorists and violent extremists.

Transcript:

SG: Dr. Sajjan Gohel

AC: Anne Craanen

SG: Welcome to the NATO DEEP Dive podcast, I’m your host Dr. Sajjan Gohel and in this episode I speak with Anne Craanen, Research Manager at Tech Against Terrorism and host of its innovative podcast.

I discuss with Anne the work that Tech Against Terrorism does, as well as the evolving nature of how terrorists and non-state actors use the internet and online technology to further their agendas.

Anne Craanen, warm welcome to NATO DEEP Dive.

AC: Thank you very much, I am very happy to be here. 

SG: Let’s start by talking about Tech Against Terrorism. How did it come about? And what does its work entail on a daily basis? 

AC: Yes, it’s a good question. So, Tech Against Terrorism, we’re a public-private partnership, and we counter terrorist exploitation of the internet whilst respecting human rights. We were set up in 2017, partly by UN CTED, and our work is actually reflected in a couple of Security Council resolutions that acknowledge the work of public-private partnerships and how important it is for the private and public sectors to work together. And that’s really what Tech Against Terrorism does. So, we kind of work with everyone, I would say, that wants to counter terrorist exploitation and will do so democratically. So, we work with industry-led initiatives such as the GIFCT, the Global Internet Forum to Counter Terrorism.

But what we really do is—and what we’ve seen, and I’m sure we’re going to talk about it later—is that bigger tech companies took action and moderated their platforms, and because of that we’ve seen a migration towards smaller tech companies being used. And so this is really where we focus a lot of our efforts: supporting the smaller companies through mentorship, so basically practical advice on how to define their terms of service, how to write a transparency report, and any other practical things we can do to support them.

Beyond the tech sector, we also work with governments, so governments fund us. So, for example, the Canadian government funds a project of ours, we’ve previously been funded by Spain, South Korea, the UK Home Office, basically any government that, again, aims to disrupt terrorist use of the internet, but importantly, whilst respecting human rights. And then finally we also work with civil society and academia because obviously when you talk about content moderation, there’s a risk to freedom of expression and other human and digital rights. And by making sure that we work with civil society, we try to mitigate that as much as possible. 

So, that’s who we work with. And I would say our three pillars of the organisation are first, we have an open-source intelligence team. So, they actually monitor the internet and monitor where violent extremists and terrorists are exploiting the internet, which platforms are being used, which platforms are being used by which actors, because sometimes we see that social media companies or websites or whatever it is, can be exploited by, for example, far right terrorists, but not by Islamist terrorist actors. And so, it’s important to understand who’s exploiting which platforms, and that’s really what our open-source intelligence team does. And I would say it really grounds everything else we do at Tech Against Terrorism, because [when] responding to the threat, obviously you need to understand what that threat is first, and that’s what the open-source intelligence team does.

Then we have our more capacity-building workstreams. So, this is where we have that mentorship process with tech companies. So far, we’ve worked with about 50 tech companies through our mentorship process. We also have something called the Knowledge Sharing Platform, which is basically a website we’ve built that has kind of everything you need to know about moderating terrorist content online, for tech companies, governments, and other industry partners.

And then the third thing which is interesting about Tech Against Terrorism is what I do, so maybe I should have opened with that. So, I’m the research manager at Tech Against Terrorism, but I also lead on the Terrorist Content Analytics Platform, the TCAP. And this is a tool that we’ve built ourselves. So, basically, at TAT, we also work with developers, and this puts us in quite a unique position to be able to build our own tools. And so, we’ve built the TCAP, which basically alerts tech companies to terrorist content when we detect it on their platforms, and we’ve been doing that since 2020. So, those are the three main workstreams; I’ll pause there, and if you’ve got any follow-up questions, feel free.

SG: Sure, well thank you for firstly clarifying everything that Tech Against Terrorism looks at and the partnerships that you build. It demonstrates the cross-spectrum approach that you’ve adopted. If we go into some of the specifics of the work that you all do, open-source intelligence is clearly a critical component. So, how has terrorist use of the internet evolved over the years and what would you say are the current concerns?

AC: Yeah, it’s a great question. I mean, I think it’s changed massively over the last few years. And I think I already alluded to this, but what we really have seen is that after the attack in Christchurch, which was live-streamed and obviously had a manifesto circulating, the bigger tech companies kind of came together and resurrected the GIFCT. And beyond that, they obviously invested a lot of resources into content moderation. And so, we saw a real sort of clean-up of these bigger platforms. On the one hand, that’s obviously really good, because it drives terrorists and violent extremists to more niche spaces. However, what it also did is diversify the threat, because bigger tech companies are still being used, but they’re sometimes also being used as what we call beacon platforms. So, they basically signpost to other, smaller niche platforms where you can watch a video or read a magazine. And [with] the smaller tech companies it’s a real issue because they often don’t have the resources to, first of all, understand that they are being exploited, let alone know how to counter that exploitation. And so, this is a real [issue]. It’s kind of persisting still, and we see a lot of small companies—and when I say small, I mean like file-sharing platforms that have one person basically hosting them—and it can be really difficult for them to respond to the sometimes enormous level of exploitation that we see.

So, that’s kind of the first thing. The second thing, I would say—and there’s a lot of debate about whether this will persist or not—is that decentralised platforms are becoming sort of a new thing [used] by terrorists and violent extremists, which really makes it more difficult for us to work out who is hosting the content. Because how Tech Against Terrorism works is that we usually alert tech companies to terrorist material when we find it through the TCAP—and we’ve seen, well, in the first year we had a 94% takedown rate, now about 84%—but basically the people behind those platforms take the material down, and with decentralised platforms it can be very hard to understand who we need to alert, like who’s actually hosting the material. So, it complicates matters further. Also a threat that we’re seeing currently, over the last months/year or so, is that we’re continuously seeing file-sharing platforms being created using open-source code, particular open-source code that is readily available on GitHub, for instance, and violent extremists and terrorists are basically creating their own file-sharing platforms with this. And that’s obviously quite a conundrum as well.

And then another thing that Tech Against Terrorism is strategically focusing on—it’s one of our priorities at the moment—is terrorist-operated websites. So, when you think of all tech companies, whether social media, file sharing, video sharing, and all the different [ones]—because obviously there’s a massive variety and diversity of platforms that get exploited—let’s say that we deal with all of that, then you still have terrorist-operated websites. And this makes it even more difficult because obviously we can’t alert the owners to a website, because they are usually the terrorists that own it. So, that’s a no-go. So, then we have to go to infrastructure providers, and with infrastructure providers, in order to take a terrorist-operated website down, there’s a lot more we need to do because of human rights and freedom of expression. Shutting a whole website down has a far bigger impact than removing a piece of content.

So, that’s obviously a very good thing; however, it can be very resource intensive to then write an entire brief saying, ‘Well, this is why we are very certain that this website is operated by terrorists.’ And so that’s what we are massively focusing on, because often these websites also have massive archives of terrorist content. And so, that’s obviously a massive risk. If you imagine that we were successful in, for example, taking all material off the social media companies, but you still have a website with a massive archive, that’s obviously continuing the problem and we’re not really solving anything. And what we also see is that they’re used for internal communications, etc. So, they’re a lot more exhaustive. When you think of why terrorists and violent extremists use the internet, you’ve got the operational side—fundraising, attack planning, and more secure communications—and then you’ve obviously got propaganda dissemination. And these websites almost bridge both, which is quite the danger there.

And then finally, in this day and age I can’t not talk about it. Obviously, we are all paying attention to the use of AI, generative AI, by terrorists and violent extremists. This is obviously incredibly new, but we are seeing a more experimental use of generative AI by terrorists and violent extremists. Whether that is to, for example, sanctify particular actors—basically turning lone actors into saints—or, as we’ve seen recently with the Nashville shooting, where the police officer that shot the perpetrator was sanctified using generative AI. What we’ve also seen, for example, is Islamic State translating their statements using AI. So, we’re seeing a slow but experimental use of it. And obviously, I think that’s definitely something that the whole of the industry will have to focus on.

SG: Did the pandemic change the way that terrorists use the internet? Obviously, we’re moving into a post-COVID-19 world, hopefully, but at the same time, everyone’s had to adjust and adapt to that environment. Is that also the case when it comes to terrorist groups?

AC: Yeah, I think it is, to be honest. I think there are still more studies coming out that actually assess to what extent COVID-19 has changed terrorists’ use of the internet. I would say that at the time, at first, at Tech Against Terrorism we were a bit hesitant to say, ‘It’s completely changed the game,’ because I think a lot of people were jumping to conclusions and saying, ‘Oh, gosh, now that everyone is sitting at home and is more online, it must be the case that all our young people are being radicalised in echo chambers online.’ I would always warn against that: let’s look at the evidence for that, let’s look at the data. But now, after COVID, we’re seeing that yes, there has definitely been an impact.

And I think it still remains a question of size: how many people are now being drawn into these spaces that weren’t before? But there has been a UK study, I believe by the Department of Justice, that showed that prior to 2007 a very low percentage of people said that they radicalised with an online component, whilst between 2019 and 2021, I believe 92% of convicted people said that, yes, the internet played a role in their radicalisation. And so that’s a pretty overwhelming statistic, I would say.

And I think what it definitely has done—which goes beyond terrorists’ use of the internet—is change the threat itself. I think the hybridisation of online harms really has massively accelerated through COVID-19, because you all of a sudden had a lot of conspiracy theories about COVID-19, where it came from, that it wasn’t real. And then when the vaccine came, anti-vaxxers, for instance, and the conspiracy theories around that. And what we saw was that a lot of these conspiracy theorists, or people that were interested in these theories, all of a sudden found themselves in more extreme channels as well, more extremist and terrorist channels. And partially this was just because they started to interact online and these conspiracy theory channels often link out to other channels, etc.

But also, for instance, after the attack on the Capitol, we saw that Parler was shut down. And when Parler was shut down, we then saw a massive uptick in Telegram channels, especially Far Right channels. And then we actually found evidence of, for example, recruitment manuals in terms of how to take someone from an idea like a conspiracy theory to more Far Right, antisemitic, extremist ideologies. And so, COVID-19 has, without a doubt, completely changed that. And that merging of harms is definitely something that has changed the threat online, but also how we should respond to it.

SG: Using the term you just mentioned, ‘merging of harms,’ lines are increasingly becoming blurred and sometimes harder to distinguish. How easy is it, therefore, to determine which platforms online are operated by actual terrorists and which may look somewhat concerning but are not necessarily breaking the law?

AC: Yeah, it’s a great question. And I think there’s a lot more we can do here, to be honest, because in our typology, what we say is that something is either a terrorist-operated website—that is, we are very confident that the website is actually owned by Islamic State supporters or members of Islamic State, for instance, or another terrorist organisation—or it falls into the middle category, which we sometimes call fringe platforms or libertarian platforms, which are basically set up to really uphold freedom of expression, and usually they feel that they are sort of the product of censorship elsewhere. And this goes more towards—and I’m definitely not saying that the people behind these are terrorists or violent extremists at all—but it’s more that these websites or companies might have a lot of terrorist and violent extremist content on the platform, just because they are so dedicated to freedom of expression that terrorists and violent extremists think, ‘Well actually, let me try this platform because they might not moderate and remove what I have to say,’ basically. And then there are the platforms that we all know, which are obviously absolutely not run by terrorists and violent extremists and get exploited against everything anyone had in mind.

But I think you’re right, and I think the question is very interesting, because there is more to be done in terms of when something is maybe not a terrorist-operated website, but we have a strong suspicion that, for instance, a social media company or another type of tech platform is being hosted by people that might have very similar ideologies to extremists and terrorists. And then what do we do there? Because that sometimes can really be a challenge.

And I would also say that, in terms of that freedom of expression, it’s obviously incredibly important that, for example, through our mentorship process, we help with very practical advice on how to write a terms of service to make sure that the rules of your platform explicitly say: this is how we define terrorism, this is how we define violent extremism, and if you espouse this material online we will moderate you off our platform. And infrastructure providers are in a way a bit behind on that, because that conversation hasn’t been had as much yet as with, for example, the social media platforms. And so, we need to really think through what the threshold of evidence should be that we provide to infrastructure providers to make sure that they can then take a website offline.

And then I think we’re stuck with the same issue: there are multiple ways of taking a website offline. It involves many different infrastructure providers and domain name registrars; it has a lot of different levels. And so, it can also be quite easy, if a terrorist-operated website is, for example, no longer supported by a particular infrastructure provider, for it to basically hop to another one. And so, for instance, what infrastructure providers could consider is: if you block a certain domain name, what are the alternatives that are very close to that domain name? Because terrorists and violent extremists will certainly try to re-upload that website using a very similar domain name—if they change it completely, then obviously their fans won’t be able to find it. And so, there’s a lot more that we can do there, basically. I hope that answered your question somewhat.
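[As a purely illustrative sketch of the kind of check an infrastructure provider or registrar could run, the short Python example below flags newly registered domain names that closely resemble a blocked one. The domain names and the 0.8 similarity threshold are hypothetical; this is not a description of any tool used by Tech Against Terrorism.]

```python
# Minimal sketch: flag look-alike domains that sit close to a blocked one.
# All domain names here are hypothetical; the threshold is an illustrative choice.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score between two domain names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_lookalikes(blocked: str, candidates: list[str], threshold: float = 0.8) -> list[str]:
    """Return candidate domains similar enough to the blocked domain to warrant human review."""
    return [c for c in candidates if similarity(blocked, c) >= threshold]

if __name__ == "__main__":
    blocked_domain = "example-propaganda.net"
    new_registrations = [
        "example-propaganda.org",   # same name, different TLD
        "examp1e-propaganda.net",   # character substitution
        "weather-forecasts.com",    # unrelated
    ]
    print(flag_lookalikes(blocked_domain, new_registrations))
    # -> ['example-propaganda.org', 'examp1e-propaganda.net']
```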

SG: Yes, absolutely. Another consequence of the pandemic has been a rise in online misogynistic doctrines of all kinds of ideological persuasions. Can you talk about the growing threat of misogyny and gender-based abuse online?

AC: Yeah, so this is definitely one of my focus areas. And I would say that it’s always interesting to think through whether the pandemic has changed it. I’m not sure if it has, to be very honest. I think it’s brought it more to light. And I think especially, not just online, but if you think about the offline developments that we’ve seen during and after the pandemic. So, I mean, misogyny is obviously as old as the world, but events like the decision to overturn Roe v. Wade, the protests that we’ve seen both for and against abortion, and basically a steady rollback of women’s reproductive rights, with men trying to control them—I wouldn’t say that’s just an online phenomenon; it’s very much offline as well.

In the UK, during COVID-19, we saw the death of Sarah Everard and more light being shone on femicide, but also on women’s position in society. I think those offline events can have a real impact, first of all on misogyny, offline and online, but also on the online ecosystem. For example, after Sarah Everard’s murder we saw, on notable incel forums for instance, loads of support shown towards her killer. So, I would say, offline and online, as always, they influence one another. And I would say that that’s the same for misogyny, so I wouldn’t confine the change, or the uptick in misogyny, to just an online phenomenon.

But if we purely look at the online realm, I would say the manosphere has been there for a long time, made up of men’s rights activists, Pickup Artists, Men Going Their Own Way, and then the worst, most notable case that I think everyone is mostly familiar with: incels, or involuntary celibates, who basically blame women for denying them sex because they find themselves so unattractive, and who hold this perceived grievance that they can’t have sex and therefore, in worst-case scenarios, become violent against women or men. And what is usually studied is the overlap between incels and Far Right extremism, which I definitely think exists and is something to consider, especially how one can influence the other and how, for example, incels can radicalise into further Far Right ideologies. But I would also say it’s really important to make sure that we consider the effect vice versa, because violent misogyny, whether online or offline, obviously needs to be studied in its own right: it’s a threat against women and that should be taken seriously, not just because it can influence someone to also become, for example, antisemitic or Islamophobic.

But I would say that there’s a lot of development now, and I think Andrew Tate has really changed this and raised these issues to the surface, because I wouldn’t say that Andrew Tate really fits into one of those manosphere categories that I just mentioned, but obviously he’s very misogynistic. And I think what we really need to consider there is what online misogyny actually is, how we should categorise it, and how we should treat it as a threat. And I would say that Andrew Tate especially has really raised the questions of what platforms, and especially their algorithms, are promoting, who they are promoting it to, and what the internet is affording in terms of features and algorithms in how that misogyny spreads, basically. And I think a lot more can be done about that. The Digital Services Act in the EU will certainly focus on algorithmic transparency, and the Online Safety Bill in the UK hopefully will also focus on algorithmic transparency, but I think, in the context of online misogyny, that is incredibly important to consider.

And in terms of that definition point, it goes wider than the manosphere and people like Andrew Tate. If you think about the harassment, both online and offline, that female journalists or female politicians are facing—I think Julia Ebner has put it as the ‘new glass ceiling’ for women, because if they go into politics the online misogyny that they face is so steep that it usually translates into an offline risk as well. And so, I think that online misogyny, to be honest, is still not being taken as seriously as it should be. And sometimes that is because it’s been put in this ‘terrorism or violent extremism’ paradigm, to see it as part of that, and I think we need to be very careful, because the threat doesn’t spread the same way, is what I would say. And so, we should respond to it in different ways as well.

SG: Yes, I certainly would echo your comments about Julia Ebner, who has done some very important research into this and is not only a friend of our podcast, but a former student of mine. So, there’s definitely a lot of praise I have for her. I’d like to explore with you the use of memes in the context of terrorism and violent extremism. Why are memes being used by terrorists and violent extremist actors? What is the goal and the agenda? Is this aimed at the current generation, who are susceptible and impressionable to the impact of those memes?

AC: Yeah, so at Tech Against Terrorism, we always say extremists and terrorists use the internet in basically the same way as normal people. So, we like memes for a certain reason, and they do as well. And I would say that, mostly, I think we’re familiar with [the fact that] good memes have to be funny and they have to signal something, and when we put that into the context of violent extremists and terrorists, it’s a very good way to signal to your in-crowd, because the sense of community you can build up through a meme grows. It’s also a very good way because if you don’t understand the context—or, for example, the antisemitic slur that a particular meme is trying to convey—then the out-group doesn’t understand it. And so, it’s a very good way to distinguish between your in-group and out-group.

It obviously has different purposes. Some say that it can be used, for example, for recruitment, because we sometimes see very clear memes—for example, we saw a recent one where Islamic State and al-Qaeda tried to basically make fun of one another, and that is a very pointed meme for a clearly terrorist audience. Usually what we see is that these memes bridge the more hardcore members and the people that might be entering a particular extremist space, and a meme might be able to take them along that pathway.

I think another thing to point out—and there’s usually a lot of focus, and it makes sense, on the Far Right’s use of memes, and I would definitely say they use memes a lot more than Islamist actors or Islamist terrorist actors—but Islamist actors, and Moustafa Ayad has actually done quite a lot of work on that as well, are definitely also experimenting with memes. And I think this goes into a wider trend where we always say the Far Right and Islamist terrorists and violent extremists don’t operate in completely separate spaces online. They very much learn from one another. And so, if Islamist actors notice that memes are actually a very good way to communicate to your in-group, then Islamist actors will probably learn from that. And, for example, the Far Right might look at Islamist actors and say, ‘Well, they are having a lot of success with file-sharing platforms, let us experiment with that as well.’ And so, I think it’s always important to make sure that we study the two and how they learn from one another.

And the final thing I’ll say on memes is that I find it very interesting how our memes and extremist memes might not be as separate as we would want them, or believe them, to be. Olivier Cauberghs has done a lot of research on this, in terms of how an innocent meme can be utilised by violent extremists and terrorists and then pop back up in our normal society. So, actually, one of my friends the other day sent me Pepe the Frog and I was like, ‘Oh, dear, I don’t think you understand what this implies to a completely different audience.’ And so, how that transfers from one to the next is interesting, and it also points to how difficult it is to moderate memes, right? Because if you’re not a subject matter expert on violent extremism, you might not have a clue as a content moderator that you’re actually looking at a deeply antisemitic slur. And teaching content moderators how to recognise that is a real challenge.

And also, memes are continuously changing. So, for example, with automated detection methods like hashing—which is basically a digital fingerprint that you can take of a piece of content—if the content changes, if it is edited, the hash won’t work anymore. So, there’s no automated way to go about this either. And maybe we shouldn’t even try automation, because of the freedom of expression element as well: when is a meme terrorist and when is it violent extremist? That debate, in terms of human rights and digital rights, is even more complicated. So, there are many different reasons, also tactical ones, why terrorists and extremists would use memes: because they are difficult to moderate and detect.
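[To make the hashing point concrete, here is a minimal, purely illustrative Python sketch using a standard cryptographic hash (SHA-256): even a one-character edit produces a completely different fingerprint, which is why exact hash matching breaks once a meme is altered. In practice, platforms also use perceptual hashes that tolerate small edits; this sketch only shows the exact-match limitation described above.]

```python
# Minimal sketch: an exact-match "digital fingerprint" breaks as soon as content is edited.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hash of a piece of content as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the raw bytes of an image or video file.
original = b"An example meme caption"
edited = b"An example meme caption!"   # the same content with a tiny edit

print(fingerprint(original))
print(fingerprint(edited))
# The two digests are completely different, so a hash list of known content
# will not catch the edited version.
print(fingerprint(original) == fingerprint(edited))   # False
```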

SG: Let’s continue to complicate the things that we’re discussing. So, we’re living in a world where deepfakes are becoming a mainstream topic. You mentioned the evolution of AI earlier in our discussion, and, just like memes, it’s now making its way into pop culture as well. Can deepfakes be manipulated for terrorist purposes?

AC: Yeah, I think it’s a very good question. And it’s an interesting one because I definitely feel like there is a lot of focus on deepfakes, and especially on the question of whether terrorists and violent extremists would use them. Interestingly, I would say the thought leaders on this one are Daniel Byman and Chris Meserole, from the Brookings Institution, and they wrote a paper on deepfakes which showed that they haven’t really seen any terrorist or violent extremist use of deepfakes, which is interesting. That’s not to say that there is not massive potential. And I think when we spoke to them on the podcast, they also noted that generative AI only really became the topic of conversation after the paper was produced, and so I think generative AI has really changed the game on that one.

So, in terms of what we’ve seen at Tech Against Terrorism, we haven’t seen deepfakes of the kind everyone talks about: what if a former leader of, for example, Islamic State or al-Qaeda were to pop up and give a speech, basically using a deepfake to do so? We haven’t seen that, but obviously, with generative AI, it has gotten a lot more complex. And what you could imagine, for instance—and this ties into disinformation as well—is what if terrorist and violent extremist groups start using deepfakes and generative AI to, for example, create a false flag attack and spread disinformation on that account?

That is something that we’re monitoring, but we haven’t seen it. We’ve seen experimentation with generative AI, but we haven’t seen this persist as of yet. But that’s not to say that this is not a massive worry, and with how quickly the technology is developing and growing, we could see violent extremists and terrorists starting to use it. And I think with deepfakes again, whilst not having seen it, I’m by no means saying that it won’t be used in future. And the one thing I would say is that I absolutely don’t want to come across like I don’t find deepfakes a threat: obviously, state actors have had a lot of success with this already and are already using it for disinformation purposes. So, it’s a question of when terrorists and violent extremists are going to start utilising it to a greater extent as well.

SG: Building on that as a final question, talk to me about the challenges of hybrid threats online and perhaps, for the benefit of our listeners, explain what hybrid threats actually mean.

AC: Yeah, so hybrid threats are the hybridisation of online harms. So, previously, where you might have had terrorist use of the internet—so terrorist content—as purely that, now we’ve got a hybridisation where online harms that aren’t necessarily terrorist or violent extremist are starting to overlap with it. So, you now have online terrorism, violent extremism, hate speech, discrimination, misogyny, conspiracy theories, and they basically all overlap now.

And so, it’s really changed the game in terms of, first of all, what the content looks like. And this poses questions, for example, for content moderators in terms of how to moderate it. Are they going to moderate something as terrorist content, violent extremism, or hate speech? Because on the content moderation side, platforms have rules for this, and they have definitions of content. And so, it becomes very hard. For example, we’ve seen a video that has about one minute of the Christchurch live stream, then a massive anti-vaxxer conspiracy theory, and then a load of the antisemitic tropes that usually sit behind conspiracy theories. So, you could say that that could also be discrimination or hate speech. And so, it becomes a lot more difficult to detect this material and also to say which online harm it actually is. And that’s kind of what we mean by the hybridisation of online harms.

It also feeds into what I would say is a trend of post-organisational terrorism and violent extremism. And I would say it goes even beyond lone actor attacks—because obviously those have been rising massively, especially on the far-right side—it’s material online that can’t be attributed to a terrorist organisation or a violent extremist organisation. It’s usually content that centres around a particular idea or a certain event like COVID-19. And so, that’s changed as well. And then I would say the final part of that is that it’s not always ideologically confined. So, content online, especially in more Far Right terrorist or violent extremist channels—for example, Atomwaffen Division is a designated Far Right terrorist group—is not all material that is neatly produced by Atomwaffen Division and labelled as such. It’s basically a lot of material that picks and chooses from different ideologies and different ideas, I would say. And as I mentioned, COVID-19 has had a massive effect on that as well.

And I think, again, I pointed this out earlier, but because of that hybridisation, people and networks of individuals are interacting with all of these different online harms in different spaces. And obviously our worry, as Tech Against Terrorism, is what happens when people that, for instance, believe in a conspiracy theory actually radicalise towards even more extreme ideologies and, in the worst-case scenario, justify violence based on that. And Bettina Rottweiler and Paul Gill have done a really good study where they basically look at the psychological factors that might predispose someone to radicalise from a relatively more innocent conspiracy theory to more violent ideologies. They show, for example, that low self-control combined with high levels of self-efficacy can actually make someone more likely to move from conspiracy theories to more extreme ideologies.

And so, this is really important to consider, because it really changes how we think about countering that threat—not just in terms of what it presents as online, but also how to tackle these things. Because for Islamic State, there’s almost such a brand identity that it’s very easy to identify what is official material produced by the Islamic State; we’ve got designation lists where governments say we shouldn’t have this material, and then we’ve got online regulation that says, well, we shouldn’t have this material online as terrorist content, so we remove it. And for this hybridisation of the threat and the overlap in online harms, we don’t have that pathway yet, or that ready-made solution.

And I think that is something that we really have to come to terms with, and I know that the Digital Services Act, or the DSA, and the Online Safety Bill are trying to make a start at this. But I don’t think we’re there yet in terms of the granularity of definitions, and I haven’t even brought up the term ‘borderline content,’ but this often coincides with it as well. So, it’s sometimes material that doesn’t even go against the terms of service of a platform, but it errs on the line. And so, it can be very hard to tackle that type of material. But I would say I’m mostly worried about that type of material, and about the hybridisation of the threat, because although it might seem more innocent, I do think that’s the type of material that has the potential to polarise society, to recruit people, and to undermine our democracy, really. I worry about that a lot, basically.

SG: Definitely something for all of us to be deeply concerned about, and as you mentioned, the challenges to democracy and civil society as well. So, things that we’ll need to watch out for. Well, let me thank you, again, Anne Craanen of Tech Against Terrorism, for joining us and helping to demystify a lot of the challenges that we are having to deal with online. Thank you so much.

AC: Thank you so much for having me. It was a great conversation.

SG: It’s been our pleasure.

Thank you for listening to this episode of NATO DEEP Dive, brought to you by NATO’s Defence Education Enhancement Programme (DEEP). My producers are Marcus Andreopoulos and Victoria Jones. For additional content, including full transcripts of each episode, please visit: deepportal.hq.nato.int/deepdive. 

Disclaimer: Please note that the views, information, or opinions expressed in the NATO DEEP Dive series are solely those of the individuals involved and do not necessarily represent those of NATO or DEEP.

This transcript has been edited for clarity.