Key Reflections

* There are major concerns surrounding the convergence of misinformation, borderline content, and foreign interference that can lead to violent extremism. 

* Failing to anticipate the next iteration or innovation that movements and groups can take could lead to the mainstreaming of extremist ideas within society and the broader political spectrum. 

* Terrorist and violent extremist actors use end-to-end encryption and encrypted services to share propaganda, recruit, and plan attacks. There remains a lack of data on these phenomena, as those technologies are still relatively new.

* Compromising the integrity of end-to-end encryption and internet security is not an option. There are, however, other types of data, such as metadata, which can be leveraged to disrupt terrorist and criminal activities provided the harvesting of that data is done within clearly defined legal frameworks. 

* It is possible to regulate virtual currencies to prevent malicious abuse by actors who seek to circumvent banking systems in order to coordinate fundraising, including crowdfunding.

* There are concerns regarding the ways that artificial intelligence (AI) can be weaponised, especially through disinformation and the blending or creation of content that could be used to undermine trust in governments.

Transcript:

SG: Dr. Sajjan Gohel

HF: Hadelin Feront

SG: Welcome to the NATO DEEP Dive podcast, I’m your host Dr. Sajjan Gohel and in this episode I speak with Hadelin Feront who serves as part of Meta’s Global Counter-Terrorism Policy team.

Hadelin leads the company’s efforts on regulatory affairs, transparency, and end-to-end encryption. In our discussion we talk about a range of issues including online disinformation and radicalisation as well as foreign interference and the growing prevalence of artificial intelligence (AI).

Hadelin Feront, a warm welcome to NATO DEEP Dive.

HF: Thank you. It’s a pleasure to be here.

SG: Let’s look at online communication technologies. They’ve made our daily lives easier, but they’ve also been utilised by terrorists to communicate across borders and to amplify propaganda. Looking at terrorism and extremist content online over the last few years, what would you say are the key areas of concern that we need to be observing?

HF: Thank you, Sajjan. The key area of concern for me at the moment is the convergence of misinformation, borderline content, and foreign interference that can lead to violent extremism. I think this is a phenomenon that we have seen increase over the last few years. And it is a blending phenomenon that makes it harder to detect, and also harder to enforce against, because it doesn’t conform to habitual notions or definitions of terrorist content or violent extremist content. And so, we have a situation in which industry as well as policymakers are playing catch-up by trying to develop ad hoc policies to counter these new approaches to propaganda and online radicalisation. And this ends up leading to situations where those policies might be effective in some cases, because they’ve been developed and tailored to specific movements or groups, but eventually they end up not being applicable at scale. And so, we fail to anticipate the next iteration or innovation that those movements and groups can take. And because we are handling this blending of different and hard-to-define concepts and ideas, it leads to a mainstreaming of extremist ideas within society and the broader political spectrum. And I think that is a challenge to our political culture and institutions, as well as to security.

SG: You spoke about the challenges to political culture. We know that, increasingly, all kinds of terrorist groups, of various ideological beliefs, lurk online, through the dark web and through encrypted messaging. What are the challenges when it comes to dealing with those entities that utilise the dark web and encrypted messaging for communication, for plotting, for planning?

HF: Yeah, thank you. That’s a great question. In terms of end-to-end encryption and encrypted services, terrorist and violent extremist actors do indeed use end-to-end encryption. I think it’s important to focus on what use they make of it. One is sharing propaganda, another key use is recruitment, and the third use case is attack planning. The question, however, is how big this phenomenon is. And that’s where, unfortunately, there’s still a lack of data, in part due to the fact that those technologies are new, but also due to the fact that they are encrypted, and therefore it is a different kind of technology where you won’t have the same type of data, and the same volume of data, that you might have on public surfaces.

So, this is why we see increased interest among researchers in how we can understand terrorist and violent extremists’ use of end-to-end encryption. Meta, then Facebook, commissioned research in 2020 by Tech Against Terrorism on the use of end-to-end encryption by terrorist and violent extremist actors. And this is a very interesting report in many ways, because it illuminates the factors that play into terrorists’ minds when they’re considering using encrypted services and messaging services.

So, what do we learn about the perception of terrorist actors? Well, that they will basically choose a particular app or messaging service based on how secure they perceive the platform to be for them. If the platform has strong policies against terrorist use, and if there are controls around the way that users can report and block content, this will typically undermine the trust, if you will, that terrorist actors place in a particular app. So, that’s important information, because policymakers and industry alike are in a very tough debate about what the right approach is to counter this use, even if it’s marginal. And I think the first step in that debate is to recognise that end-to-end encryption is becoming a key pillar of internet security and privacy. That is the first step, because it is providing society as a whole with incredible benefits. And this was underlined recently in a 2022 report by the UN Office of the High Commissioner for Human Rights, which found that end-to-end encryption makes a key contribution to fundamental rights in our current era.

So, recognising those benefits, I think it’s important to state that compromising the integrity of end-to-end encryption, and therefore of internet security and privacy, is not really an option. We cannot have a technical solution, as some policymakers suggest, where you have end-to-end encryption on the one hand but scanning or backdoors on the other. Those are currently not technically feasible. However, what we do know is how to deter terrorists from using encrypted services, and how to disrupt the use that they may make of those services. How do we deter? By having strong policies, by having strong security features, and by empowering users to report and to block content that would violate those policies.

And how do we disrupt? This is, for me, the key notion: by tackling discoverability between bad actors, connectability between those actors, and the virality of the URLs or other content that they may try to share via encrypted messaging. And of course, this requires a concerted approach by industry, regulators, and law enforcement agencies alike: a targeted use of what we call metadata, within the safeguards of clearly defined legal frameworks. This enables companies to compare data points between public and encrypted services, to identify high-risk users and interactions, and therefore to take mitigating action.
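To make that idea concrete, here is a minimal sketch in Python of the kind of metadata-based risk scoring Hadelin describes, combining public-surface signals with metadata from an encrypted service, never message content. Every signal name, weight, and threshold below is a hypothetical assumption for illustration only; it does not reflect Meta’s actual systems or policies.

```python
# Illustrative sketch: toy metadata-based risk scoring. All signals,
# weights, and thresholds are hypothetical, not any platform's real system.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Public-surface signal (e.g. prior policy violations on public pages).
    public_violations: int
    # Metadata-only signals from the encrypted service (no content access).
    reported_by_users: int          # user reports against this account
    flagged_contacts: int           # links to previously actioned accounts
    url_shares_to_new_groups: int   # rapid URL fan-out, a proxy for virality

def risk_score(s: AccountSignals) -> float:
    """Weighted sum over metadata signals; weights are purely illustrative."""
    return (2.0 * s.public_violations
            + 1.5 * s.reported_by_users
            + 1.0 * s.flagged_contacts
            + 0.5 * s.url_shares_to_new_groups)

REVIEW_THRESHOLD = 5.0  # hypothetical cut-off for escalation

def needs_review(s: AccountSignals) -> bool:
    # Scores above the threshold are escalated for human review within
    # the applicable legal framework, not actioned automatically.
    return risk_score(s) >= REVIEW_THRESHOLD

if __name__ == "__main__":
    account = AccountSignals(public_violations=1, reported_by_users=2,
                             flagged_contacts=1, url_shares_to_new_groups=0)
    print(risk_score(account), needs_review(account))  # 6.0 True
```

The point of the sketch is the design constraint, not the arithmetic: the scorer only ever consumes metadata and public-surface data points, which is what allows disruption without weakening the encryption itself.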

SG: These are all very interesting and important points that you bring up, and they lead me to one of the questions I wanted to ask you, which is: what is the role of tech companies when it comes to algorithms that suggest tailored content to a specific user, but which could also potentially lead to a process of radicalisation?

HF: On one hand, what industry seeks to do is to help users curate content based on their preferences. We have an enormous amount of content, and everyone recognises that, due to this volume, it is impossible for individuals to sift through that content and find that which is most relevant to them. That is the basic use case behind how we apply algorithms across the industry: to organise and curate content and help users find information that is most relevant and useful to them. I think that is an important starting point. A simple sketch of that basic ranking idea follows below.
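As a simple illustration of that basic use case, here is a toy ranking step that orders candidate items by similarity to a user’s topic preferences. The topic vectors and scoring are assumptions invented for this sketch; they are not any platform’s actual ranking system.

```python
# Illustrative sketch: preference-based content ranking with hypothetical
# topic weights. Not any platform's real recommendation algorithm.
from math import sqrt

def cosine(u: dict[str, float], v: dict[str, float]) -> float:
    """Cosine similarity between two sparse topic-weight vectors."""
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_feed(user_prefs: dict[str, float], items: list[dict]) -> list[dict]:
    """Order candidate items by preference similarity, most relevant first."""
    return sorted(items,
                  key=lambda item: cosine(user_prefs, item["topics"]),
                  reverse=True)

if __name__ == "__main__":
    prefs = {"football": 0.8, "cooking": 0.5}
    candidates = [
        {"id": 1, "topics": {"cooking": 0.9}},
        {"id": 2, "topics": {"football": 0.7, "news": 0.2}},
        {"id": 3, "topics": {"politics": 1.0}},
    ]
    print([item["id"] for item in rank_feed(prefs, candidates)])  # [2, 1, 3]
```

The same mechanism is what makes the echo-chamber concern discussed next possible: a ranker that only ever maximises similarity to past preferences will keep narrowing what a user sees.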

Now, there has been a lot of interrogation and debate around the engagement models that certain services and companies use, and allegations that some of these models prioritise content that would be extremist in some way, or at least encourage people into behaviours and speech that is hateful, for example. I think there is a fundamental misconception in that debate. Why? Because all of the companies that have been asked to testify as to how they use algorithms have very clear policies against, for instance, any behaviour online that would be illegal, any speech that would be hateful, and certainly against terrorist content. So, what happens is that, as the industry has built the control mechanisms to counter this type of content, the vast majority of that content is removed from platforms before people even see it.

However, what has been, I think, more delicate and harder to address is the type of content I was mentioning before: content that blends things that are not illegal and that do not violate the policies of those companies, at least in the letter of those policies, and that therefore circumvents the enforcement mechanisms, and which can also game the algorithm to create echo chambers that feed into people’s preferences. And we have to recognise that this is an issue, this is a real problem. But it’s a problem at two levels; it is not just an industry problem. It is a policy challenge for industry and regulators alike, because it comes down to the question of how we define content that encourages a form of extremist thinking and behaviour. As we know, and you are, of course, a preeminent expert on terrorism, we’ve never been able to agree on a definition of terrorism, and I would argue much less on a definition of what type of content leads to violent extremism, for instance.

So, it’s a policy challenge for industry and regulators alike, but it’s also an enforcement challenge, because we need to be able to counter this type of extremist engagement and these echo chambers without undermining fundamental rights such as freedom of expression. So, we see here the magnitude of the challenge, and this is only talking about the major industry players. What about also enabling smaller platforms to crack down on those kinds of echo chambers? There are certain platforms that are built around the model of being an echo chamber for extremist content. So, as we often say in the field of countering violent extremism, I think this is a whole-of-society challenge that requires a whole-of-society approach.

SG: You spoke about policy and enforcement challenges. One aspect that is very much a keyword now in the virtual world is cryptocurrency. Given the potential of cryptocurrencies to serve as a vehicle both for illicit financing and for terrorist organisations to continue to fund their activities, is it possible to regulate virtual currencies to prevent malicious abuse by actors? I’ve noticed that it’s certainly coming up in multilateral forums such as the G20, but I’d be interested in your take on this.

HF: Yes, I think it’s possible to regulate, and I think efforts are already underway. As different countries and blocs, the US, the EU, think about cryptocurrencies at the national and regional levels, we are already seeing an awareness that this new technology could be used by terrorist organisations, or, even more likely at this stage, criminal organisations, in order to circumvent the mechanisms that have been built over the years within the traditional banking systems to prevent those criminal and illegal activities. So, it’s possible; however, at the moment, in the case of terrorism, there’s little data indicating sophisticated financial plots involving digital technology. And we continue to see a preference for simplicity, especially within jihadi groups, communities, and networks: a preference for cash handouts and for transfers using services such as Western Union, most often involving small amounts of money.

That being said, there are other ways that the internet can be used to coordinate fundraising. I think gap areas include crowdfunding as well as online shops, in particular things like merchandise, what I’ve called the DOI lifestyle, the dangerous organisations and individuals’ lifestyle, because the accessibility of online shops today is unprecedented. The scale is also truly global; it’s not confined to the big platforms, as anyone can build an online shop and promote it. And I think this is an area that is extremely difficult to police, and that will require cross-industry coordination on a global scale. There is, however, potential for increased use of cryptocurrencies by terrorist organisations as they become more mainstream. I think a key challenge in regulating cryptocurrencies will be to mature those regulations and to keep pace with currencies that will continue to evolve and innovate; it’s in the rapidity with which they evolve that loopholes may emerge, which criminal organisations and terrorist groups will seek to exploit.

SG: Talking about the potential for exploitation, artificial intelligence is something that we keep hearing about; it’s growing, it’s proliferating, it’s becoming more sophisticated. Is AI potentially a double-edged sword, in that it could be used, on the one hand, through predictive software to aid counter-terrorism efforts, and, at the same time, be utilised by terrorists to weaponise their agenda online?

HF: I certainly think there is a legitimate concern around the ways that AI can be weaponised by various actors in ways that we do not yet fully understand or anticipate. And just yesterday, there was an open letter signed by more than 100 AI pioneers and specialists, including, of course, Elon Musk, who are now calling for a temporary halt on the development of AI technologies, for the very reason that, in the race to build the next big AI model, we are not sufficiently taking into account the harms those models could cause. We’re not even able to anticipate how they could be weaponised, because we do not fully understand how those AI models function, the so-called ‘black box’ AIs. Within that letter, we also see something that I’ve mentioned before, which is the concern around disinformation and the blending or creation of content that could be used to undermine trust in government, and therefore to radicalise, without us having the ability to detect that this content is actually fake, because AI is able to create the appearance of validity, of legitimacy. So that is an example of how an advanced AI can surpass humans and create things that we are unable to counter. I think that’s a very important area of risk that we need to be aware of, and we need to be very cautious in how we approach, in particular, regulating AI.

In terms of artificial intelligence being used to counter terrorism, I know of different programmes and initiatives that seek to use AI in an almost predictive fashion to understand where the next risks will emerge. But similarly to what we have just said about the potential for AI to be weaponised, I think there is another risk here, which is: how accurate would these predictions be, coming from models that we don’t fully understand and that are not necessarily trained on a representative sample of data? In the case of terrorism, we have limited, fragmented data, and therefore a natural challenge in training an AI to a level that would prevent glaring biases, and therefore inaccuracies, in the results.

SG: To expand this aspect of AI a little bit more, Russian President Vladimir Putin once predicted that the winner of the AI arms race, as he called it, would be the ruler of the world. In many ways, that reflects his own megalomaniac tendencies, and it seemed to suggest that the concern was more about state actors than about terrorist groups misusing artificial intelligence. Do you think that is the larger concern…hostile state actors utilising artificial intelligence for propaganda, for disinformation, for controlling a narrative in a very warped sense?

HF: No, I would disagree with that assessment. I think the biggest risk, and we have seen this in recent weeks since OpenAI made ChatGPT available, lies in the marketplace: because of competition between major industry players, which, by the way, are far more advanced in artificial intelligence than state actors, there is a need to make those services available as quickly as possible, to put them on the market as quickly as possible. And therein lies the risk that actors, state actors, but also extremist actors and criminal organisations, may seek to, as we have described, weaponise artificial intelligence and create a range of issues that we will have difficulty identifying and countering. That is not to say, however, that artificial intelligence does not have an enormous potential for good. And what I hope to see in the coming months and years is how state actors, but also civil society, will be able to leverage artificial intelligence to counter problems that we have enormous difficulty comprehending because of their complexity, such as climate change, urban mobility and growth, or trade in illicit goods and criminal activities. Therein lies an enormous potential for artificial intelligence to help us meet the challenges of our current era.

SG: It’s very interesting, and I guess it’s also somewhat of a relief that Vladimir Putin doesn’t get to control the narrative on everything, as he perhaps wishes to. A final question, Hadelin: for Meta, what would you say are the likely challenges that we will have to factor in and engage with when it comes to many of the topics that we’ve spoken about? If you had a crystal ball, where do you see the challenges that we’re going to have to face, ones that perhaps are not imminent today but will be down the road?

HF: Well, I think for industry, states, and civil society alike, the main challenge that I see over the next 10 years is that, as the internet evolves towards an even more decentralised and, to some extent, fragmented version of itself, the question of internet governance will become ever more central, and ever more difficult to achieve at scale in a way that is consistent. We have seen recently, with the war in Ukraine but also with the COVID pandemic, an increase in the so-called splintering of the internet. We have seen tendencies in some countries to claim digital sovereignty. We have seen an increase in internet outages and blockages in different countries as a way to handle societal instability. I think these are signs, indications, of the type of internet that we’re increasingly heading towards, one characterised by fragmented regulations but also by different strategies in terms of how the internet is leveraged by government and society alike. And so, as technologies such as artificial intelligence and the metaverse become more mainstream, we will also see a more difficult discussion and debate between industry and national regulators on how to police and how to govern these expanding and layered realms of digital worlds and capabilities. So, as we multiply the layers of the internet that we can use and engage with, of course we also multiply the risks. And that’s why governance, I think, will be key to our being able to successfully identify emerging risks, but also to create the safeguards that we need to continue to ensure a safe and productive use of those amazing technologies.

SG: Absolutely. There’s a lot of important food for thought that you have left us with. Let me just thank you, again, Hadelin, for joining us on NATO DEEP Dive and hope to have you back in the future.

HF: Thank you so much, Sajjan. It was a pleasure. And likewise, looking forward to our next talk.

SG: Thank you for listening to this episode of NATO DEEP Dive, brought to you by NATO’s Defence Education Enhancement Programme (DEEP). My producers are Marcus Andreopoulos and Victoria Jones. For additional content, including full transcripts of each episode, please visit: deepportal.hq.nato.int/deepdive. 

Disclaimer: Please note that the views, information, or opinions expressed in the NATO DEEP Dive series are solely those of the individuals involved and do not necessarily represent those of NATO or DEEP.

This transcript has been edited for clarity.