Key Reflections

* The Ukrainian military is succeeding in the information battlefield by utilising social media to directly challenge Russian propaganda and disinformation.

* Social media influencers in the battlefield space can support military contingencies by sharing information about humanitarian operations or enemy attacks that have been successfully countered. Doing so helps build credibility.

* Four types of technology power social media disinformation campaigns: text generators that produce relatively believable text from simple prompts; text-to-image generators; deepfake videos created by transplanting faces onto existing footage; and fully synthetic avatars that can be generated in any language with any background.

* There are concerns that as artificial intelligence (AI) advances, both domestic actors and foreign states could use deepfake material to derail particular politicians or parties during election campaigns.

* Technology can potentially discern whether something is fake or not. However, high-end deepfake videos are becoming harder to detect with the naked eye. Synthetically generated content is, for now, easier to discern, but on social media feeds many people could still be misled by artificially generated images.

* The big tech industry actors could regulate themselves, but this would not prevent other actors from developing and proliferating the same AI capabilities.

Transcript:

SG: Dr. Sajjan Gohel

TH: Todd Helmus 

SG: Welcome to the NATO DEEP Dive podcast, I’m your host Dr. Sajjan Gohel and in this episode I speak with Dr. Todd Helmus, a Senior Behavioural Scientist with RAND.

In our discussion we talk about the importance of controlling the narrative during conflicts as well as the growing concerns about how artificial intelligence is being used for disinformation, propaganda and deepfakes as well as the role of state actors.

Todd Helmus, warm welcome to NATO DEEP Dive.

TH: Thank you for having me.

SG: Let’s look at the situation in Ukraine. Ukraine has had success in the information battlefield. Russia, which was once considered to be the preeminent force when it came to propaganda and disinformation, has found itself being directly challenged. What are the reasons underlying Ukraine’s success in terms of leveraging online influencers?

TH: Well, thanks for that question. I just put a piece together on War On The Rocks on that topic. I really find the most interesting piece about what’s happening in Ukraine is the degree to which Ukraine is leveraging just regular people. I’m not even sure if they’re purposely leveraging this or not. But regular folks, in the army and out of the army, are taking to social media to share their experiences and it just so happens that these experiences really work in Ukraine’s favour. They’re highlighting, talking about, the attacks that Russia is launching against civilian centres. There’s a great influencer named Margo Gontar, who is a journalist, she basically live-tweets air raid siren alerts in Kyiv, and you really get a palpable feel of what’s happening there just by following her on her feed. 

And there are a lot of other civilians out there sharing their viewpoints, similar viewpoints to that, but also other perspectives. And then of course, you can’t help but think about all of the coverage on what’s happening in Ukraine, on the successful attacks that they’re conducting against Russian forces. You almost get a skewed view of how successful Ukraine is just by following social media and seeing the degree to which Ukraine is successfully targeting Russian tanks, successfully targeting Russian troops in Bakhmut, and other places. And some of that, of course, is done by the Ukraine Government, but a lot of it is also coming from soldiers who have their own Twitter accounts. We follow Twitter here; I’m sure in the region they have other channels that they’re following as well, but you really get a palpable feel of what’s happening there because you have folks like Viking, there’s a really interesting Instagram account named Viking, he’s a Ukrainian pilot, you get a sense [of what it’s like] when he’s going on his missions and what it’s like for him to fly his attack chopper into combat, and there’s others as well, like Kriegsforcher, who is a Ukrainian Marine, he’s got nearly 70,000 followers, I think he’s on TikTok, but he’s posting a lot of live feeds on attacks against Russian forces.

I feel like there are several really key benefits to this. One is there’s a lot of research showing that people have inherent trust in what they call ‘someone just like me.’ A number of surveys have shown that people trust ‘someone just like me’ more than governments, corporations, and things like that. So, we have a lot of trust in those who we can relate to, and that’s really the value of these individual accounts. They appear, for all intents and purposes, to be normal folks in really tough situations, and by following them we build a relationship with who they are. And I think that relationship that you get through following someone on social media makes their message particularly powerful.

SG: That’s very interesting. What lessons can NATO member nations learn from the experience of the Ukrainian army when it comes to the utilisation of social media and influencing operations?

TH: Well, I can’t speak to NATO in general, but I know here in the United States there’s a lot of angst in the U.S. military about soldiers going out on social media. And soldiers do go out on social media, the military doesn’t prevent them from doing that. So, they’re still doing it, but there’s a lot of angst about it and a lot of angst about what they’re tweeting, and concerns that they might say or do something negative. And the real emphasis, I think, is on the fear that the higher authorities feel about these individual soldiers who have grown up on social media and are just really used to, and accustomed to, sharing their views and perspectives in a very visceral way with their audiences.

So, you can be scared of it and you can try and tamp it down, or you can just leverage it. That’s certainly, I think, what Ukraine is doing. As I write about, there is a very strong case, at least in the U.S. and I’m sure in Europe, for businesses leveraging their own employee base, and there are a lot of benefits to getting your own employee base out on social media. Because they work for you, there’s some semblance, some level of trust and motivation to say good things. Because they work for you, you have a touch point with them; you can provide training and education to help them not only be better at social media, but also know what the lines are: what are the things you should or should not talk about. And then, of course, you can follow these individuals and evaluate what types of impacts they have.

So, businesses do this. A number of Fortune 500 businesses are engaged in what are called employee advocate programmes. And I just see a lot of unique comparisons between that and what Ukraine is doing. And what I argue is that the U.S. military should develop some sort of employee advocate programme. You can start small or big, but you basically identify savvy social media folks within the military, and then you provide them some training and oversight on what they’re doing. Number one, you empower them. Say that you’re really excited about their skills and their capabilities, and you want to see them share their perspective of being in the military. You can provide training that can help improve their capabilities and, of course, as I mentioned, you can provide some education about things not to tweet about: don’t tweet about how you hate your commander, don’t share sensitive information.

And, especially here in the United States, we’re really struggling to recruit new people into the military; there is a very significant deficit of folks coming into the military. There’s a lot of potential power in soldiers, marines, airmen, and navy folk getting out on social media and talking to their own networks about their experiences, which oftentimes are really exciting. There’s a lot of doldrums in the military, but there are also a lot of exciting moments that could provide highly shareable content. And if they could share that and the military could welcome that, then I really think that we could do a lot to spread the message about what military life is like within the U.S., particularly within the age range of the folks that they want to recruit. And I think that could be very powerful.

SG: You use the word leverage a couple of times, and I feel we’ve almost answered what I wanted to ask you next, but I just wanted to see if there’s a way to expand this very important discussion that we’re having: the military might also think about what a social media presence looks like during military contingencies. Are you able to expand on what that would actually entail?

TH: Yeah, so there are two levels to approach this. One is the soldiers, and I’ll just use the word soldiers to reference all service personnel, but there’s one aspect of how you leverage your service personnel in this and then there’s a second aspect of how you leverage other influencers in the battlespace. First, on the service personnel side, number one, I should say you don’t want everybody out there going into battle live-tweeting and being concerned about their ‘likes’ and engagement data while they’re in the midst of a firefight, obviously, you don’t want that. You obviously don’t want them giving away their positions. And so there are going to have to be rules of the road, and it would probably have to be more strictly regulated than what might be the case in garrison.

But I could imagine providing a sort of commander’s intent to your employee advocates who, by the way, have been trained and educated and have earned some level of requisite trust: guidance about the types of content that are permissible and not permissible, ensuring that there are strict rules about not giving away positions and things like that. But then letting them share their experiences. And this will obviously vary according to the types of operations, right. You don’t want a high-level special operations unit doing this in the midst of a highly intense operation. But I’m sure there are other scenarios where, again, depending on the operation, you can see soldiers sharing information about humanitarian operations that they’re doing, or sharing information about enemy attacks that they’ve successfully engaged in.

So all of that would be very powerful. If you think about what is otherwise the case, particularly in the U.S., you would have combat cameramen who go out, and they’re in select units, and they often take, in my view, very posed pictures; they don’t come across as authentic in the way something snapped on a smartphone might. So, I think that’s really powerful, and obviously to make that work, you need doctrine to set the stage about what that would look like for different types of operations, and you need to integrate that into training. So, when a unit does their high-level inter-unit training events, you would want to make sure that there are individuals who are authorised to post and share content to some sort of made-up social media account. And then they would do that as part of the operation, and the public affairs folks would help do the after action on that to see if it worked or not.

So, that’s one piece. The second piece is that there are influencers out in the battlefield who are not Americans. I go back to thinking about Iraq or Afghanistan, maybe not Afghanistan because the social media presence wasn’t so strong. But imagine going back to a place like Iraq. In this day and age, where everyone has a cell phone and a social media account, you identify those people that support your cause. These are folks who live in the country, who have a level of credibility with their compatriots, and so you want to identify folks who are sympathetic to what you’re trying to accomplish. And then again, you go through this process of building a relationship with them, training them, and educating them to be more influential, to use their capabilities even better. You’re not going to tell them what to do, because you really want this to be authentic, but you could empower them to go out and share these stories.

And the U.S. does this on some level. We evaluated a programme for example, in the Philippines, and Nigeria for that matter, where some folks from the State Department helped train local civil society people to be better communicators. And then with that training, they just went out and did a lot of interesting things. We weren’t able to evaluate how effective that was, but these people were really excited to go out and do the things they were doing. These are folks that lived in Mindanao, Philippines, and they really disliked the whole terrorism problem that was happening there, and they wanted to be part of the solution, and the U.S. sort of provided a means for them to participate in that.

SG: You’ve provided a lot of important perspective to do with influence operations, with shaping the narrative, getting the information out there. One thing that we’ve also noticed in this current age has been the rise in technology, and in particular deepfake threats, as part of artificial intelligence (AI) driven disinformation campaigns. Can you provide an overview of the deepfake threat, the AI-driven technologies associated with it, and its contribution to disinformation campaigns?

TH: Well, yeah, this space is blowing up right now, as you’re well aware. And I’ll just note, there are four different types of technologies that are at play here, and they are at different levels of maturity. First, we know that there’s ChatGPT, which allows you, with a simple text command, to create relatively believable text. And it is very conceivable that adversaries will use programmes like ChatGPT to power their social media campaigns. Places like China, where they might lack a lot of English language expertise, or at least where that could be a limitation in their ability to peddle propaganda content to the U.S., now really have an automated means of creating that content in a way that does not sound like it came from someone from China; it sounds like it came from someone in the United States.

The second part is the text-to-image generators that are online right now. Almost any type of text command will generate images, and a number of those might well be in the wheelhouse of what you’re looking for. This just happened yesterday, we’re sort of dealing with this in the U.S. right now, where, with all the frenzy about whether or not President Trump will get indicted in New York City, someone disseminated a series of deepfake images showing President Trump being arrested. Those images spread like wildfire across the social media space. And my guess is that for almost anybody who’s really interested in running a disinformation campaign or conveying any sort of real message on social media, it would probably behove them to go to one of these generator websites and generate images that can back up whatever claims they have. That technology is good to go right now, at a very high level of maturity. The pictures look believable, and so I imagine that we’re going to see that explode in the next few weeks to months.

There’s another value to it too, in that it can power the images you put on your social media profile. Before, you had to use someone else’s photograph on the fake social media accounts that you’d create, and those could oftentimes be reverse image searched back to the original owner, which would show that it was a fake account, but now it’s really easy just to create a fake profile image.
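(As an aside on the reverse image matching mentioned here: one simple, widely used approach is perceptual hashing, where a suspect profile photo is compared against a set of known photographs. The sketch below is illustrative only, assuming Python with the Pillow and imagehash packages installed; the function name and file paths are hypothetical and not taken from the interview.)

```python
# Illustrative sketch: flag a profile photo that closely matches a known photograph
# by comparing perceptual hashes (a simplified stand-in for reverse image search).
# Assumes the Pillow and imagehash packages are installed.
from PIL import Image
import imagehash

def is_reused_photo(profile_photo_path: str, known_photo_paths: list[str],
                    max_distance: int = 5) -> bool:
    """Return True if the profile photo is a near-duplicate of any known photo."""
    profile_hash = imagehash.phash(Image.open(profile_photo_path))
    for path in known_photo_paths:
        # Subtracting two hashes gives the Hamming distance between them.
        if profile_hash - imagehash.phash(Image.open(path)) <= max_distance:
            return True
    return False
```

A fully synthetic face defeats this kind of matching, which is the point Helmus makes: there is no original owner to trace the image back to.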

The third piece would be the deepfake videos. There are several ways you can create those. You can do face transplants, so you have an actor or some sort of video footage, and you can transplant the face of whoever you want to deepfake onto that. So that is one way to do it. That technology is not fully there yet. It takes several months of work, at least two months of hard work, to create those videos in ways that they will be highly believable. There’s also another approach that you can use, a completely synthetic approach. China was just recently caught with a YouTube campaign featuring synthetically generated images that they used to create their anchorman as part of these fake news programmes that they had. That was generated from a programme called Synthesia; they basically sell software that allows companies to create training videos from scratch. You don’t need an actor, you just need to go to synthesia.io, and they’ll create an avatar in any language with any sort of background that you choose. Those images look pretty fake right now, but they are being used not only by China but also by Venezuela to disseminate some of their content.

Right now, the big value is that it’s just cheaper to do that than having an actor do it. But that technology will get better, and you will easily be able to fake key personalities that you might choose. So those are just a few of the technologies that are out there to this end. Like I said, I think the text generation, the image generation, is there, good enough to be used right now. I think it’s just a matter of time before adversaries really start to use these technologies in a coordinated systematic way to conduct their campaigns.

SG: You’ve laid out a lot of examples of how this technology can be utilised and manipulated, and I have to say it’s very disconcerting just how sophisticated it’s become, with each example more disturbing than the last. Todd, what clues are there that people can look for that would give away that a video or a piece of content is fake? Will this eventually become irrelevant because the technology is so good that it’s impossible to tell? Or are there small tells, forensic tools available, that would be able to discern between what is genuine and what is fake?

TH: I think with the high-end deepfake videos, the face transplants that take several months to put together, my guess is those can be done in a way where it would be very hard to discern with the naked eye whether it was fake or not. The synthetically generated video content, right now, is pretty easy to discern. It just doesn’t look real. The head movements don’t look real. The conversational tone doesn’t sound real. But that’s really only if you’re trying to pay attention. I imagine there are a lot of people who don’t pay attention to those cues, and they might be fooled. But it looks kind of fake. The text, especially the text that you could put into a social media feed, my guess is a lot of people will get fooled by that, and with the image generation, people would definitely get fooled. I’d say the exception is the funny image showing President Trump running away from police officers. That image had him running a little too fast for a 70-some-year-old man. But other than that, the images are pretty good.

Now there are technological ways to discern whether something is fake or not. I really can’t speak to the high-end technology of that, but part of the way that you create deepfake content, for example, is with two competing models, what’s called a GAN (generative adversarial network). One model is charged with creating deepfake content, and the other is charged with detecting that deepfake content. And so these work in concert to develop these highly believable images, because as the first iteration is created, the second model identifies what aspects of it look fake or need to be improved, and then the first model goes ahead and makes those improvements.
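(To make the generator-versus-detector dynamic described here more concrete, below is a minimal sketch of a single GAN training step, assuming Python with PyTorch installed. The network sizes, names, and data shapes are illustrative only and do not correspond to any specific deepfake system.)

```python
# Minimal GAN training-step sketch: a generator learns to produce fakes that a
# discriminator cannot distinguish from real samples, while the discriminator
# learns to tell them apart. Assumes PyTorch is installed.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # illustrative sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),        # outputs a fake "image" vector
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                          # real-vs-fake score (logit)
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Discriminator update: learn to score real samples high and fakes low.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator update: learn to produce fakes the discriminator scores as real.
    # (Only the generator's parameters are stepped here.)
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

Iterating this loop many times is what makes the output increasingly hard to detect: every weakness a detector-like model finds is fed straight back into the creation process.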

So, detection oftentimes is really built into the creation of a lot of this content. And that makes it, I think, particularly challenging to create effective detectors right now. Facebook, a couple of years ago, ran a competition and basically asked a lot of organisations to create detectors. And then they tested the effectiveness of those detectors. The best detectors, as of a couple of years ago, only detected about 65% of the fake video content. I’ve heard that the advances in creating content have probably outpaced advances in detecting content, so you might not even be that successful now. And as the videos get better and better, and look more perfect and have higher resolutions, the likelihood of effective detection will get lower and lower. And once you use a detector, then that detector is kind of outed, and those who are creating the video content can create videos that that detector can’t detect. A classic example of this is that in 2019, it was discovered that in deepfake videos the actors were not blinking at believable rates, in fact they weren’t blinking at all. Within about 30 days, that fix was made, and then all the deepfakes started blinking in a relatively believable way. So, the battlefield is definitely in favour of creating the content more so than detecting the content.

SG: Let me ask you this, we have seen over the last few years concerns that hostile state actors have interfered in elections around the world. Is it only a matter of time before certain states use deepfake material to derail particular politicians or political parties when it comes to election campaigns?

TH: So my answer to that is, it depends. And here I discriminate between foreign actors and domestic actors. Here in the United States, it’s a highly partisan world we live in. It is almost guaranteed that domestic actors will use deepfake content to attack political actors, so that will almost certainly happen. I think the question is whether foreign actors will do this. And it depends on a couple of things. It depends on what they’re trying to target. Think about the worst case scenario, where Russia launches a highly believable deepfake targeted at President Biden two days before the 2024 election. And that deepfake is so believable that it throws everybody off, and then all of a sudden he loses support, and now you have whoever is competing against him, maybe Trump or somebody else, win the election.

My guess is that’s definitely a worst case scenario of a foreign actor upending a US election. But I also imagine that that would incur some level of cost for that foreign actor. My guess is whoever created the video will get outed, and then there’ll be some sort of political, diplomatic price to be paid for doing so. I think the US could help shape the choices that adversaries make in the future by highlighting the different types of consequences that they may face by conducting such campaigns. And as we argue in our report, we need a wargame. We need to really wargame out the factors that different adversaries would consider in creating this type of content and wargame out the different types of deterrence strategies that could be put in place to prevent them from doing so.

SG: This whole thing seems nightmarish to some extent, because it’s almost like living in a sci-fi world, where some of the movies that have gained prominence in our lives are now becoming part of our real world. Is there no way to regulate this, or is this like the internet, where it basically becomes an ungoverned space in which material will continue to expand and proliferate?

TH: I definitely agree. It’s going to be a bit of a surreal world we’re going to be wading into in the near-term as this type of technology proliferates. I really believe that any decent disinformant would be well-advised to create deepfake images to go along with whatever else they’re doing, so I really imagine we’re going to see a strong proliferation of that. I guess the question is, is there regulation that could stop it? It might depend on the different regulatory environments in different countries. Here in the United States, where freedom of speech is constitutionally guaranteed, it might be difficult for US government laws and regulations to prevent people from creating this content, because it will be seen as an extension of free speech. There are a couple of state laws on the books on this, but I don’t think those laws have been put to use yet. And whether or not they withstand constitutional scrutiny is a major question, and I would probably not bet on it.

So, can the tech industry regulate itself? That’s a good question. I feel like the cat is out of the bag right now. This technology is out there. You just need some decent engineering experience to put together some of this technology with some of the code that’s available right now. And it will get easier and easier to create over time. I think there might be value in regulation from the big actors like OpenAI, Google, and Meta. There’s certainly a sort of war going on about their ability to create text generation capabilities, and certainly text-to-video and text-to-image capabilities. Hopefully, they will engage in some level of self-regulation, either as a consortium or on their own. Think about how the platforms have their own departments that focus on trust and safety; I would hope that those organisations would focus trust and safety initiatives on their artificial intelligence capabilities. But that’s not going to prevent a lot of other actors, who don’t care about trust and safety or who want to leverage it for their own ends, from developing that capability. I feel the cat’s out of the bag a little bit.

SG: The cat’s out of the bag indeed. Final question, Todd: where do you see the other concerns when it comes to technology? We’ve touched upon influence campaigns, and we’ve looked a lot at AI and deepfakes. Is there another dimension when it comes to technology that we should also be paying attention to from a concern perspective?

TH: Maybe the other angle on this would be the ability of foreign actors to leverage artificial intelligence to engage in some kind of command and control of their own information operations. There’s the technology that exists to create the discrete content, the videos, the images, the text. But I feel like the big concern will be when actors learn to put all that together and develop a technology that can synchronise that so that you could have basically autonomous propaganda campaigns running online, which could be conducted at scale. You don’t need to have x number of people managing y number of social media accounts. You just need one computer to manage all your social media accounts and so you can just keep having more social media accounts. I don’t think we’re there yet for that. It’s hard enough for humans to command and control even a small number of accounts without getting detected. But I think the capability will be there, that they will be able to do some of this autonomously.

SG: Well, I think this is all going to be very important for a lot of the decision-makers around the world as to how to handle it, because it’s very clear that this is something that is morphing, developing, and being utilised a lot for nefarious purposes, and it’s something that is going to require very urgent attention in the near-term. Todd Helmus, let me thank you once again for joining us on NATO DEEP Dive. You’ve provided us with a lot of important perspective and food for thought.

TH: Well, thank you for having me. I’ve really enjoyed this conversation.

SG: It’s been our pleasure. 

Thank you for listening to this episode of NATO DEEP Dive, brought to you by NATO’s Defence Education Enhancement Programme (DEEP). My producers are Marcus Andreopoulos and Victoria Jones. For additional content, including full transcripts of each episode, please visit: deepportal.hq.nato.int/deepdive. 

Disclaimer: Please note that the views, information, or opinions expressed in the NATO DEEP Dive series are solely those of the individuals involved and do not necessarily represent those of NATO or DEEP.

This transcript has been edited for clarity.