Deepfakes are now trying to turn the tide of war

By Rachel Metz, CNN Business

During the third week of Russia’s war in Ukraine, Volodymyr Zelensky appeared in a video, wearing a dark green shirt and speaking slowly and deliberately while standing behind a white presidential podium emblazoned with his country’s coat of arms. Except for his head, the Ukrainian president’s body barely moved as he spoke. His voice sounded distorted and almost gritty as he appeared to tell Ukrainians to surrender to Russia.

“I ask you to lay down your arms and return to your families,” he appeared to say in Ukrainian in the clip, which was quickly identified as a deepfake. “This war is not worth dying for. I suggest you keep living, and I will do the same.”

Five years ago, almost no one had heard of deepfakes, those convincing-looking but fake video and audio files made using artificial intelligence. Now they are being used to influence the course of a war. In addition to the fake Zelensky video, which went viral last week, there was another widely circulated deepfake video showing Russian President Vladimir Putin supposedly declaring peace in the war in Ukraine.

Experts in disinformation and content authentication have been concerned for years about the potential to spread lies and chaos via deepfakes, especially as they become increasingly realistic. In general, deepfakes have improved tremendously in a relatively short time. Viral videos of a fake Tom Cruise tossing coins and covering Dave Matthews Band songs last year, for example, showed how convincingly real deepfakes can seem.

Neither of the recent videos of Zelensky or Putin came close to TikTok Tom Cruise’s high production values (they were noticeably low-resolution, for one thing, a common tactic for hiding flaws). But experts still consider them dangerous, because they show the lightning speed with which high-tech disinformation can now spread around the world. As they become more common, deepfake videos make it harder to distinguish fact from fiction online, all the more so during a war that is being fought partly online and is plagued by misinformation. Even a bad deepfake risks adding to the confusion.

“Once that line is eroded, truth itself will not exist,” said Wael Abd-Almageed, associate research professor at the University of Southern California and founding director of the school’s Visual Intelligence and Multimedia Analytics Lab. “If you see something and you can’t believe it anymore, then everything becomes fake. It’s not that everything will become true. It’s just that we will lose faith in anything and everything.”

Deepfakes during the war

In 2019, there was concern that deepfakes would influence the 2020 US presidential election, including a warning at the time from then-US Director of National Intelligence Dan Coats. But that did not happen.

Siwei Lyu, director of the computer vision and machine learning lab at the University at Albany, thinks this was because the technology “wasn’t there yet.” It just wasn’t easy to make a good deepfake, which requires smoothing out the obvious signs that a video has been faked (such as odd visual jitter around the frame of a person’s face) and making it sound as if the person in the video said what they appeared to be saying (either via an AI version of their actual voice or a convincing voice actor).

Now it’s easier to make better deepfakes, but, perhaps more importantly, the circumstances in which they are being used are different. The fact that they are now being used to try to influence people during a war is particularly pernicious, experts told CNN Business, simply because the confusion they sow can be dangerous.

Under normal circumstances, Lyu said, deepfakes may not have much impact beyond generating interest and gaining traction online. “But in critical situations, during a war or a national disaster, when people really can’t think very rationally and they have very short attention spans, and they see something like that, that’s where it becomes a problem,” he added.

Stifling disinformation in general has become more complex during the war in Ukraine. Russia’s invasion of the country was accompanied by a deluge of real-time information on social platforms like Twitter, Facebook, Instagram and TikTok. Much of it is real, but some is fake or misleading. The visual nature of what is shared – as well as its often emotional and visceral nature – can make it difficult to quickly distinguish what is true from what is false.

Nina Schick, author of ‘Deepfakes: The Coming Infocalypse,’ sees deepfakes like those of Zelensky and Putin as signs of a much larger disinformation problem online, one she believes social media companies aren’t doing enough to address. She argued that responses from companies such as Facebook, which quickly said it had removed the Zelensky video, are often a “fig leaf.”

“You’re talking about one video,” she said. The larger problem remains.

“Nothing really beats human eyes”

As deepfakes improve, researchers and companies are trying to keep pace with tools to spot them.

Abd-Almageed and Lyu use algorithms to detect deepfakes. Lyu’s solution, dubbed DeepFake-o-meter, allows anyone to upload a video to verify its authenticity, although he notes that it can take a few hours to get results. And some companies, such as cybersecurity software provider Zemana, are also working on their own software.

There are issues with automated detection, though, such as the fact that it gets trickier as deepfakes improve. In 2018, for example, Lyu developed a way to spot deepfake videos by tracking inconsistencies in how the person in the video blinked; less than a month later, someone generated a deepfake with realistic blinking.
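The intuition behind that blink-based approach can be illustrated with a toy sketch. This is a hypothetical simplification, not Lyu’s actual method (which analyzed facial landmarks frame by frame with neural networks): given a per-frame “eye openness” signal extracted from a video, count blinks and flag clips whose blink rate falls far below human norms (people typically blink well over a dozen times per minute at rest). The function names and thresholds here are illustrative assumptions.

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count blinks in a per-frame eye-openness signal (1.0 = fully open).

    A blink is a transition from open to closed and back to open.
    """
    blinks = 0
    closed = False
    for value in eye_openness:
        if not closed and value < closed_threshold:
            closed = True        # eye just closed
        elif closed and value >= closed_threshold:
            closed = False       # eye reopened: one complete blink
            blinks += 1
    return blinks


def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a real human.

    An early generation of deepfakes rarely blinked, because the models
    were trained mostly on photos of people with their eyes open.
    """
    minutes = len(eye_openness) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(eye_openness) / minutes
    return rate < min_blinks_per_minute
```

A minute of footage with no blinks at all would be flagged, while a clip with a normal blink rate would pass. As the article notes, this kind of single-cue detector was defeated within weeks once forgers simply added realistic blinking, which is why detection keeps turning into a cat-and-mouse game.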

Lyu thinks people will ultimately be better at stopping these videos than software. He would eventually like to see (and is interested in helping with) some sort of deepfake bounty hunter scheme emerge, in which people get paid to root them out online. (In the United States, there is also legislation to address the issue, such as a California law passed in 2019 prohibiting the distribution of misleading videos or audio of political candidates within 60 days of an election.)

“We’re going to see that a lot more, and relying on platforms like Google, Facebook, Twitter probably isn’t enough,” he said. “Nothing really beats human eyes.”

™ & © 2022 Cable News Network, Inc., a WarnerMedia company. All rights reserved.