Deepfakes are now trying to change the course of the war

“I ask you to lay down your arms and return to your families,” Ukrainian President Volodymyr Zelensky appeared to say in a clip that was quickly identified as a deepfake. “This war is not worth dying for. I suggest you keep living, and I’m going to do the same.”

Five years ago, almost no one had even heard of deepfakes: convincing-looking but fake video and audio files created using artificial intelligence. Now they are being used to try to influence the course of a war. In addition to the fake Zelensky video that went viral last week, there was another widely circulated deepfake video of Russian President Vladimir Putin supposedly declaring peace in the war in Ukraine.

Experts in disinformation and content authentication have worried for years about the potential for deepfakes to spread lies and chaos, especially as they become more and more realistic. Overall, deepfakes have improved significantly in a relatively short period of time. Last year, for example, viral videos of a fake Tom Cruise flipping a coin and covering Dave Matthews Band songs showed how convincingly real deepfakes can seem.

None of the recent Zelensky or Putin videos came close to the production quality of the Tom Cruise TikToks (for one thing, they were noticeably low-resolution, a common tactic for hiding flaws), but experts still consider them dangerous. That is because they show the lightning speed with which high-tech disinformation can now spread around the world. As they become more common, deepfake videos make it harder to tell fact from fiction online, especially during a war that is unfolding online and rife with disinformation. Even a bad deepfake risks making things worse.

“Once that line is eroded, there will be no truth at all,” said Wael Abd-Almageed, an associate professor at the University of Southern California and founding director of its Visual Intelligence and Multimedia Analytics Lab. “If you see something and you can no longer believe it, then everything becomes false. It’s not that everything will become fake. We will simply lose trust in anything and everything.”

Deepfakes during the war

Back in 2019, there were fears that deepfakes would affect the 2020 US presidential election, including a warning at the time from Dan Coats, then US Director of National Intelligence. But that did not happen.

Siwei Lyu, director of the Computer Vision and Machine Learning Lab at the University at Albany, believes this is because the technology “wasn’t there yet.” It was simply not easy to make a good deepfake, which requires smoothing out the obvious signs that a video has been faked (such as the odd visual jitter around the edges of a person’s face) and making it sound like the person in the video said what they appeared to say (either with an AI version of their actual voice or with a convincing voice actor).

It’s easier to make better deepfakes now, but perhaps more importantly, the circumstances of their use are different. The fact that they are now being used to try to sway people during wartime is especially harmful, experts told CNN Business, simply because the confusion they sow can be dangerous.

According to Lyu, under normal circumstances deepfakes may not have much impact beyond drawing interest and gaining traction online. “But in critical situations, during a war or a national disaster, when people really can’t think very rationally and they have a very short span of attention, and they see something like this, that’s when it becomes a problem,” he added.

Debunking disinformation in general has become more difficult during the war in Ukraine. Russia’s invasion of the country has been accompanied by a flood of real-time information hitting social platforms such as Twitter, Facebook, Instagram, and TikTok. Much of it is real, but some is fake or misleading. The visual nature of what is shared, and how emotional and visceral it often is, can make it hard to quickly tell truth from fakery.

Nina Schick, the author of Deepfakes: The Coming Infocalypse, sees deepfakes like those of Zelensky and Putin as signs of a much larger online disinformation problem that she believes social media companies are not doing enough to address. She argued that the responses from companies such as Facebook, which quickly removed the Zelensky video, are often a fig leaf.

“You are talking about one video,” she said. The big problem remains.

“In fact, nothing compares to human eyes”

As deepfakes get better, researchers and companies are racing to keep up with tools that can detect them.

Abd-Almageed and Lyu use algorithms to detect deepfakes. Lyu’s solution, cheekily named the DeepFake-o-meter, allows anyone to upload a video to check its authenticity, though he notes that it can take a couple of hours to get results. And some companies, such as cybersecurity software provider Zemana, are working on their own software as well.

However, automated detection has its problems. For one thing, it gets harder as deepfakes improve. In 2018, for instance, Lyu developed a way to spot deepfake videos by tracking inconsistencies in the way the person in a video blinked; less than a month later, someone generated a deepfake with realistic blinking.
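As a rough illustration of the kind of signal that blink-based approach relied on, here is a minimal sketch that flags clips whose blink rate falls far below the human norm. This is a toy example, not Lyu's actual detector: the per-frame eye-aspect-ratio (EAR) values are assumed to come from a facial-landmark detector (here they are synthetic), and the thresholds are illustrative guesses.

```python
# Toy sketch of blink-rate screening for deepfakes (NOT Lyu's actual method).
# Assumes per-frame eye-aspect-ratio (EAR) values from some facial-landmark
# detector; we fabricate them below to keep the example self-contained.

from typing import List

EAR_BLINK_THRESHOLD = 0.21   # illustrative: EAR below this reads as "eyes closed"
MIN_BLINKS_PER_MINUTE = 8    # illustrative: real people typically blink ~15-20x/min
FPS = 30

def count_blinks(ear_per_frame: List[float]) -> int:
    """Count closed-eye episodes: transitions from open to below threshold."""
    blinks, eyes_closed = 0, False
    for ear in ear_per_frame:
        if ear < EAR_BLINK_THRESHOLD and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= EAR_BLINK_THRESHOLD:
            eyes_closed = False
    return blinks

def looks_suspicious(ear_per_frame: List[float], fps: int = FPS) -> bool:
    """Flag clips whose blink rate is implausibly low for a real person."""
    minutes = len(ear_per_frame) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(ear_per_frame) / minutes < MIN_BLINKS_PER_MINUTE

if __name__ == "__main__":
    # Synthetic one-minute clip: eyes open (EAR ~0.3) with only two brief blinks.
    clip = [0.3] * (FPS * 60)
    for start in (300, 1200):            # two blinks of ~100 ms each
        for i in range(start, start + 3):
            clip[i] = 0.1
    print("suspicious:", looks_suspicious(clip))  # True: 2 blinks/min is too few
```

The caveat in the article applies directly to this kind of check: a generator that simply adds realistic blinks defeats it, which is why detectors built around a single visual artifact tend to age so quickly.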

Lyu believes that people will ultimately be better at stopping such videos than software. He would eventually like to see (and is interested in helping with) some kind of deepfake bounty-hunter program emerge, where people get paid to root them out online. (In the United States, there has also been some legislation to address the problem, such as a California law passed in 2019 that bans the distribution of deceptive video or audio of political candidates within 60 days of an election.)

“We will see this much more often, and relying on platform companies like Google, Facebook, and Twitter is probably not enough,” he said. “In fact, nothing compares to human eyes.”
