Five years ago, no one had even heard of deepfakes — convincing-looking but fake video and audio files created using artificial intelligence. Now, they are being used to influence the course of a war. In addition to the fake Zelensky video that went viral last week, another widely circulated deepfake showed Russian President Vladimir Putin supposedly declaring peace in the war in Ukraine.
Neither the recent Zelensky nor Putin videos came close to the polish of the Tom Cruise deepfakes on TikTok (for one thing, both were noticeably low-resolution, a common tactic for hiding flaws). But experts still consider them dangerous, because they show how quickly high-tech disinformation can now spread around the world. As deepfake videos become more common, they make it harder to tell fact from fiction online, especially in a war that is unfolding online and rife with disinformation. Even a bad deepfake risks making things worse.
“Once that line is broken, there will be no truth itself,” said Wael Abd-Almageed, research associate professor at the University of Southern California and founding director of the Visual Intelligence and Multimedia Analytics Lab. “If you see something and you can no longer believe it, then everything becomes false. It’s not that everything will become true. It’s just that we will lose confidence in anything and everything.”
Deepfakes during the war
Siwei Lyu, director of the Computer Vision and Machine Learning Lab at the University at Albany, believes deepfakes played little role in earlier conflicts because the technology “wasn’t there yet.” It simply was not easy to make a good deepfake, which requires smoothing out the obvious signs that a video has been faked (like the odd visual judder around the edges of the person’s face) and making it sound like the person in the video said what they appear to say (either with an AI-generated version of their real voice or with a convincing voice actor).
It’s easier to make better deepfakes now, but perhaps more importantly, the circumstances of their use are different. The fact that they are now being deployed to influence people during wartime is especially harmful, experts told CNN Business, simply because the confusion they sow can be dangerous.
According to Lyu, under normal circumstances, deepfakes may not matter much beyond generating interest and attention on the internet. “But in critical situations, during a war or a national catastrophe, when people really can’t think very rationally, and they have a very short attention span, and they see something like that, that’s when it becomes a problem,” he added.
“You’re talking about one video,” she said. The larger problem remains.
“In fact, nothing compares to human eyes”
As deepfakes get better, researchers and companies are racing to keep pace with tools to detect them.
Automated detection has problems of its own, however: as deepfakes improve, it only gets harder. In 2018, for example, Lyu developed a way to spot deepfake videos by looking for inconsistencies in how the person in the video blinks; less than a month later, someone generated a deepfake with realistic blinking.
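The blink-based approach rests on a simple observation: early deepfake models, trained mostly on photos of open eyes, produced faces that rarely blinked. As an illustrative sketch (not Lyu's actual detector), one can summarize eye openness per frame as an "eye aspect ratio" and flag clips whose blink rate falls far below the human baseline of roughly 15–20 blinks per minute. The threshold values and the synthetic input below are assumptions for demonstration; in practice the per-frame ratios would come from a facial-landmark model.

```python
def count_blinks(ear_per_frame, closed_thresh=0.2):
    """Count blinks as runs of frames where the eye aspect ratio
    drops below the closed-eye threshold."""
    blinks, eye_closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not eye_closed:
            blinks += 1          # eye just closed: start of a new blink
            eye_closed = True
        elif ear >= closed_thresh:
            eye_closed = False   # eye reopened
    return blinks

def looks_synthetic(ear_per_frame, fps=30, min_blinks_per_min=6):
    """Heuristic: a face that almost never blinks is suspicious."""
    minutes = len(ear_per_frame) / fps / 60
    rate = count_blinks(ear_per_frame) / minutes
    return rate < min_blinks_per_min

# Synthetic demo: 60 seconds of mostly open eyes (ratio ~0.3)
# with only a single 5-frame blink at the 30-second mark.
frames = [0.3] * 1800
frames[900:905] = [0.1] * 5
print(looks_synthetic(frames))  # → True (one blink per minute is too few)
```

The counter-move is equally simple, which is the article's point: once the cue is public, a forger only needs to insert plausible blinks into the generated video and the heuristic fails.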
“We will see this much more often, and relying on platform companies like Google, Facebook and Twitter is probably not enough,” he said. “In fact, nothing compares to human eyes.”