Experts at the World Economic Forum are sounding the alarm: deepfake technology is spreading at a dizzying pace, with the estimated number of fake videos growing by more than 900% per year*. While some deepfakes are made for entertainment, far more often they serve as a tool of abuse and a sophisticated instrument of propaganda. Find out how deepfakes distort reality.
What is deepfake?
Artificial intelligence (AI) is increasingly effective at simulating human imagination. One of the key techniques behind this is the generative adversarial network (GAN), a kind of stand-in for creativity that makes it possible to classify objects and generate realistic images.
Its effectiveness rests on two opposing neural networks, loosely inspired by the human nervous system. With their help, AI collects, analyzes and combines previously provided information, then generates new data from it, such as a unique image, while the second network judges how convincing that output is. A character created this way is deceptively similar to a real person, able to mimic the facial expressions, movements, voice and even mannerisms of the original.
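To make the idea of two competing networks concrete, here is a minimal, illustrative GAN sketch in PyTorch. The layer sizes, learning rate and flattened 28x28 image format are assumptions chosen for brevity, not details of any particular deepfake system.

```python
# Minimal GAN sketch (illustrative only; sizes and settings are assumptions).
import torch
import torch.nn as nn

# Generator: turns random noise into a fake image (flattened 28x28 pixels).
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Discriminator: estimates the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to tell real from fake,
    then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    noise = torch.randn(batch, 100)
    fake_images = generator(noise)

    # 1) Train the discriminator on real (label 1) and fake (label 0) images.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator answer "real".
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

The key design point is the tug-of-war: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing ones, which is exactly why GAN-generated faces can end up deceptively realistic.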
Black PR of deepfakes
Until now, photographs and audiovisual recordings have been among the most trusted historical sources. They shaped how history is perceived and – although their authors have always resorted to manipulative techniques – provided tangible proof of past events.
The problem of trust emerged with the development of artificial intelligence: the appearance of deepfakes revealed the threat it poses. The technology is not yet advanced enough to falsify events or conflicts on a large scale, yet it is already toying with our visual perception.
What you see may not really exist.
From Reddit to TikTok
The term “deepfake” itself first appeared in 2017, when a Reddit user began sharing pornographic videos on the platform in which the faces of the actresses had been replaced with those of famous celebrities, including Taylor Swift and Emma Watson.
Since then, the Internet has been rapidly filling up with fake images and audio. What’s more, applications have appeared on the market that let amateurs create deepfakes of their own. Today, deepfakes not only satisfy unfulfilled erotic fantasies, but also entertain, educate and – unfortunately – disinform.
Thus, “the eternally young Tom Cruise” has nearly 5 million followers on TikTok, Salvador Dalí takes selfies with visitors to his museum in Florida, and Volodymyr Zelenskiy calls on Ukrainian soldiers to lay down their arms.
Threats beyond the borders of cyberspace
Deepfake technology has social, moral and political implications. It builds an atmosphere of mutual distrust and indifference toward what we see or hear. Social media analysts suggest it could lead to an “infocalypse.” Part of the public already questions events that undoubtedly took place, such as the moon landing.
Pornographic deepfakes are a form of violence against women. Because they directly exploit a victim’s image, criminals use them for blackmail or publish them online as retaliation (so-called revenge porn). In extreme cases, fabricated material sexualizes children.
Going further, deepfakes used to spread propaganda damage reputations and distort public opinion. Disinformation spread this way not only manipulates the public, but can sway elections or push someone off the political scene.
How to detect a deepfake?
At this point, artificial intelligence is still far from perfect. Above all, it struggles to render details consistently: wrinkles, facial hair, reflections in glasses or body positioning. In most cases, a close look at the generated character is enough to spot the imperfections.
Other methods of detecting deepfakes include:
- image quality assessment,
- metadata analysis to determine a file’s authenticity (see the sketch after this list),
- forensic analysis,
- examination of the soundtrack,
- comparison of the image with other sources,
- using a platform that detects fake images.
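As a simple illustration of the metadata check mentioned above: photos taken with a camera usually carry EXIF tags (camera model, capture time), while generated or heavily edited images often carry none, or carry traces of editing software. The sketch below uses the Pillow library; the file name `suspect_photo.jpg` is a hypothetical example, and a missing tag set is only a hint, never proof of manipulation.

```python
# Hedged sketch: inspect an image's EXIF metadata with Pillow.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> dict:
    """Return human-readable EXIF tags, or an empty dict if the file has none."""
    image = Image.open(path)
    exif = image.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = inspect_metadata("suspect_photo.jpg")  # hypothetical file name
    if not tags:
        print("No EXIF metadata found - worth a closer look.")
    else:
        for name, value in tags.items():
            print(f"{name}: {value}")
```

In practice this check is combined with the other methods on the list, since metadata can be stripped or forged just as easily as it can be left intact.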