What are Deepfakes? Good vs Bad Use Case Examples

Find out how modern deepfakes work and how they manage to trick both biometric verification systems and average internet users into thinking the synthetic video is legitimate.

Deepfakes: their evolution and tips for detection

From a fake video of Ukraine’s President Volodymyr Zelensky to the viral “Tom Cruise” TikTok sensation, deepfakes have stormed the internet, and yet many of their use cases aren’t designed for the greater good. This AI-generated synthetic media merges a sense of reality with events that never happened or people who never existed.

Having taken root on Reddit in 2017, the term “deepfake” is now linked not only to innocent face-swapping videos but also to misinformation, propaganda, pornography, and cyberattacks.

But are deepfakes really that bad? We look into modern AI, deepfakes’ origins, use cases, and risks to businesses and society as a whole. 

What is a Deepfake?

A deepfake is a manipulated image, video, or audio recording designed to produce realistic media of people saying things they never said, or of people who never existed at all. It’s created with sophisticated deep learning algorithms, a branch of artificial intelligence (AI).

A common example of a deepfake is a fake video of a celebrity, politician, influencer, or another prominent figure, crafted for shock value. That’s why deepfakes often damage reputations, cause distress, and spread misinformation.

A sophisticated deepfake can trick people into thinking it’s real. Deepfakes can be used in many ways across different industries, including entertainment and cybersecurity. Criminals also use the technology for deceptive practices: they collect information from emails, magazines, and social media posts to produce deepfakes. Sometimes fake images are combined with real audio to make the result appear more realistic to the general public.

What is Deepfake Technology? 

Deepfake technology is the AI behind deepfakes: deep learning and machine learning algorithms that learn from vast datasets.

The two algorithms, a generator and a discriminator (together known as a generative adversarial network, or GAN), work like this when creating a deepfake image:

  • The first algorithm generates a realistic replica of an image.
  • The second algorithm detects if the replica is fake and identifies the differences between it and the original.

The first algorithm then adjusts the synthetic content based on feedback from the second. It keeps refining the output until the second algorithm can no longer detect any false elements, repeating the process as many times as needed.
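The generate-critique-adjust loop above can be sketched in a few lines. Note this is a deliberately toy numeric illustration, not a real GAN: actual deepfake systems train deep neural networks, while here the “generator” only learns a single number so the feedback loop is easy to follow. All names and values are illustrative.

```python
import random

REAL_MEAN = 5.0  # stand-in for the statistics of genuine images

def generator(mean):
    """Produce a 'fake' sample around the generator's current best guess."""
    return mean + random.uniform(-0.5, 0.5)

def discriminator(sample):
    """Return a signed error signal: how far the sample is from real data."""
    return sample - REAL_MEAN

def train(gen_mean=0.0, lr=0.1, steps=300):
    """Repeat the generate-critique-adjust loop until fakes look real."""
    for _ in range(steps):
        fake = generator(gen_mean)
        feedback = discriminator(fake)   # near 0 = indistinguishable
        gen_mean -= lr * feedback        # adjust based on the critique
    return gen_mean
```

After enough iterations, the generator’s output statistics converge on the real data’s, at which point the discriminator’s feedback is effectively noise — the same dynamic that lets a trained GAN produce images the discriminator can’t tell from real ones.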

Consequently, deepfake technology learns from a lot of examples, quickly analyzing facial features, people’s expressions, details of human speech and behavior, and more. Simulating such details with high accuracy would be a complex task for people in real life. 

2017 was the year deepfake technology quickly took off after emerging in a subreddit

The term “deepfake” surfaced in 2017 after a Reddit moderator coined it and created a subreddit called “deepfakes”. The subreddit mainly featured celebrity videos created with face-swapping technology, which later evolved into larger hoaxes, including pornographic content.

How are Deepfake Videos Created?

One way people create deepfake videos is by training an AI model on existing footage of a specific person. Algorithms analyze the target in the source video, capture attributes like facial expressions and body language, and then apply those features to a new video.

Another way to generate deepfake videos is to overdub existing footage with newly generated audio that mimics the person’s voice. Voice cloning, used for video games in particular, models a person’s vocal patterns so the synthetic voice can be made to say anything the creator chooses.

Lastly, a popular method is lip-syncing. Here, the goal is to match a voice recording to a video so it seems the person in the video is actually saying the recorded words. The audio itself can also be a deepfake, which makes the result even harder to spot.

Famous Positive Deepfake Examples

Deepfakes are often associated with malicious uses, such as revenge porn or, in today’s tense geopolitical environment, attempts to shift the narrative on important political or economic issues. However, a deepfake can be positive as well. For example, David Beckham was featured in the official “Malaria Must Die” campaign: a deepfake video showed the football star speaking in nine different languages to raise awareness about the disease.

Another beneficial use of deepfakes is AI-generated news presenters. Reuters has used such videos to showcase virtual reporters delivering personalized broadcasts, including automated sports news summaries. As with the Beckham campaign, news sites also use AI dubbing in various languages to make the news more accessible.

Deepfakes are also popular for creating new art or enhancing existing pieces, whether in film or music. Another great example is the “Dalí Lives” exhibition at the Dalí Museum in St. Petersburg, Florida. The technology recreates a realistic portrait of the famous artist, mimicking his voice and enabling him to “speak” with the museum’s visitors.

Why are Deepfakes Risky?

Bad actors use deepfakes to fulfill their malicious intentions, which vary depending on the use case. For example, people create deepfakes to:

  • Bypass identity verification systems. Criminals create fake AI videos to pass biometric and liveness checks when opening new bank accounts, or use voice samples harvested from a victim’s social media to collect personal data and take over accounts.
  • Harass or blackmail another person. Deepfakes are commonly misused for explicit content, for example by superimposing a person’s face onto a pornographic video. This can be used as blackmail or to damage the person’s reputation.
  • Steal sensitive information. Deepfakes can be used as a tool of impersonation to steal another person’s credentials or credit card info. This technique is successfully employed when impersonating company executives in order to access personal information, posing a serious threat to the whole organization.

[Infographic: use cases of fraudulent deepfakes]

In general, deepfakes are risky because they are closely tied to all sorts of scams. A good example is the social engineering attack that combined deepfake audio with fake emails to scam a company in the United Arab Emirates. This corporate heist resulted in $35 million in losses after the fraudsters impersonated the voice of the company’s director and convinced a bank manager to authorize the transfers.

Similarly, a deepfake video was used to impersonate Sam Bankman-Fried back in 2022. A Twitter (now X) user posted a clip in which the FTX founder appeared to offer free crypto to users affected by FTX’s collapse.

Can Artificial Intelligence Detect Deepfakes?

AI can identify deepfake content, which is why many security specialists agree that you have to fight AI with AI. This is especially important now that deepfakes have become more sophisticated and spotting anomalies has become more challenging. AI models can be trained to spot these manipulations and detect unnatural human expressions or voices.

There are certain strategies for using artificial intelligence to detect deepfakes. For example, a common practice of this sort is implementing a two-stage process — capture and analysis. This involves recording images or videos of the physical world and then checking the captured content by assessing its key details for authenticity, such as objects or facial features. 
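The two-stage idea can be outlined as a minimal pipeline. This is a hedged sketch, not any vendor’s actual detector: `capture()` is mocked with hand-written feature dictionaries (a real system would extract facial landmarks from camera frames), and the analysis stage simply flags frames whose features deviate from a reference template beyond a tolerance — a stand-in for far more sophisticated forensic checks.

```python
# Stage 1 (capture): mocked here; a real system records frames from a camera
# and extracts measured facial features per frame.
def capture():
    return [
        {"eye_distance": 1.00, "mouth_width": 0.60},
        {"eye_distance": 1.02, "mouth_width": 0.59},
        {"eye_distance": 1.45, "mouth_width": 0.20},  # anomalous frame
    ]

# Reference template and tolerance are illustrative values.
REFERENCE = {"eye_distance": 1.0, "mouth_width": 0.6}
TOLERANCE = 0.2

# Stage 2 (analysis): check key details of the captured content for
# authenticity by comparing each frame against the reference.
def analyze(frames):
    """Return indices of frames whose features deviate from the reference."""
    flagged = []
    for i, frame in enumerate(frames):
        deviation = max(abs(frame[k] - REFERENCE[k]) for k in REFERENCE)
        if deviation > TOLERANCE:
            flagged.append(i)
    return flagged

suspicious = analyze(capture())  # indices of frames worth a closer look
```

The value of splitting capture from analysis is that the same analysis stage can be reused across input sources (live camera, uploaded video, selfie verification), while the capture stage handles the source-specific details.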

[Infographic: the main red flags of deepfake content]

On a more practical level, many regulated industries are obliged to verify their users during onboarding and thus face fraudulent deepfakes directly. For this reason, even less strictly regulated sectors now use robust ID verification as a first line of defense, detecting both forged ID documents and deepfakes submitted at the selfie verification stage as attempts to bypass Know Your Customer (KYC) checks.

iDenfy’s Deepfake Detection Solution

At iDenfy, we offer an AI-powered, fully automated verification solution that accurately detects deepfakes and other fraudulent behavior, allowing you to catch fraudsters before they ever access your network.

Unlock your free trial here

Frequently asked questions

What is a Textual Deepfake?

A textual deepfake is artificially generated text designed to look as if a real person wrote it. These deepfakes are used on social media as part of propaganda campaigns; for example, Russian bot networks have been widely used to spread fake news about the war in Ukraine. The goal of textual deepfakes is to make it appear that many users online share the same opinions or beliefs.

How Did Deepfakes Evolve?

Which Deepfakes are the Most Difficult to Spot?

Are Deepfakes Illegal?

Save costs by onboarding more verified users

Join hundreds of businesses that successfully integrated iDenfy in their processes and saved money on failed verifications.
