What Are Deepfakes?
A deepfake is generated video, image or audio that imitates the appearance and sound of a person. Also called “synthetic media”, it’s so convincing at mimicking the real thing that it can fool both people and algorithms.
Deepfakes are generated with AI, sometimes even in real time, and their most common applications are in videos or as augmented reality filters.
“Deepfake” as a term comes from the combination of the words “deep learning” and “fake”, to represent something fake that is a result of deep learning technology.
While there is a growing market of consumer apps that use deepfake technology for entertainment, such as FaceSwap, as the technology becomes more widespread and available, it will be deployed for nefarious purposes. In fact, it already is, to a limited degree – as we will see below.
How Do Deepfakes Work?
Deepfakes are generated via machine learning – more precisely, using deep learning and generative adversarial networks (GANs).
In layman’s terms, this means that two neural networks compete against one another: one tries to generate an image that the other cannot distinguish from its training data, while the other tries to avoid being fooled in exactly this way. As a 2014 paper explains:
“A generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.”
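To make the adversarial idea concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. It learns a toy 2-D distribution rather than face images, and the model sizes and hyperparameters are made up for the example, but the generator/discriminator tug-of-war is the same one described in the quote above.

```python
# Minimal GAN training sketch (PyTorch). Illustration only: the "real" data here
# is a simple 2-D Gaussian instead of face images, but the adversarial loop is
# the same idea: G tries to fool D, D tries not to be fooled.
import torch
import torch.nn as nn

latent_dim = 8

# G maps random noise to fake samples; D scores how likely a sample is real.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for the training data (real face footage in an actual deepfake model).
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    real = real_batch()
    fake = G(torch.randn(len(real), latent_dim))

    # 1) Train D to tell real samples (label 1) from G's fakes (label 0).
    d_loss = bce(D(real), torch.ones(len(real), 1)) + \
             bce(D(fake.detach()), torch.zeros(len(fake), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train G so that D labels its fakes as real, i.e. maximise D's mistakes.
    g_loss = bce(D(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Real deepfake systems use far larger convolutional networks and huge image datasets, but the training procedure follows this same two-player loop.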
GANs are pioneering technology in the field of computer vision and are very successful at generating images that resemble humans. Moreover, there are commercial GAN services available should one lack the necessary computing power to run these models at home.
The core challenge of the technology used to be the availability of training data for a given person. This is why, with plenty of footage available, celebrity deepfakes were popular at first.
But as technology advances, it becomes easier to create deepfakes from single images or short sound recordings.
Why Are Deepfakes Dangerous?
Because they convincingly misrepresent reality, deepfakes come with several dangers, including online fraud, disinformation, hoaxes and revenge porn.
First, though, we should clarify that not all deepfake applications are dangerous or of dubious legal and ethical standing. In fact, most well-known examples come from entertainment and creative expression.
However, the ability to impersonate a person on live video, on the phone, or even during a video call is set to pose a challenge to online verification systems, which rely on face or voice recognition either done by a human or automatically by identity verification software. In fact, researchers from Sungkyunkwan University showed in March 2021 that most current facial recognition APIs can be beaten by using deepfakes.
According to the State of Deepfakes report, a mind-boggling 96% of the deepfake videos that researchers from the Amsterdam-based company Deeptrace identified online were pornographic.
This might seem relatively harmless compared to financial fraud, but it is not: it can lead to extortion and blackmail, with the deepfake creator threatening to release a fake private video of someone that looks so believable the victim feels they have no choice but to pay up.
Another issue highlighted by journalists and researchers from the BBC to Wired and Forbes is deepfakes’ potential to cause political and social harm through disinformation and fake scandals – as the image of a public figure can be manipulated to depict them doing or saying things they never did.
Two researchers from the EU Observer reported how on April 21, 2021, the Foreign Affairs Committee of the Dutch Parliament had a phone conversation with someone they thought was a Russian opposition politician but was, in fact, a voice deepfake. Commenting on the incident, they added that “once deepfakes enter the market of political disinformation, the problems we have now may look like child’s play”.
For these reasons, deepfakes are considered an important weapon in cyberwarfare arsenals, with national governments trying to mitigate deepfakes that harm their interests while building the capacity to deploy them against adversaries if needed.
3 Examples of Deepfakes
Let’s look at three notable examples:
- Buzzfeed’s Synthetic Obama video voiced by Jordan Peele
- GAN-generated convincing human faces on ThisPersonDoesNotExist.com
- The popular Reface face swap app
1. The Obama/Jordan Peele Deepfake video
Perhaps the most famous example of a deepfake, which largely brought the technology into the mainstream spotlight, was an April 2018 video released by Buzzfeed that was based on a synthetic Barack Obama and an authentic Jordan Peele. To date, it has amassed more than 8.7 million views on YouTube.
2. People (and Cats) Who Do Not Exist – Examples of Image Deepfakes
Our second example is the website This Person Does Not Exist, which loads a new GAN-generated human face every time you refresh it.
While many of the results are very convincing, there is the occasional glitch and an overall uncanny valley effect. Nevertheless, the website is so popular that it has inspired spinoffs, including This Cat Does Not Exist.
3. An App That Swaps Your Face with a Celebrity’s
Available on both the App Store and Google Play, this mobile app by Neocortext, Inc boasts 4.8/5 and 4.5/5 stars respectively on the two application marketplaces.
Reface allows users to deepfake themselves, swapping their faces with those on celebrity and meme videos, gifs and images. Its library is largely crowdsourced.
What Are Shallowfakes?
Shallowfakes are misleading images and videos manipulated for propaganda or profit, not through elaborate machine learning but with more conventional and accessible image, audio and video editing tools.
These could be, for example, slowing down or speeding up a video, changing its audio or simply renaming a file to imply something much more nefarious.
Although the term is new and mirrors the word “deepfake”, shallowfakes have been around much longer, dating back to when conventional editing tools first emerged. They may rely on older technology, but they can be equally convincing.
One often-cited example comes from October 2018 and shows CNN reporter Jim Acosta interacting with a White House intern. The shallowfake version of this video was sped up to make Acosta appear aggressive towards the intern, who was trying to take his microphone.
Eventually, the video was shown to have been manipulated but, in the opinion of some commentators, the damage to Acosta’s reputation had already been done.
How Does Deepfake Fraud Work?
Regardless of the type of deepfake used – video, audio or even static images – deepfake fraud largely uses the technology to allow the fraudster to impersonate another person.
This could be the victim’s boss or manager (CEO fraud), a distant relative or similar: someone they know of but do not interact with frequently, yet would not want to disappoint or disobey.
Having manipulated the voice or image of such a figure in the victim’s life, the fraudster will then instruct the victim to transfer money to the fraudster’s account or make other questionable moves.
In fact, this type of fraud already has several prominent examples. One is the case of a managing director who was convinced by a deepfake voice, created using voice cloning, to transfer $240,000 to an unfamiliar supplier in a different country.
How Can You Spot Deepfakes?
Be it a video or image, there are a few things to watch out for to catch a deepfake:
1. inconsistencies in the skin (too smooth or too wrinkly, or its apparent age doesn’t match the hair)
2. shadows that look off around the eyes
3. glare errors on glasses
4. off-looking facial hair
5. unrealistic facial moles
6. too much or too little blinking
7. lip color off compared to the face
8. unrealistic movement of and around the mouth
Most deepfakes leave you with a feeling that something is off, and that’s because errors in the generation process leave visible artifacts.
The above are some key signs that the human eye can be trained to spot. In conjunction with other verification techniques, they can help you catch a deepfake; a rough sketch of how one such cue could be automated follows below.
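As an illustration, here is a rough Python sketch that flags unnatural blink rates (point 6 in the list) using the eye aspect ratio. It assumes eye landmarks have already been extracted per frame by some face-landmark detector, and the threshold and blink-rate range are illustrative assumptions rather than authoritative values.

```python
# Sketch of one automated deepfake cue: flagging unnatural blink rates.
# Assumes 6 (x, y) eye-contour landmarks per frame from an external detector.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2), points ordered around the eye contour."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ears: list[float], fps: float, closed_thresh: float = 0.2) -> float:
    """Count blinks: the eye closes (EAR drops below threshold) and then reopens."""
    blinks, closed = 0, False
    for ear in ears:
        if ear < closed_thresh and not closed:
            closed = True          # eye just closed
        elif ear >= closed_thresh and closed:
            blinks += 1            # eye reopened: one blink completed
            closed = False
    minutes = len(ears) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Simulated EAR trace: roughly one blink every three seconds at 30 fps.
# A healthy adult blinks around 10-20 times per minute; rates far outside
# that range are one (weak) signal that footage may be synthetic.
trace = ([0.3] * 89 + [0.15]) * 20
print(f"approx. {blinks_per_minute(trace, fps=30):.1f} blinks/min")
```

In practice a check like this would only ever be one signal among many, combined with the other cues above and with dedicated deepfake-detection and identity verification tools.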
How Is Deepfake Fraud Impacting Businesses?
Deepfakes are essentially impersonations, so they are a popular method of CEO fraud using synthetic voices.
Consumers and employees may be targeted by phishing attempts that use deepfakes, and they can fall victim to identity theft, as fraudsters may use deepfake technology to beat KYC onboarding processes that rely on face matching and other biometric verification.
We should also note that criminals and fraudsters keep coming up with new ways to use deepfakes, including extortion, blackmail and industrial espionage. Companies, organizations and private individuals ought to remain vigilant.
Fraudsters’ techniques are ever-improving. Sign-ups with synthetic IDs can wreak havoc on your operations. Can they fool biometric verification?
Sources
- Arxiv: Am I a Real or Fake Celebrity?
- Arxiv: Generative Adversarial Networks
- Deepfakesweb: Online Deepfake Maker
- This Person Doesn’t Exist: Home Page
- EU Observer: ‘Deepfakes’ – a political problem already hitting the EU
- Regmedia: The State of Deepfakes
- This Cat Doesn’t Exist: Home page