Deepfakes Explained: What You Need to Know

AI Audio Deepfake (Photo: Alamy)

The term “deepfakes” describes synthetic audiovisual material generated with artificial intelligence, in which one person is made to look, speak, or act like another. The underlying AI methods are not especially new, but they are now drawing considerable attention, largely because of how easily they can be misused to spread false information.

Recognising and identifying deepfakes can help safeguard personal data and protect a company’s public image and financial well-being. For these reasons, businesses ought to familiarise themselves with this technology and understand the dangers it carries. This article explains the AI technologies behind deepfakes and provides useful pointers on how they can be detected.

When faces lie and voices deceive, deepfakes blur the truth (Photo: Alamy)

Impact of Deepfakes on Online Content

Just as fake news distorts public discourse, deepfakes can contaminate the internet with misleading information and provocative visuals. Many people take video at face value, especially while public understanding of deepfake techniques remains limited.

The mental strain that comes with having to constantly judge what is true or false online may contribute to what some researchers call “reality apathy”, where people lose the motivation to care about distinguishing fact from fiction. This type of general indifference to information integrity presents serious societal challenges.

Researchers have long studied ways to highlight fake news, track propaganda, and develop automatic fact-checking systems. Addressing the issue of deepfakes, though, requires deeper and more specific interventions. While researchers work towards this, those producing deepfakes are already keeping pace with improvements in detection.

Where Deepfakes First Emerged

The internet began noticing deepfakes more widely around 2017, particularly following a Reddit post that shared non-consensual, fake, explicit clips involving celebrities. Although the post was removed, it had already gained widespread traction across many platforms.

That same year, a video featuring former United States president Barack Obama was shared by the University of Washington as a warning about what this technology could be used to achieve.

In 2018, filmmaker Jordan Peele used a similar approach with Obama’s likeness, issuing a warning about the danger of fabricated speech. To date, the primary ways this technology has been misused are political manipulation and non-consensual explicit content, with financial fraud close behind.

Positive Uses of Deepfake Tools

There is, however, a more constructive side to deepfake innovation. Because the source code for these AI tools is freely accessible, hobbyists and content creators have started to experiment with them creatively. Online, one might see comedic mashups like Jim Carrey playing Jack from “The Shining” or Snoop Dogg delivering a tarot reading.

Industries such as film, education, healthcare, fashion, and online retail are also investigating how to apply this tech in meaningful ways. A creative example comes from the Dalí Museum, which marked the surrealist artist’s 115th birthday by presenting a virtual version of Salvador Dalí using deepfake techniques.

This project, named “Dalí Lives”, was carried out in partnership with an advertising agency and received several awards for innovation and design. Companies like VocaliD also contribute by offering people who use speech synthesis technology more distinct and personalised voices.

Cultural Campaigns Featuring Deepfakes

In 2019, the global health charity Malaria No More teamed up with video software provider Synthesia to enable David Beckham to deliver a campaign message in nine different languages using deepfake-generated visuals.

Another prominent example came in 2022 with Kendrick Lamar’s music video “The Heart Part 5”, in which he morphed into the faces of several prominent figures such as Kobe Bryant and Kanye West. The technical achievement of the video was widely appreciated and highlighted what the technology can now do when used effectively and artistically.

Technological Concepts Behind Deepfakes

Generative neural networks power deepfakes, particularly through autoencoders and generative adversarial networks (GANs). These techniques serve two major functions: switching faces and replicating facial movements.

Face reenactment adjusts facial expressions, gaze, or head movement within existing videos or images, while face swapping replaces one person’s face with another, keeping visual realism intact.

How Face Swapping Works

The mechanism of face swapping leans heavily on autoencoders. These models are composed of two main sections – an encoder and a decoder – with the space in between known as the latent space. The encoder extracts compressed information from a face image, and the decoder rebuilds the image from this information.

For effective face swapping, a single encoder is typically shared across both identities, while two decoders are trained – one for the source face and one for the target. Once training is complete, a frame of one person is passed through the shared encoder and reconstructed with the other person’s decoder. In essence, the latent code carries the expression and pose from one face, while the decoder supplies the other person’s identity, frame by frame across the video.
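The data flow of this shared-encoder, two-decoder setup can be sketched in miniature. Everything below is a deliberately toy illustration: real systems use deep convolutional networks trained on thousands of face images, whereas here a “face” is just a list of numbers and the encoder and decoders are hand-written functions, so only the swap mechanism itself is shown.

```python
def encoder(face):
    """Shared encoder: compress a 'face' into a smaller latent vector
    (here: pairwise means stand in for learned compression)."""
    return [(face[i] + face[i + 1]) / 2 for i in range(0, len(face), 2)]

def make_decoder(identity_offset):
    """Each identity gets its own decoder that rebuilds a face from
    the latent code, stamping in that identity's features."""
    def decoder(latent):
        out = []
        for v in latent:
            out.extend([v + identity_offset, v - identity_offset])
        return out
    return decoder

decoder_a = make_decoder(identity_offset=0.0)   # reconstructs person A
decoder_b = make_decoder(identity_offset=1.0)   # reconstructs person B

frame_of_a = [4.0, 6.0, 10.0, 10.0]             # a frame showing person A

# The swap: encode A's frame, decode with B's decoder. The latent code
# carries A's expression and pose; the decoder supplies B's identity.
swapped = decoder_b(encoder(frame_of_a))
print(swapped)  # [6.0, 4.0, 11.0, 9.0]
```

The key design point survives the simplification: because both identities pass through the same encoder, the latent space ends up representing what the faces have in common (expression, pose), and identity is re-introduced only at decoding time.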

Understanding Face Reenactment

Face reenactment begins by reconstructing a three-dimensional model of the face from a single image using a method called monocular face reconstruction. This technique allows for changing features like head position or facial expression while retaining individual characteristics and lighting effects.

Synthetic visuals are then generated with the new expressions and movements. A specialised video rendering network is finally applied to produce a smooth video with realistic transitions from these images.
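The three-stage pipeline just described – reconstruct, edit parameters, render – can be outlined as a skeleton. Every function here is a hypothetical stand-in: real systems fit a 3D morphable face model and use a neural rendering network, not dictionaries and strings.

```python
def reconstruct_3d(image):
    """Monocular face reconstruction: estimate identity, expression,
    pose and lighting parameters from a single image (stubbed)."""
    return {"identity": image["person"], "expression": "neutral",
            "pose": 0.0, "lighting": image.get("lighting", "indoor")}

def reenact(model, new_expression, new_pose):
    """Overwrite only the controllable parameters; identity and
    lighting recovered from the source image are preserved."""
    edited = dict(model)
    edited["expression"] = new_expression
    edited["pose"] = new_pose
    return edited

def render_video(models):
    """Stand-in for the video rendering network that turns edited
    parameters into photorealistic, temporally smooth frames."""
    return [f"frame({m['identity']}, {m['expression']}, pose={m['pose']})"
            for m in models]

source = {"person": "alice", "lighting": "indoor"}
model = reconstruct_3d(source)
frames = render_video([reenact(model, "smile", p / 10) for p in range(3)])
print(frames[0])  # frame(alice, smile, pose=0.0)
```

The separation of stages matters: because expression and pose are edited as explicit parameters rather than pixels, identity and lighting carry over untouched, which is what makes the result look consistent.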

Current Approaches to Detecting Deepfakes

Current deepfake detection techniques rely on artificial intelligence and detailed image analysis, and they fall into five broad categories. The first group uses convolutional neural networks to examine individual frames.

The second group exploits temporal inconsistencies, analysing patterns across consecutive frames with recurrent neural networks. The third looks for unnatural edges and artefacts introduced during editing. A fourth compares the image’s facial regions with background patterns to detect discrepancies tied to the original recording device.

Finally, some techniques look at facial biological signals, such as blinking patterns or changes in skin tone, which are difficult for synthetic video generators to mimic.
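The biological-signal idea can be made concrete with blinking: people blink every few seconds, while early deepfake generators often produced faces that blinked rarely or never. The sketch below is a toy heuristic, not a production detector – the per-frame eye-openness scores would come from a facial-landmark detector (e.g. an eye aspect ratio) but are simulated here, and the thresholds are illustrative, not calibrated.

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count open-to-closed transitions across per-frame eye scores."""
    blinks, closed = 0, False
    for value in eye_openness:
        if value < closed_threshold and not closed:
            blinks += 1
            closed = True
        elif value >= closed_threshold:
            closed = False
    return blinks

def looks_synthetic(eye_openness, fps=30, min_blinks_per_minute=2):
    """Flag clips whose blink rate is implausibly low for a human."""
    minutes = len(eye_openness) / (fps * 60)
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# 60 seconds of simulated frames: eyes open (0.3) except brief blinks.
real = [0.3] * 1800
real[300] = real[301] = 0.05   # blink 1 (two closed frames, one blink)
real[1200] = 0.05              # blink 2
fake = [0.3] * 1800            # never blinks

print(looks_synthetic(real), looks_synthetic(fake))  # False True
```

As the article notes, a heuristic this specific is fragile: once a cue like blink rate is publicised, generators are retrained to reproduce it, which is why research favours signals that are harder to patch.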

However, once a flaw is exposed and publicised, developers often modify their tools to fix it. Because of this, researchers suggest focusing on detection techniques that do not rely on easily patched weaknesses.

Steps to Take While Detection Remains Limited

Since current detection tools are not yet fully reliable or integrated into most media platforms, it helps to know what to watch for when assessing video content. Pay close attention to the face in the video. Irregular blinking, unnatural alignment of facial parts, mismatched skin tone, or inconsistencies in scars and facial hair might indicate tampering.

AI-crafted illusions are changing the game of trust (Photo: Shutterstock)

Watch for odd movements in limbs or objects in the background. Shadows and lighting that look unrealistic can also be clues. If the language used does not match the usual style or vocabulary of the person being shown, that may also raise concern.

Collective Awareness and Responsible Action

Being cautious on a personal level is important, but institutions such as government bodies, corporate organisations, media outlets, and schools must also prioritise awareness and education about this issue.

There should be clear guidelines to restrict the misuse of this technology, especially when it is employed for misleading political, financial, or anti-social purposes.

How Accedia Supports Protection Against Deepfakes

Accedia offers expert cybersecurity services to address the growing challenges created by deepfakes. Their specialists conduct thorough vulnerability assessments, helping their clients secure applications and uphold professional standards.

By drawing on advanced technical knowledge, the firm contributes meaningfully to efforts aimed at reducing the harm posed by manipulated digital content.
