
Are Deepfakes Really a Security Threat? - Member Recap from (ISC)² Security Congress 2022

Nov 21, 2022

A member recap of Dr. Thomas Scanlon's session at (ISC)² Security Congress 2022 by Angus Chen, CISSP, CCSP, MBA, PMP.

Dr. Scanlon started his talk by showing images of women and posing a question to the audience: Can you spot the fake person?

To my surprise, none of them is a real person! These images were generated by an AI algorithm called a generative adversarial network (GAN); source: https://thispersondoesnotexist.com. In my opinion, it is a little creepy. Several websites today use data-driven unconditional generative image modeling to create deepfake images, such as https://thisxdoesnotexist.com
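To give a sense of the mechanics, here is a minimal, illustrative sketch of GAN training in PyTorch: a generator learns to produce images that a discriminator cannot distinguish from real ones. The toy layer sizes are my own assumptions; real face generators such as StyleGAN, which powers thispersondoesnotexist.com, are vastly larger and more elaborate.

```python
# Minimal sketch of GAN adversarial training (illustrative only; real face
# generators are far larger and trained on huge face datasets).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # toy sizes, not real face resolution

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),          # fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                           # real-vs-fake logit
)

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    fake = generator(torch.randn(n, latent_dim))

    # Discriminator: label real images 1, generated images 0.
    d_loss = loss(discriminator(real_batch), torch.ones(n, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(n, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its fakes "real".
    g_loss = loss(discriminator(fake), torch.ones(n, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the two networks compete, the generator's outputs gradually become hard to tell apart from real photographs, which is exactly what makes the faces above so convincing.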

According to CISA, a deepfake falls under the umbrella of misinformation, disinformation, and malinformation (MDM).

  • Misinformation is false, but not created or shared with the intention of causing harm, e.g., "Betsy Ross sewed the first American flag."
  • Disinformation is deliberately created to mislead, harm, or manipulate a person, social group, organization, or country, e.g., Operation INFEKTION.
  • Malinformation is based on fact, but used out of context to mislead, harm, or manipulate, e.g., "80% of dentists recommend Colgate."

Disinformation and malinformation are often shared as misinformation.

In 2017, a Reddit user claimed to have created the first deepfake. Today, machine learning (ML), a branch of AI, has made deepfake content far easier to create. A deepfake can be audio, video, an image, or multimodal content that has been deceptively modified using deep neural networks to alter a person's identity. A deepfake is not the same as a Photoshop edit, however. Deepfakes are considered disinformation, or they are combined with disinformation; an example would be a LinkedIn profile using a deepfake image.

Image source: https://semiengineering.com/deep-learning-spreads/

Most deepfakes fall into a few categories: face swaps, lip syncing, puppeteering, and fully synthetic media. They are created using autoencoders, GANs, or a combination of both. The creation process is as follows: Extraction (data collection) -> Training -> Conversion/Generation. Training takes thousands of images, which can also be extracted from the individual frames of a few video clips. During generation, a reenactment is used to drive the expression, mouth, gaze, and pose or body. A sketch of the autoencoder face-swap idea follows.
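As a rough illustration of that Extraction -> Training -> Conversion pipeline, here is a minimal sketch of the shared-encoder/two-decoder design used by autoencoder-based face-swap tools: each decoder learns to reconstruct one person's face, and conversion runs person A's frames through person B's decoder. The layer sizes are toy assumptions; real tools add face alignment, masking, and far larger networks.

```python
# Minimal sketch of the shared-encoder/two-decoder face-swap idea
# (illustrative; not any specific tool's implementation).
import torch
import torch.nn as nn

img_dim, code_dim = 64 * 64 * 3, 128  # toy flattened-image sizes

encoder = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(),
                        nn.Linear(512, code_dim))

def make_decoder() -> nn.Module:
    return nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                         nn.Linear(512, img_dim), nn.Sigmoid())

decoder_a, decoder_b = make_decoder(), make_decoder()  # one per identity
mse = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters())
    + list(decoder_b.parameters()), lr=1e-4)

def train_step(faces_a: torch.Tensor, faces_b: torch.Tensor) -> None:
    # Training: each decoder reconstructs its own person's faces from
    # the shared latent code.
    loss = mse(decoder_a(encoder(faces_a)), faces_a) + \
           mse(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()

def convert(face_a: torch.Tensor) -> torch.Tensor:
    # Conversion: encode person A's expression/pose, then decode with
    # person B's decoder to render B making the same expression.
    return decoder_b(encoder(face_a))
```

Because the encoder is shared, it learns pose and expression common to both identities, while each decoder learns one person's appearance; that split is what makes the swap possible.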

Video deepfakes can be used for entertainment, for example when President Obama was depicted name-calling President Trump in a YouTube video created by BuzzFeed.

Dr. Scanlon points out that we can often identify a deepfake simply with intuition and an eye test. My intuition tells me it is out of character for a president to say these things, certainly not in a formal recorded session.

When Dr. Scanlon shows us the deepfake-generated images of women a second time, I notice an unnatural gaze or stare in the images.

Here are some practical cues:

  • Flickering
  • Unnatural movements and expressions
  • Lack of blinking
  • Unnatural hair and skin colors
  • Awkward head positions
  • Appears to be lip-syncing
  • Oversmoothed faces
  • Double eyebrows; eyebrows raised at the wrong time; one eyebrow raised like The Rock
  • Glare/lack of glare on glasses
  • Moles: do they look realistic, and is their placement plausible?
  • Earrings – wearing only one or mismatched

As deepfakes become pervasive, security concerns increase, and there are several efforts in the public and private sectors to fight them. The Defense Advanced Research Projects Agency (DARPA) is working on Semantic Forensics (SemaFor) and Media Forensics (MediFor). Social media companies like Facebook use their own detection capabilities or a centralized agency to detect deepfakes. There are also detection tools such as Microsoft's Video Authenticator, Facebook's reverse-engineering research, and Quantum Integrity.

Here are a few programmatic ways to detect deepfakes:

  • Blending (spatial)
  • Environmental (spatial): lighting – background/foreground differences
  • Physiological (temporal): generated content lacks a pulse and breathing and shows irregular eye-blinking patterns (see the blink-detection sketch after this list)
  • Synchronization (temporal): mouth shapes vs. speech; failure to close the mouth on "B," "P," and "M" sounds
  • Coherence (temporal): flickering; predicting the next frame
  • Forensic (spatial): GANs leave unique fingerprints; camera Photo-Response Non-Uniformity (PRNU)
  • Behavioral (temporal): video vs. audio emotions; target mannerisms (requires more data)
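To make the physiological cue concrete, here is a minimal sketch of eye-aspect-ratio (EAR) blink counting. The per-frame eye landmarks are assumed to come from a face-landmark detector such as dlib or MediaPipe (not shown), and the 0.2 threshold is a conventional starting value from the blink-detection literature, not a figure from Dr. Scanlon's talk.

```python
# Minimal sketch of a physiological deepfake check: eye-aspect-ratio (EAR)
# blink counting over a video clip.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, in the usual dlib order."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)          # drops sharply when the eye closes

def count_blinks(ear_per_frame: list[float],
                 threshold: float = 0.2, min_frames: int = 2) -> int:
    """Count blinks as runs of consecutive frames with EAR below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # a blink at the very end of the clip
        blinks += 1
    return blinks
```

A clip of a real person should show blinks at a plausible rate (roughly 15 to 20 per minute), whereas early deepfakes often blinked rarely or irregularly.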

Organizations can consider several methods to prevent these security threats:

  • Understand the current capabilities for creation and detection
  • Know what can be done realistically and learn to recognize indicators
  • Be aware of practical ways to defeat current deepfake capabilities – “turn your head”
  • Create a training and awareness campaign for your organization
  • Review business workflows for places deepfakes could be leveraged
  • Craft policies about what can be done through voice or video instructions
  • Establish out-of-band verification processes (a sketch follows this list)
  • Watermark media – literally and figuratively
  • Be ready to combat MDM of all flavors
  • Eventually use deepfake detection tools
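As one illustration of out-of-band verification, here is a hypothetical sketch: before acting on a voice or video instruction, confirm it over a second, pre-registered channel. The helpers send_sms and prompt_user are stand-ins for whatever messaging and UI an organization actually uses, not a real API.

```python
# Hypothetical out-of-band verification sketch: confirm a voice/video
# request through a second, pre-registered channel before acting on it.
import secrets

def verify_out_of_band(requester_id: str, phone_on_file: str,
                       send_sms, prompt_user) -> bool:
    """send_sms and prompt_user are assumed callables, not a real API."""
    code = f"{secrets.randbelow(10**6):06d}"  # one-time six-digit code
    send_sms(phone_on_file, f"Verification code for {requester_id}: {code}")
    # The requester must read back the code received on the trusted channel;
    # a deepfaked caller without the registered phone cannot do this.
    return prompt_user("Enter the code you received: ").strip() == code
```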

During the Q&A, audience members asked Dr. Scanlon for tips on identifying deepfakes. Although deepfake research is working toward three-dimensional space, it still requires a lot of non-AI pre- and post-processing. Current deepfake tools such as Faceswap and DeepFaceLab take considerable time and graphics processing unit (GPU) resources to create even a low-quality deepfake. Virtual meeting participants can easily spot imperfections by asking others to "turn your head." Dr. Scanlon predicts the pre- and post-processing hurdles could be overcome within five years.

(ISC)² Security Congress attendees can earn CPE credits by watching Are Deepfakes Really a Security Threat? and all other sessions from the event on-demand.

Interested in discovering more about AI? (ISC)² members can take the Professional Development Course Introduction to Artificial Intelligence (AI) for FREE (U.S. $80 for non-members).