AI Deepfake Video and Images

A practical guide and simple checklists to safeguard your charity from the risk of AI deepfake video and images

AI deepfake imagery and video are proliferating rapidly and represent a significant risk to charities. Preventing AI deepfakes is very difficult if you post images or video online, but there is a great deal that can be done to mitigate the risk. This guide supports charities in thinking through the risks, the degree of risk they may face and the steps they will take to manage that risk. It is one of three Charity AI safety guides; the other two cover cyber security and misinformation.

This guide is an initial piece of work that will be significantly refined as part of a collaborative exercise to support charities. Constructive criticism to improve it would be very welcome and should be sent to ian@charityexcellence.co.uk.

What are AI Deepfakes?

AI deepfakes are synthetic media, typically videos or audio recordings, that have been manipulated using artificial intelligence (AI) to create realistic but entirely fake representations of individuals. These can make it appear as though someone is saying or doing something they never did. Bad actors use AI deepfakes for various malicious purposes, such as spreading misinformation, discrediting individuals, committing fraud, and even blackmail. For example, they might manipulate a video to make it seem like a public figure said something controversial.

How Much of a Threat Are AI Deepfakes?

Growing Threat. The threat from AI deepfakes in the UK has grown significantly in recent years, with the number of deepfake videos online doubling roughly every six months and the global deepfake AI market growing rapidly. The threat is already serious and is expected to worsen: the UK government has recognised deepfakes as a significant threat to national security and democracy, and there are concerns that deepfakes could be used to influence elections, incite hate, and undermine public trust.

Threat to Charities.  The threat to charities is threefold.

  • Civil Society. We support civil society, and deepfakes represent a significant threat to society itself.
  • Charities. There is a threat to charities themselves:
    • We will be targeted for scams.
    • More widely, we need the public to trust us, both to feel able to use our services and to donate the funding we need. Deepfakes will undermine that trust.
  • Beneficiaries. The often very vulnerable people we exist to support will be targets for both scams and misinformation. Our beneficiaries may be targeted:
    • Simply because they are vulnerable to exploitation, or
    • Because they belong to a marginalised group.
    • There is also the risk of sexual harassment and abuse of women and children.

AI Deepfake Imagery - Preventing Your Images Being Downloaded

Limit the Availability of Your Images

  • Tighten Privacy Settings: On social media platforms, adjust your privacy settings so that only trusted people can see your photos. Avoid making images public if possible.
  • Avoid Public Uploads: Be cautious about posting high-quality, high-resolution images of yourself online. The more public images there are, the easier it is for bad actors to create deepfakes.

Restrict Image Access

  • Limit Public Access: Use access controls or password protection for certain web pages containing personal images. Limiting who can see the images reduces the risk of them being harvested by bots or malicious users.
  • Disable Right-Click Download: Disable the ability to right-click and download images. While this is not foolproof (users can still take screenshots), it adds a layer of difficulty for casual misuse.

Use Expiring or Temporary Content

  • Private or Expiring Content: Platforms like Snapchat or Instagram Stories offer features where images expire after a set time. Reducing the time your images are available lowers the chances of them being harvested for deepfake purposes.
  • Temporary Images: Use features on websites that allow for temporary or expiring content. Certain CMS platforms or plugins allow images to be visible only for a short time, reducing the window in which they can be harvested for misuse.

Making Your Images Less Attractive for AI Deepfakes

A deepfake can be created using just a single image, but deepfake creation works best with several high-quality, high-resolution images of your face, taken from different angles. Anything you do to undermine these factors makes it less likely that an image will be used to create a deepfake.

Use Lower Image Resolution

  • Reduce Quality: Posting lower-resolution images (72 dpi or lower) makes it harder for AI to capture the level of detail needed for realistic deepfakes. This is particularly effective because deepfakes require clear, high-quality facial data.
  • Limit File Sizes: Resize images to smaller dimensions (e.g., 800x600 pixels) to make them less useful for detailed AI processing, as shown in the sketch after this list.
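
If you publish images regularly, downscaling can be scripted rather than done by hand. Below is a minimal sketch using the Pillow library (pip install Pillow); the filenames, target dimensions and JPEG quality are illustrative assumptions, not fixed recommendations.

```python
from PIL import Image

img = Image.open("team_photo.jpg")

# Shrink to fit within 800x600 while preserving aspect ratio.
img.thumbnail((800, 600))

# Re-save with reduced JPEG quality to strip fine facial detail.
img.save("team_photo_web.jpg", quality=60)
```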

Apply Filters or Artistic Effects:  Applying artistic filters, like posterization, noise, or stylised colour effects, alters the way AI interprets the facial features, making it more difficult to use the image for deepfakes. Filters that change the shape or texture of facial data can be particularly useful.
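
As one example of an artistic effect, posterisation can be applied in a few lines with Pillow; the bit depth and filenames below are assumptions to tune by eye so the image remains usable.

```python
from PIL import Image, ImageOps

img = Image.open("portrait.jpg").convert("RGB")

# Posterise to 3 bits per colour channel, flattening the subtle skin
# tones and textures that deepfake models rely on.
stylised = ImageOps.posterize(img, 3)
stylised.save("portrait_posterised.jpg")
```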

Face Obfuscation Techniques

  • Hide Part of Your Face: Use objects (e.g., sunglasses, masks, or hats) to obscure parts of the face. This physical disruption of the facial structure makes it harder for AI to capture and reproduce the face.
  • Blurring or Pixelation: Lightly blurring or pixelating facial areas reduces the detail in the image without making it entirely unrecognisable to viewers. This can subtly protect against deepfakes while retaining some usability on platforms (see the sketch after this list).
  • Masks or Stickers: For informal settings like social media, you can add stickers or masks to cover parts of the face, making it difficult for AI to accurately map features.
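
A simple way to pixelate a face region is to shrink it and scale it back up without smoothing. The sketch below uses Pillow; the bounding box coordinates are hypothetical and would normally come from manual inspection or a face detector.

```python
from PIL import Image

img = Image.open("photo.jpg")

# Hypothetical face bounding box (left, top, right, bottom).
box = (120, 80, 320, 300)
face = img.crop(box)

# Pixelate: shrink the region, then scale it back up with no smoothing.
small = face.resize((16, 16), resample=Image.NEAREST)
pixelated = small.resize(face.size, resample=Image.NEAREST)

img.paste(pixelated, box)
img.save("photo_obscured.jpg")
```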

Use Watermarks

  • Digital Watermarking: Add watermarks to your images, especially those that are posted publicly. Although a watermark won’t completely prevent misuse, it can make it more difficult to use the image convincingly for deepfakes (see the sketch after this list).
  • Invisible Watermarking: Using digital watermarking tools, you can embed an invisible signature in your images that allows you to track where they appear online. This won’t directly stop deepfake creation, but it can help you detect misuse and pursue takedown requests.
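
A basic visible watermark can be overlaid with Pillow, as sketched below; the wording, position and transparency are assumptions. Invisible watermarking requires dedicated tooling and is not shown here.

```python
from PIL import Image, ImageDraw, ImageFont

img = Image.open("photo.jpg").convert("RGBA")
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)

# Semi-transparent text; load_default() avoids depending on a
# system font path, which varies between machines.
font = ImageFont.load_default()
draw.text((10, 10), "(c) Example Charity - not for reuse",
          font=font, fill=(255, 255, 255, 128))

watermarked = Image.alpha_composite(img, overlay)
watermarked.convert("RGB").save("photo_watermarked.jpg")
```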

Overlay Noise Patterns: Adding slight noise or grain to an image makes it harder for AI algorithms to isolate facial features cleanly.
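
The sketch below adds mild Gaussian noise using NumPy and Pillow. The noise level (sigma) is an assumption to tune by eye, and note that plain noise is a weaker defence than dedicated cloaking tools, since determined attackers can partially remove it by denoising.

```python
import numpy as np
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
pixels = np.asarray(img).astype(np.int16)

# Add mild Gaussian noise; sigma=8 is an assumption to tune so the
# image still looks acceptable to human viewers.
noise = np.random.normal(0, 8, pixels.shape)
noisy = np.clip(pixels + noise, 0, 255).astype(np.uint8)

Image.fromarray(noisy).save("photo_noisy.jpg")
```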

Cropped or Partial Images

  • Avoid Full-Face Shots: Posting cropped photos where only part of the face is visible (e.g., a side profile or a portion of the face) limits the amount of data an AI can use to create a deepfake. Side profiles or slightly blurred facial features are less likely to be manipulated (see the sketch after this list).
  • Focus on Non-Facial Features: If appropriate, post images that focus on other aspects (such as clothing, setting, or environment) rather than clear, front-facing shots of people.
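
Cropping can also be scripted. The sketch below keeps only the lower two-thirds of a frame using Pillow; the proportions and filenames are illustrative.

```python
from PIL import Image

img = Image.open("photo.jpg")
width, height = img.size

# Keep the lower two-thirds of the frame, e.g. to show clothing or
# setting without a clear full-face view.
cropped = img.crop((0, height // 3, width, height))
cropped.save("photo_cropped.jpg")
```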

AI Deepfakes – Software

Use software to combat deepfakes, such as watermarking and partially obscuring faces.

  • AI Detection Tools: Some online tools can detect deepfake content, helping you monitor whether your images have been used without your knowledge.
  • Deepfake Blocking Services: Certain platforms are developing AI tools to flag and block deepfakes. Keeping track of these tools, and using platforms that employ them, can help.

AI Distortion Tools

  • Fawkes and Similar Tools: Tools like Fawkes, developed by the University of Chicago, can cloak facial images. These tools make subtle, invisible alterations to the image that confuse facial recognition and deepfake software.
  • Glaze: Like Fawkes, Glaze helps artists protect their images by altering their visual features in a way that is invisible to humans but prevents AI from learning to mimic their work.
  • Image Composition: Using group shots rather than individual images, and more complex backgrounds, may help.

Facial Recognition Blocking:  Some software tools can block or alter the metadata in images, making it harder for AI systems to detect and extract faces from images.

Audio and Video Deepfakes

While imagery deepfakes are a concern, audio and video deepfakes pose a more immediate threat due to their dynamic and engaging nature. Imagery deepfakes can be used to create fake photos or manipulate existing ones, but audio and video deepfakes can more convincingly mimic real people and situations, making them harder to detect and more impactful.

Identifying Audio Deepfakes.

  • Listen for Inconsistencies: Pay attention to unnatural pauses, changes in tone, or background noise that seems out of place.
  • Check the Metadata: Look at the file's metadata for any signs of tampering or inconsistencies (see the sketch after this list).
  • Use AI Detection Tools: Tools like Resemble AI's Detect can analyse audio for signs of deepfake manipulation.
  • Verify with Known Samples: Compare the suspicious audio with known, verified samples of the person's voice.
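
As a starting point for the metadata check, the sketch below reads an audio file's technical info and tags using the mutagen library (pip install mutagen); the filename is an example, and absent or inconsistent tags are only a weak signal, not proof of tampering.

```python
from mutagen import File

audio = File("suspicious_clip.mp3")
if audio is not None:
    print(audio.info.length, "seconds")   # duration
    print(audio.info.bitrate, "bps")      # encoding bitrate
    print(dict(audio.tags or {}))         # embedded tags, if any
```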

Combating Audio Deepfakes.

  • Watermarking: Embed imperceptible watermarks in audio files to track and verify authenticity.
  • Educate and Raise Awareness: Inform the public about the existence and dangers of deepfakes.
  • Legal Action: Report deepfake audio to the authorities and pursue legal action if it is used for malicious purposes.

Identifying Video Deepfakes.

  • Look for Visual Anomalies: Check for unnatural blinking, inconsistent lighting, or odd facial movements.
  • Use AI Detection Tools: Tools like Microsoft's Video Authenticator can analyse videos and provide a likelihood score of manipulation.
  • Reverse Image Search: Use reverse image search engines to find the original source of the video (see the sketch after this list).
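
To run a reverse image search on a video, you first need a still frame. The sketch below grabs the first frame with OpenCV (pip install opencv-python); the filenames are examples.

```python
import cv2

cap = cv2.VideoCapture("suspicious_video.mp4")
ok, frame = cap.read()  # grab the first frame
if ok:
    cv2.imwrite("frame_for_search.jpg", frame)
cap.release()
```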

Combating Video Deepfakes.

  • Digital Watermarks: Embed watermarks in videos to indicate authenticity.
  • Report and Take Down: Report deepfake videos to platform administrators and request takedowns.
  • Legal Action: Legal action is an option if deepfakes are used to harm or defame individuals.

Taking Down and Reporting Deepfakes

Monitor and Remove Unauthorised Content.

  • Image Tracking Tools: Use reverse image search engines (like Google Images) to search for your images online. This helps you track where your images are being used (see the sketch after this list).
  • DMCA Takedown Requests: If your images are used without consent, you can file DMCA (Digital Millennium Copyright Act) takedown requests with the platform where the image is posted. In the UK, you can also request removal under data protection laws.
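
One way to track your own images is perceptual hashing, which survives resizing and recompression. The sketch below uses the imagehash library (pip install imagehash); the filenames and distance threshold are assumptions.

```python
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("our_published_photo.jpg"))
candidate = imagehash.phash(Image.open("downloaded_copy.jpg"))

# Subtraction gives the Hamming distance between the two hashes; a
# small distance suggests the same underlying image, even after
# resizing or recompression. The threshold of 8 is an assumption.
if original - candidate <= 8:
    print("Likely a copy of our image - consider a takedown request.")
```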

Educate and Advocate.

  • Report Abuse: Report any instance of deepfake abuse to the relevant platform immediately.
  • Legal Action: Deepfakes used to harass, defame, or intimidate are illegal in many jurisdictions. In the UK, laws surrounding cyber harassment, defamation, and the sharing of intimate images without consent may apply.

Related Deepfake Security Measures

Be Selective with Connections: Only accept friend requests or follows from people you know and trust.

Metadata Scrubbing - Remove Metadata: Ensure all personal or identifying metadata is removed from images before posting. Some photos may include hidden data like geolocation, device details, or time stamps that could aid in identifying or targeting individuals.
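
A simple way to scrub metadata is to rebuild the image from its pixel data alone, as sketched below with Pillow; the filenames are examples. Check the output afterwards, as this also discards colour profiles.

```python
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")

# Rebuild the image from raw pixel data only, discarding metadata
# such as GPS coordinates, device model, and timestamps.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_clean.jpg")
```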

How to Identify Potential Deepfake Imagery

Unnatural Facial Features:

  • Look for inconsistencies in facial expressions, such as unnatural blinking or lack of emotion.
  • Example: Faces that appear too smooth or have odd lighting effects.

Inconsistent Lighting and Shadows:

  • Check if the lighting and shadows are consistent with the environment.
  • Example: Shadows that don’t match the light source or appear in the wrong direction.

Blurry or Distorted Areas:

  • Be wary of images with isolated blurry spots or double edges around faces.
  • Example: Blurry patches around the mouth or eyes.

Unnatural Movements:

  • In videos, look for unnatural movements or synchronisation issues between audio and lip movements.
  • Example: Speech that doesn’t match lip movements or robotic tones in the audio.

Odd Backgrounds:

  • Check for inconsistencies in the background, such as changes in video quality or mismatched elements.
  • Example: Backgrounds that appear static while the subject moves.

Lack of Detail:

  • Deepfakes often lack fine details like frizzy hair or subtle skin textures.
  • Example: Hair that looks too perfect or skin that appears overly smooth.

Watermarks and Artifacts:

  • Look for watermarks or digital artifacts that indicate manipulation.
  • Example: Unusual marks or pixelation around the edges of objects.

Social Media Platforms

Here are some things to look out for that indicate a social media platform may be lower risk in terms of deepfakes.

  • Clear Policies and Reporting Mechanisms: Platforms with explicit policies against deepfakes and easy-to-use reporting mechanisms are more likely to take swift action.
    • Look for platforms that provide clear guidelines on what constitutes a deepfake and how users can report them.
  • Detection Technology: Platforms that invest in AI and machine learning technologies to detect deepfakes are better equipped to identify and remove such content quickly.
    • Check if the platform mentions the use of such technologies in their safety or content moderation policies.
  • Transparency and Accountability: Platforms that are transparent about their content moderation processes and provide regular updates on their efforts to combat deepfakes are more trustworthy.
    • Look for platforms that publish transparency reports detailing the number of deepfakes detected and removed.
    • Avoid platforms that host explicit content and/or are known for being used by bad actors who create and share deepfakes.
  • Collaboration with Experts: Platforms that collaborate with external experts, researchers, and organisations to improve their detection and response to deepfakes are more proactive.
    • Check if the platform mentions partnerships with academic institutions or cybersecurity firms.
  • User Education and Awareness: Platforms that prioritise educating users about the risks of deepfakes and how to spot them are more likely to foster a safer environment.
    • Look for platforms that provide resources and tips on identifying deepfakes.
  • Rapid Response Time: Platforms that have a track record of quickly responding to reports of deepfakes and removing such content are more reliable.
    • Check user reviews and feedback to gauge the platform's responsiveness.
  • Other Safety Features: Platforms with ephemeral imagery, where photos and videos disappear after being viewed, are safer. Some platforms (e.g. Discord) offer private and invite-only spaces, giving you greater control over who accesses your images.

Other risk indicators – allowing anonymous users, public profiles being easily accessible, images being easy to download, and imagery being made public by default.

This Guide to AI Deepfakes Is Not Professional Advice

This article is for general interest only and does not constitute professional legal or financial advice. I'm neither a lawyer nor an accountant, and am not a tech security expert either, so I am not able to provide this, and I cannot write guidance that covers every charity or eventuality. I have included links to relevant regulatory guidance, which you must check to ensure that whatever you create correctly reflects your charity’s needs and your obligations. In using this resource, you accept that I have no responsibility whatsoever for any harm, loss or other detriment that may arise from your use of my work.

If you need professional advice, you must seek this from someone else. To do so, register, then log in and use the Help Finder directory to find pro bono support. Everything is free.

Ethics note: AI was partially used in researching this guide.
