
31 Days of AI: Porn and Deepfakes

A recording from Briar Harvey's live video

Hi! I’m Briar. I have cats. And podcasts.

Welcome to 31 Days of AI! This series breaks down the threats no one’s talking about. Not the theoretical risks you see in think pieces. The real, immediate dangers that are already affecting real people—and the systematic protection you can build before you need it.

Most AI education focuses on capability. I focus on understanding first. Because by the time you realize you need these systems, it’s too late to build them.

Every day covers a different threat. Every day includes actionable steps you can take right now. No fear-mongering, no snake oil—just the reality of what’s already happening and what actually works to protect yourself.

Paid subscribers also receive access to a full strategic brief that goes into greater detail about each day’s threat, and the steps you can take to protect yourself.

This series and all of our shows are always free.

Today, we’re talking about how to protect yourself from deepfake images and pornography.

Let’s get started.

The Uncomfortable Reality

Did you know that someone can take a picture of your face and create pornography of you in about fifteen minutes? And it costs less than thirty dollars.

Also, there’s almost nothing you can do about it once it’s out there.

This isn’t theoretical. It’s happening to women in tech, journalists, executives, teachers, and regular people every single day. It’s also happening to students and underage girls, which means we’re dealing with child sexual abuse material. And all it takes is a couple pictures of you.

How Deepfake Pornography Works

Deepfake pornography uses AI to map your face onto already existing pornographic content, creating videos that look real enough to destroy reputations, careers, and relationships.

The process is straightforward: Take photos of your face from social media, any videos you’ve ever recorded, any public appearances you’ve ever made, your yearbook photos if you’re a student. Then AI software maps your face onto the existing video structure, which creates new content that can be distributed literally everywhere. And it looks like you or your children.

The apps themselves are fairly easily accessible if you know where to look. Kids—I am hearing as young as nine years old—are passing these apps around in private Discord servers, links in Snap. And then the videos themselves get published everywhere.

Why This Is Worse Than You Think

You think that it only happens to celebrities. Wrong. It’s actually easier to target ordinary people, because the way the detection algorithms work, celebrity faces are the ones that get flagged.

You think that the technology isn’t good enough to be convincing. I encourage you to go take a gander at any AI video on Facebook and tell me that people won’t believe it’s real.

You think that platforms will remove it. Not only are there almost no legal protections, depending on where you live in the world, but it spreads faster than it can be taken down. And once it’s out there, it has almost certainly been duplicated to dozens of other sites.

The uncomfortable truth is that you’ve already been exposed. It affects women in every position and every line of work. It affects girls of every age. And I don’t think I have to emphasize, in this current climate, what that looks like with predatory behavior on the rise.

You’re exposed every time you post a new video, every time you get a new headshot, every time you share a picture of your children on social media. Every photo of you or your kids from now until the end of time is a threat. And the more visible you are, the more source material an attacker has to choose from.

The Algorithm Problem

By the time you know it exists, it’s probably already been shared thousands of times. And most people find out from their friends, their employers, their coworkers.

And I haven’t even told you the worst part, because the worst part is that the algorithm doesn’t just not protect you, it actively harms you.

The amount of work it took to get this particular video scheduled, particularly on LinkedIn, was staggering. In the end, I had to delete the pre-scheduled events because I couldn’t get it to go through. The algorithm will suppress the word porn. Pornography. You probably won’t see this unless I send it to you.

But my revenge porn? The one some dude in his mom’s basement’s making of me? That’s called Briar’s Cream Pie, and the algorithm is gonna parse that as a fucking recipe. Or it’s Briar’s Golden Shower, and it thinks that’s a party.

What Are the Actual Consequences?

Let’s start short term and then we’ll go long term.

In the short term, you’re looking at professional reputation damage, harassment, employer investigations, family trauma, psychological impact—violated bodily autonomy. All the good stuff.

Long term: These videos will always be there. It’s rule thirty four, right? If it exists, there’s porn of it on the internet. And they will come back to haunt you at the absolute worst times.

Once you’ve been targeted, the likelihood of those videos being used in other situations keeps climbing, because it’s easier to take what already exists and just put a new spin on it. That’s why they’re not generating AI porn from scratch; they don’t have to. There are literally millions of hours of footage of what is most likely a trafficked woman or girl who has been forced to endure absolutely horrific scenarios. And then they’ll just put your face on it.

And typical takedown notices are slow. There are almost no legal protections. Platform policies are inconsistent. And if it’s international, fucking forget it.

What You Can Actually Do

On that happy note, what can you actually do about this?

Audit Your Image Footprint

Google your name plus “images” and see what’s publicly accessible. Check your privacy settings on LinkedIn, Facebook, and Instagram. You can’t control everything, but you can reduce the source material pool by making older photos private and/or removing high-resolution versions.

Set Up Google Alerts

If you set up a Google Alert for your name plus terms like “video” and “photo,” you’re more likely to get results. You can also try this with specific platforms and specific terms, but again, that’s looking for a needle in a haystack. You’re better off combining your name with “photo,” “image,” and “video,” and using a plus sign in the alert query: for example, “Briar Harvey” + video.
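If you’d rather not rely on email, one option is to have the alert deliver to an RSS feed and check it with a small script. Below is a minimal sketch, assuming you chose “Deliver to: RSS feed” when creating the alert and copied its feed URL; the placeholder URL and the feedparser library are illustrative choices, not the only way to do this.

```python
# Minimal sketch: poll Google Alerts RSS feeds for new matches.
# Assumes each alert was set to "Deliver to: RSS feed" and you copied
# its feed URL from https://www.google.com/alerts. The URL below is a
# placeholder; install the library first with: pip install feedparser
import feedparser

ALERT_FEEDS = [
    "https://www.google.com/alerts/feeds/YOUR_FEED_ID/YOUR_ALERT_ID",
]

def check_alerts():
    for url in ALERT_FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            # Each entry is a page Google matched against your alert query,
            # e.g. "Your Name" + video. Review anything you do not recognize.
            print(entry.get("published", "no date"), entry.title, entry.link)

if __name__ == "__main__":
    check_alerts()
```

Run it weekly, or drop it into whatever scheduler you already use, so the needle-in-a-haystack search happens whether or not you remember to check.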

Document Your Legitimate Image Usage

Take screenshots of where you actually appear, with the date visible in the screenshot. If deepfakes surface, you’ll at least have some record of authenticity, and that record is going to help platforms remove images or videos that are deepfakes.
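If you want that record to hold up better than a folder of loose files, a small script can log a hash and a timestamp for each screenshot. Here’s a rough sketch, assuming your dated screenshots live in a local folder; the folder name, file type, and manifest format are placeholders you can change.

```python
# Rough sketch: build a tamper-evident manifest of legitimate image usage.
# Assumes dated screenshots are saved as PNG files in ./image_record;
# adjust the folder and glob pattern to match how you actually save them.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

RECORD_DIR = Path("image_record")
MANIFEST = RECORD_DIR / "manifest.json"

def build_manifest():
    entries = []
    for path in sorted(RECORD_DIR.glob("*.png")):
        # A SHA-256 hash ties each manifest entry to the exact file contents.
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        entries.append({
            "file": path.name,
            "sha256": digest,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })
    MANIFEST.write_text(json.dumps(entries, indent=2))
    return entries

if __name__ == "__main__":
    for item in build_manifest():
        print(item["recorded_at"], item["file"], item["sha256"][:12])
```

Re-run it whenever you add screenshots, and keep a copy of the manifest somewhere other than your own laptop.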

Know Your Takedown Options

Your options are wildly variable depending on where you’re located and what platforms you’re using. They even vary by state: California, New York, even Utah all have different protections for images, photos, and videos, especially if you live in places where a lot of filming happens. You’re more likely to find state and local protections than federal protections. And that’s important to know, because the more resources you have, the more likely you are to be able to combat any of this.

Have an Attorney in Your Rolodex

Either have an attorney or know an attorney that you can call who can help you submit a takedown letter. These aren’t actually that expensive, but you need to have somebody in your Rolodex who can answer that call when it gets bad, because you’re going to want it before it gets bad.

Tell Someone Your Plan

This isn’t about security, it’s about having a person who understands what your concerns are and what steps you want to take. Whether that’s a partner or a business bestie, find someone you can run this stuff past if it’s not your attorney.

Whatever you do, I need you to have awareness of the fact that when it happens—not if, when it happens—these are the steps that you’re going to take, because in the moment you will have been violated. You are not going to be reasonable or rational. So I need you to have a plan ahead of time.

Why I’m Talking About This

The reason that I am spending so much time this month on threat detection and threat assessment is because this is the world that we live in now. And no one seems to be talking about how all of these pieces are intersecting.

This is exactly what I built in the AI Protection Program. We start in January, and we don’t just tackle deepfake risk in isolation. I’m going to help you build the infrastructure to handle all kinds of digital targeting and reputation threats. We’re going to build proactive protection, incident response protocols, and recovery systems.

By the time you need them, it’s going to be too late to build them.

I’m going to be talking about this program in greater detail for the rest of this month. Find more details here.

If you’re not ready for the full intensive, I also have a 2026 workshop series. Every month, we’ll go deep into a specific AI topic or threat and how to deal with it. Learn more about the AI Workshop Series here.

What to Remember

You’re not going to be able to prevent deepfakes. Not for you. Not for your kids.

What you can do is control your image footprint as much as possible and have response systems ready before you need them.

The question isn’t whether you’ll be targeted. The question is whether you’ll have systems ready when it happens.

Paid members: Want the actual implementation steps? Today’s strategic brief breaks down exactly how to protect yourself.
