Transcript

31 Days of AI: Data Centers

A recording from Briar Harvey's live video

Hi! I’m Briar. I have cats. And podcasts.

Welcome to 31 Days of AI! This series breaks down the threats no one’s talking about. Not the theoretical risks you see in think pieces. The real, immediate dangers that are already affecting real people—and the systematic protection you can build before you need it.

Most AI education focuses on capability. I focus on understanding first. Because by the time you realize you need these systems, it’s too late to build them.

Every day covers a different threat. Every day includes actionable steps you can take right now. No fear-mongering, no snake oil—just the reality of what’s already happening and what actually works to protect yourself.

Paid subscribers also receive access to a full strategic brief that goes into greater detail about each day’s threat, and the steps you can take to protect yourself.

This series and all of our shows are always free.

Today, we’re talking about how the cloud is a myth.

Let’s get started.

Day 4: Data Centers and What You Need to Know

Every conversation you’ve ever had with ChatGPT, Claude, or any other AI tool is sitting on a physical server in a physical building somewhere in the world right now.

And that building is not in your house. It’s not in your state. And it might not even be in your country.

When you delete a conversation, it does not disappear. When you think your data is private, you mean private from other users—not from the company, not from the government where that server lives, not from anyone who gains physical access to that facility.

The cloud is a lie. It’s just someone else’s computer, and the people who run that computer can read everything on it.

What Data Centers Actually Are

A data center is a physical facility—a building full of servers, networking equipment, cooling systems, and power infrastructure. When you use any cloud service or AI tool, your data is being stored and processed on servers inside one of these buildings.

These facilities can be massive, the size of multiple football fields, or relatively small regional operations.

Here’s how it works: Your data doesn’t just exist in one place. For redundancy and performance, cloud providers replicate your data across multiple data centers in different geographical regions. That conversation you had with an AI? It might exist on servers in Virginia, Oregon, Ireland, and Singapore simultaneously.

Each data center operates under the laws of the country it’s physically located in. If your data is replicated to a server in the EU, it’s subject to GDPR. If it’s in China, it’s subject to Chinese data localization laws. If it’s in the U.S., it’s subject to Patriot Act provisions that allow warrantless government access.

Why Data Centers Aren’t Fortresses

Data centers have security, but they’re not impenetrable. Employees, contractors, maintenance workers, and government agents with legal authority all have potential physical access to the servers containing your data.

Even with encrypted data, people who run those data centers have administrative access. End-to-end encryption doesn’t protect against insider threats at the infrastructure level.

The Snowden case showed that the NSA was tapping directly into Google and Yahoo fiber optic cables. This wasn’t hacking—it was physical interception at the infrastructure layer. Companies didn’t even know it was happening until documents were leaked. After the revelation, both companies encrypted their inter-data-center traffic, but the pipelines can still be compromised.

More recently, Microsoft admitted that Chinese hackers gained access to their cloud infrastructure through a stolen signing key, giving them access to email accounts across multiple government agencies. This wasn’t a user-level breach. It was an infrastructure breach that compromised everything and everyone using those servers.

Common Misconceptions

You think: Your data is encrypted, so they can’t read it.

Actually: It’s encrypted in transit and at rest, but the provider holds the decryption keys. They can access your data whenever they want or are compelled to by law.

You think: You only use U.S.-based services.

Actually: Even U.S.-based companies replicate data internationally for performance and redundancy. Your data almost certainly exists in multiple countries simultaneously.

You think: Tech companies fight government access requests.

Actually: They fight individual requests publicly while simultaneously operating under broad PRISM authorizations that allow bulk surveillance. The public legal battles coexist with extensive government access programs operating under different legal frameworks.

You think: Data centers are secure from physical attacks.

Actually: Security is good, but nation states, sophisticated criminals, and insider threats can and do gain physical access. Once somebody has physical access to a server, encryption is often meaningless.

The Uncomfortable Part

If you use any cloud services or AI tools, your data is distributed across infrastructure you don’t control, in locations you don’t know, under legal jurisdictions that may not protect you.

This especially affects:

  • People working with sensitive business information who think their corporate cloud is private

  • Healthcare providers using cloud AI tools who believe HIPAA compliance means their patient data is protected at the infrastructure layer (it isn’t)

  • Activists and journalists in countries with authoritarian governments, where data stored in international data centers can be accessed through legal cooperation agreements

  • Anyone with information that could be commercially valuable to the companies running the infrastructure

  • People subject to litigation or investigation, where data can be subpoenaed from cloud providers without you even knowing

You’re Already Exposed

When you agreed to the terms of service for any cloud AI service, you granted the provider broad rights to access, analyze, and use your data.

Read what you agreed to. It’s always: “We can access your data to improve services, comply with legal obligations, and protect our systems.” And that covers nearly everything.

Intelligence agencies don’t need to hack U.S. companies either. They just request the data from the international replica in a jurisdiction with weaker privacy laws. Your data in Ireland is subject to Irish legal protections, which are different from U.S. protections.

Cloud providers use third-party vendors for maintenance, security, monitoring, and operations. Each vendor relationship creates an additional access point to your data. You didn’t consent to those third parties specifically, but they’re built into the infrastructure.

Many AI providers explicitly state that they use your inputs to improve their models. You might be able to opt out, but you have to look for this information. That means your private conversations are being fed into training datasets, analyzed by researchers, and incorporated into the models other people are using.

The Compounding Effect

Every service you use adds another data center relationship, another jurisdiction where your data exists, another set of administrators with access, and another legal framework that might compel disclosure.

If you use Google, Microsoft, Amazon, Anthropic, OpenAI, Apple, and Dropbox? That’s not seven separate security relationships. It’s dozens of data centers across multiple countries, with hundreds of administrators who have technical access to your data, plus contractors with varying levels of access to the infrastructure.

You’ll never know when someone accessed your data at the infrastructure level because there’s no notification when a system administrator views your files or when a government agent serves a legal demand. The access happens invisibly.

What Are the Actual Consequences?

Short Term

If you’re working on anything sensitive—business strategy, competitive intelligence, product development, personal legal matters, health information, political organizing—and you’re using cloud AI tools, you’ve already shared it with all your infrastructure providers.

For businesses, this is espionage risk. Your competition can hire employees from cloud providers who have seen your data. Foreign intelligence services can legally request data stored in their jurisdictions.

For individuals, this is the end of privacy in the traditional sense. Your therapy conversations with AI, your financial planning, your relationship problems, your political views—all of it exists in systems where administrators can read it, where AI researchers can analyze it for training, and where governments can subpoena it.

Long Term

Data sovereignty is becoming a competitive advantage and a human rights issue.

Companies and individuals who keep data on local infrastructure they control will have strategic advantages over those who outsource to the cloud.

Countries with strong data localization laws can protect their citizens’ information from foreign access. The infrastructure layer is becoming a battleground for geopolitical power. Control over data centers means control over information, which means control over economies and populations.

When Data Center Jurisdiction Matters

Every cloud service decision becomes a question of: Which government do I want to have potential access to this information?

U.S. companies replicate to Europe for GDPR compliance, but that means EU governments can access the data. And so on and so on and so on.

There are no safe choices. Only trade-offs between which jurisdictions you trust the least.

What You Can Actually Do

This is where systematic thinking diverges from security theater.

Understand Where Your Data Physically Exists

For every cloud service you use, look up where their data centers are located. Major providers have dozens of regions across multiple continents. This information is going to be hard to find. It’s going to take some work.

Know which jurisdictions have legal access to your information.
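One way to make this concrete is a small audit script. Here’s a minimal Python sketch, assuming an illustrative region-to-country map and a hypothetical service inventory — the service names and region assignments are placeholders for your own findings, not real provider data:

```python
# Sketch: map each cloud service you use to the regions where it may
# replicate data, then derive the set of legal jurisdictions involved.
# The inventory below is a hypothetical example, not a real audit.

REGION_COUNTRY = {
    "us-east-1": "United States",   # Northern Virginia (AWS-style naming)
    "us-west-2": "United States",   # Oregon
    "eu-west-1": "Ireland",
    "ap-southeast-1": "Singapore",
}

# Hypothetical inventory: which regions each of your services touches.
MY_SERVICES = {
    "ai-chat-tool": ["us-east-1", "eu-west-1", "ap-southeast-1"],
    "file-sync": ["us-west-2", "eu-west-1"],
}

def jurisdictions(services, region_country):
    """Return the set of countries whose laws can reach your data."""
    countries = set()
    for regions in services.values():
        for region in regions:
            countries.add(region_country.get(region, "unknown"))
    return countries

print(sorted(jurisdictions(MY_SERVICES, REGION_COUNTRY)))
# -> ['Ireland', 'Singapore', 'United States']
```

Even this toy version makes the point: two services can quietly put you under three governments’ laws.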

Don’t Use Cloud AI for Sensitive Work

Don’t use cloud AI for anything you wouldn’t be comfortable with a data center administrator or the FBI reading.

That means no confidential business information, no personal health details, no private conversations, no financial data. Nothing that can be used against you.

If it’s genuinely sensitive information, it shouldn’t touch infrastructure you don’t physically control.
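If you do send prompts to a cloud tool anyway, one defensive habit is scrubbing obviously sensitive patterns locally, before the text ever leaves your machine. A minimal Python sketch, with illustrative patterns — a real deployment would need patterns matched to your own data, and redaction is harm reduction, not real protection:

```python
import re

# Sketch: replace sensitive-looking substrings with placeholders
# locally, before a prompt is sent anywhere. Patterns are illustrative.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text):
    """Replace each matched pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Email jane.doe@example.com about SSN 123-45-6789."
print(redact(prompt))
# -> Email [EMAIL] about SSN [SSN].
```

The redacted version is what goes to the cloud; the original never does. But remember the larger point: anything the scrubber misses is now on someone else’s computer.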

Look for Local Processing Alternatives

For AI assistance, you can use open-source models like Llama, Mistral, or Stable Diffusion without sending data to external servers. The quality isn’t always going to be the same, and setup requires some tech knowledge and decent hardware.

Many users will find this option impractical, but if your data security matters to you, you can build your own infrastructure.

For most people, the more realistic protection is: Don’t use AI for sensitive work, period.

Consider Data Residency Guarantees

If you must use cloud services, some providers offer data residency guarantees where you can specify that your data only exists in certain jurisdictions. This will cost you more and probably reduce performance, but it limits your legal exposure to specific countries.

For businesses handling EU citizen data, GDPR compliance often requires this anyway.

Assess Your Jurisdictional Risk

Figure out where your data lives in relation to the laws you’re subject to. Understanding which jurisdictions have mutual legal assistance treaties and intelligence-sharing agreements tells you where your data is vulnerable to government access.
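A rough way to operationalize this: check your data locations against known intelligence-sharing alliances. A minimal Python sketch, using the publicly documented Five Eyes membership and a hypothetical location inventory:

```python
# Sketch: flag which of your data locations sit inside the Five Eyes
# intelligence-sharing alliance. Membership is public record; the
# example location list is a hypothetical inventory.

FIVE_EYES = {"United States", "United Kingdom", "Canada",
             "Australia", "New Zealand"}

def sharing_exposure(data_locations):
    """Return the subset of locations inside the Five Eyes alliance."""
    return set(data_locations) & FIVE_EYES

locations = ["United States", "Ireland", "Australia", "Singapore"]
print(sorted(sharing_exposure(locations)))
# -> ['Australia', 'United States']
```

The same shape of check works for any other agreement you care about — swap in a different set of countries.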

Accept What You Can’t Control

If you put it in the cloud, it’s not yours anymore—not in any meaningful sense.

You can delete it from your account, but you can’t delete it from the backups, the training datasets, or the governmental archives where it was copied during surveillance operations.

The only real protection is never generating the data in cloud infrastructure in the first place.

Why We Start with Infrastructure

The real skill isn’t finding the most secure cloud provider. It’s recognizing that the cloud is fundamentally insecure by design because you don’t control the physical infrastructure.

That’s why we start with the infrastructure layer in the AI Protection Program. Every other protection you build—encryption, authentication, privacy settings—is meaningless if the data exists on servers you don’t control in jurisdictions that can compel access.

We’re going to give you frameworks for assessing infrastructure risk across your entire digital life. Not just AI tools, but everything that stores data remotely.

Registration closes December 19th, and the program starts January 5th. Learn more here.

If that’s a little too much for you, the 2026 Workshop Year Pass starts in January with monthly workshops on systematic thinking about AI protection and infrastructure.

What to Remember

The cloud is just somebody else’s computer in somebody else’s building in somebody else’s country, and they can read everything on it.

That’s it. It’s that simple.

If you’re a Network member, the strategic implementation brief will be in the post for today. It covers how to audit where your data actually lives, which jurisdictions you’re exposed to, and when you should keep things local.

You can’t protect your infrastructure if you don’t understand it. You’ve got to start by knowing where your data actually is.
