Invisible Levers: How Privacy Loss Hands Control to Hidden Forces

Like many in the privacy community, we've noticed that friends and family often struggle to understand why we're so focused on protecting privacy. It's not about paranoia. It's about something far more practical: the fundamental right to decide for ourselves how, when, where, with whom, and for what purpose our personal information is used.
In an era where major data breaches routinely expose hundreds of millions (or even billions) of records, the risks are clear and immediate. For example, the 2024 National Public Data incident compromised approximately 2.9 billion records, including full names, dates of birth, Social Security numbers, and addresses. That's the very information bad actors need to gain access to our most important accounts, both professional and personal.
Scandals like Cambridge Analytica (see "Facebook–Cambridge Analytica data scandal" on Wikipedia) also showed how personal data can be weaponized to influence elections and individual behaviors. Because of this, reclaiming control isn't extreme. It's essential.
When we lose grip on our data, invisible levers get pulled by corporations, advertisers, governments, or malicious actors. This shifts power away from us. It can subtly manipulate our choices, erode personal autonomy, and even undermine societal trust.
This isn't conspiracy theory. It's a well-documented reality of modern digital systems, supported by countless real-world examples from widespread identity theft to sophisticated targeted manipulation. Protecting privacy is ultimately about preserving our agency and freedom in a world increasingly shaped by unseen influences.
What Are These Invisible Levers? Understanding the Mechanisms of Hidden Control
To grasp why privacy matters so much, it's helpful to first visualize the "levers" we're talking about. These are the subtle yet powerful ways entities gain influence over our lives through data.
These levers aren't science-fiction gadgets. They are everyday digital tools like algorithms, targeted advertising, and surveillance systems. These tools use our personal information to predict, influence, and sometimes outright steer our behavior. When we share data freely (or have it collected without full consent), we hand over access to these mechanisms.
We often hear the question: "What's the worst that could happen?" Let's make this concrete with simple, real-world analogies.
Would you hand your wallet over to a complete stranger? The worst case might mean they take your cash, withdraw the maximum from an ATM, or max out your credit cards.
How about your house or car keys? The worst could be a stolen vehicle, a burglarized home, or even someone moving in without permission (squatters).
Or your unlocked phone? That could give access to almost everything important in your life: banking, emails, photos, contacts, and more.
Your personal information carries just as much importance. Who holds it (and how they use it) can lead to similar or even greater real-world consequences.
The kinds of information that can be used against us include everyday details we often share without a second thought:
- Full name
- Date of birth
- Home address
- Phone number
- Email address
- Social Security number or national ID
- Driver's license or passport details
- Biometric data (fingerprints, facial recognition templates)
- Location history
- Browsing history
- Purchase history
- Health records
- Financial account details
- Names of financial institutions
- Political or religious affiliations
- Family members and relationships
- Photos and videos
Basically, this is information that's willingly given away when signing up for various accounts. Often there are legal requirements to share additional information, such as images of government-issued ID, due to Know Your Customer (KYC) and Anti-Money Laundering (AML) regulations. However, there's a growing trend toward sharing that information with websites for age or identity verification, since there's no way to prove age without requiring the same verification from every consumer in a community.
This ever-increasing collection of information is becoming a target for bad actors, and when the information isn't properly stored and disposed of, it becomes public information or fodder for the dark web.
An Example: The Tea App Data Breach
Remember the case of Tea Dating Advice, a women-only app launched in 2023? It allowed users to anonymously share dating experiences, review men, and flag potential risks to help others stay safe. In mid-2025, the app exploded in popularity. It briefly became one of the top free apps on Apple's App Store as millions of women joined seeking a safer dating landscape.
Then, in July 2025, disaster struck. Hackers accessed a legacy data storage system and exposed approximately 72,000 user images. This included around 13,000 selfies and photos of government-issued IDs (like driver's licenses) submitted for account verification, plus tens of thousands more from posts, comments, and direct messages. The breach only affected users who had signed up before February 2024.
The company stated that no email addresses or phone numbers were compromised in this incident. Yet the damage was severe. The exposed ID photos contained sensitive details such as full names, addresses, and dates of birth. This information made it straightforward for malicious actors to identify and potentially harass affected women.
Worse still, online trolls extracted location metadata embedded in some photos. They used it to create public maps pinpointing approximate user locations. These maps circulated briefly before being taken down.
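The location data the trolls exploited typically comes from EXIF metadata, which cameras and phones embed in photos automatically. In the EXIF standard, GPS coordinates live in a dedicated sub-directory referenced by a single pointer tag (0x8825, "GPSInfo"). As a hedged sketch (the dictionary shape here is simplified from what real image libraries return), removing that one entry is enough to strip the coordinates while leaving the rest of the metadata intact:

```python
# Sketch: EXIF stores GPS data behind one pointer tag, 0x8825 ("GPSInfo").
GPS_IFD_TAG = 0x8825

def strip_gps(exif_tags: dict) -> dict:
    """Return a copy of an EXIF tag dictionary with the GPS entry removed."""
    return {tag: value for tag, value in exif_tags.items() if tag != GPS_IFD_TAG}

# Example: a simplified tag dictionary standing in for real EXIF data.
tags = {
    0x010F: "PhoneMaker",               # camera make
    0x8825: {1: "N", 2: (40, 44, 54)},  # GPS latitude data (degrees/min/sec)
}
cleaned = strip_gps(tags)  # 0x8825 is gone; other tags survive
```

A platform that stripped this tag on upload (as many large photo hosts do) would have prevented the location maps entirely, regardless of what users forgot to disable on their phones.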
Perhaps most telling was how this breach highlighted broken promises around data handling. The app's privacy policy explicitly stated that verification selfies and ID photos would be "stored only temporarily and deleted immediately after review." Retaining this data for years in an unsecured legacy system, however, directly violated that commitment. It handed unintended control to outsiders when the breach occurred.
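Honoring a "deleted immediately after review" promise is not technically hard. As a minimal sketch (the function name and review callback are hypothetical, not from any real verification system), the deletion can be made unconditional so the file is removed even when the review itself fails:

```python
import os
import tempfile

def review_then_delete(path: str, review) -> bool:
    """Apply the review callback to a verification image, then delete the
    file unconditionally, honoring a 'deleted immediately after review'
    policy even if the review raises an error."""
    try:
        with open(path, "rb") as f:
            return review(f.read())
    finally:
        os.remove(path)  # runs whether the review succeeded or raised

# Usage with a throwaway file standing in for an uploaded ID photo:
fd, photo = tempfile.mkstemp(suffix=".jpg")
with os.fdopen(fd, "wb") as f:
    f.write(b"fake image bytes")

approved = review_then_delete(photo, lambda data: len(data) > 0)
# The file no longer exists once the call returns.
```

The point is that retention for years in a legacy store is a policy failure, not a technical necessity: the deletion step simply was not wired into the review workflow.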
The Tea incident shows exactly how losing grip on personal data (even images we assume are handled responsibly) can pull those invisible levers. It can turn tools meant for safety into sources of vulnerability and havoc.
Another Example: Discord Third-Party Breach
A more recent example is from Discord, the popular chat platform with over 200 million monthly active users. Many people use it for gaming communities, friendships, and everyday conversations. To comply with various local age restrictions and handle account appeals, Discord sometimes requires users to submit photos of government-issued IDs (like driver's licenses or passports) along with a selfie holding the ID and their Discord username.
In October 2025, Discord disclosed a serious security incident. Hackers compromised a third-party customer support vendor. This gave unauthorized access to support tickets from users who had contacted customer service or the Trust & Safety team.
The breach exposed sensitive information from a limited number of users. This included names, email addresses, support messages, partial billing details (like the last four digits of credit cards), and, most alarmingly, images of government-issued IDs. Discord confirmed that approximately 70,000 such ID photos were potentially accessed.
The company emphasized that this was not a direct hack of Discord's own systems. No passwords, full credit card numbers, or private messages were compromised. Yet the exposed ID images contained highly personal details such as full names, dates of birth, addresses, and photos. This made affected users vulnerable to identity theft, doxxing, or targeted harassment.
Attackers attempted to extort Discord with the stolen data. When the company refused to pay, samples were leaked online to pressure them.
This incident also raised questions about data retention practices. Users submitted these highly sensitive images trusting they would be handled securely and deleted after verification. The breach revealed risks in how third-party partners store and protect such information. Once shared, control slips away.
The Discord case illustrates another way invisible levers get pulled. Even when we share data for a legitimate reason (like proving age and identity to regain account access), it can end up in unintended hands through supply chain weaknesses. Tools and processes designed to keep platforms safe or age-gate content can inadvertently create new vulnerabilities, handing power to malicious actors and eroding personal control.
Why "I Have Nothing to Hide" Misses the Point
Many people respond to privacy concerns with a simple phrase: "I have nothing to hide." On the surface, it sounds reasonable. If you are not doing anything wrong, why worry about who sees your information?
The problem is that this argument misunderstands what privacy is really about. Privacy is not just about concealing wrongdoing. It is about maintaining control over your own life and protecting yourself from potential abuse of power.
Think about it this way: You probably close the bathroom door even when you are home alone. You likely draw the curtains at night even though you are doing nothing illegal. These habits are not about hiding crimes. They are about preserving personal boundaries and dignity.
The same principle applies to digital information. Even ordinary, innocent details can be weaponized in the wrong hands. Your location history might reveal visits to a doctor or a support group. Your purchase history could expose health issues or financial struggles. Your search history might show personal interests you prefer to keep private.
Once data leaves your control, you lose the ability to decide the context in which it is viewed. A future employer, insurer, landlord, or even a government agency might interpret it in ways that harm you. Laws and policies change over time. What seems harmless today could become risky tomorrow.
The "nothing to hide" mindset also shifts the burden in the wrong direction. It implies that people must justify their need for privacy. In reality, the opposite should be true. Those who collect, store, and use our data should have to justify why they need it and prove they will handle it responsibly.
Privacy protects everyone, not just those with secrets. It prevents manipulation, discrimination, and overreach. It preserves the freedom to think, explore, and grow without constant judgment or interference.
In short, having nothing to hide does not mean you have nothing to protect. Privacy is a fundamental safeguard for personal autonomy in an increasingly connected world.
Bottom Line
Giving up privacy hands invisible levers to corporations, advertisers, governments, and criminals. Those levers can steer choices, expose vulnerabilities, and erode personal autonomy.
We should approach proposed solutions like mandatory age and identity verification, digital IDs, and central bank digital currencies (CBDCs) with caution and scrutiny. While often presented as tools for safety, convenience, or efficiency, these systems can centralize vast amounts of personal data. They create new risks of surveillance, data breaches, misuse, and overreach.
Real-world examples, such as the temporary debanking during Canada's 2022 Freedom Convoy protests, illustrate how financial access can be restricted in extraordinary circumstances. Broader adoption of linked digital systems could amplify such levers, making it easier to monitor or limit individual activities.
Protecting privacy is not about secrecy. It is about retaining control over our own lives in a world increasingly shaped by hidden forces. Questioning these technologies helps ensure they serve people, not the other way around.
Remember: We may not have anything to hide, but we have everything to protect.
