Click here to prove you exist
"Verifying you're human" — a major annoyance of the Internet era — trains us to doubt ourselves, accept machine arbitration, and live in a constant state of low-grade suspicion and surveillance.
It seems like such a small thing: click a button to ‘verify’ your humanity.
Click! Done. No big deal, right?
Most folks have never stopped to ask fundamental questions about what underpins that very unnecessary and very annoying roadblock — whether it’s ticking a box, sliding a puzzle piece, or entering a cryptic code produced in a drunken electronic stupor. What exactly am I doing? Who am I “verifying” myself to? What does this portend — a future where I must prove my identity, my humanity, not just to access a website but to participate in life?
The makings of that future are already arriving. In Australia, new rules mean that by the end of 2025 you won’t just be asked to prove you’re human — you’ll have to prove your age before using search engines like Google and Microsoft’s Bing. An amendment to the government’s Online Safety Act requires platforms to keep under-16s off social media, and regulators are enforcing it with hefty fines — up to $50 million for non-compliance.
All this means your Google search bar is about to become a checkpoint. To pass, you’ll need to flash your government ID, your credit card, maybe even your face in front of a scanner. It’s the banal beginning of something profound: the internet as a gated space where humanity itself requires constant accreditation.
Australia is just one of many countries rolling out online age-verification laws. The UK, France, Germany, and Italy already require proof of age to access adult or sensitive content. China enforces real-name verification across many online services, with identity checks for all users and strict time limits for minors in gaming. Countries such as Spain, Poland, and the Czech Republic are exploring age-verification laws of their own to align with upcoming EU regulations. Even the US is getting in on the act, with state laws in places like Louisiana, New York, and Tennessee demanding ID or parental consent before young people can log on.
For years, CAPTCHAs — those annoying little puzzles that make you identify blurry buses or crosswalks — have been training us to accept this ritual. We thought they were silly anti-bot tests, but in reality they were normalizing the idea that humanity must be performed for the machine. That flips the natural order: instead of tools serving us, we’re submitting to them. It conditions us to believe our identity is not self-evident but something to be tested and validated by external systems — our rights made conditional.
Each prompt plants suspicion. The system assumes fraud or deception by default, and you have to clear yourself of any doubt. It builds an adversarial relationship with the web: you are never trusted unless you prove otherwise. The presumption of innocence guards against that in real-world law, but there’s a new sheriff in town; there is no divine spark in you when you’re online. Big data is the conquistador and you’re the Aztec; sorry to say it, but we may very well be on the verge of extinction.
That’s because, while we were all busy hunting for pixelated traffic lights, the machines got better at the task than we ever were. Research in 2023 showed that AI now solves CAPTCHAs with up to 96% accuracy, compared with our own shaky 70% on a good day. Which makes you wonder: if the robots are already better at proving they’re “human” than we are, why are we still playing this charade?
The answer is unsettling. These systems don’t just keep bots out anymore — they keep you in line, all while learning how to become more like you. Every click and hesitation, every jitter of your mouse or rhythm of your keystrokes is behavioral data, used to authenticate not just that you are human but which human you are. The machine is watching not just what you do but how you do it. And that small performance — that stutter, that pause — becomes your invisible passport to the digital world.
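To make that concrete, here is a minimal sketch of the general idea behind mouse-dynamics profiling. It is written in Python with made-up event data and invented thresholds; real systems rely on far richer signals and machine-learned models, so treat it as an illustration of the concept rather than anyone’s actual pipeline.

```python
import math

# Toy illustration of "behavioral fingerprinting" from mouse movement.
# Each event is (x_pixels, y_pixels, t_seconds). The features and thresholds
# below are invented for demonstration, not drawn from any real product.

def mouse_features(events):
    """Reduce a stream of (x, y, t) samples to a few behavioral features."""
    speeds = []
    hesitations = 0        # long gaps between move events: the cursor sat still
    heading_changes = 0    # wobble in direction: humans rarely move in straight lines
    prev_angle = None
    for (x0, y0, t0), (x1, y1, t1) in zip(events, events[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        dist = math.hypot(x1 - x0, y1 - y0)
        speeds.append(dist / dt)
        if dt > 0.15:
            hesitations += 1
        angle = math.atan2(y1 - y0, x1 - x0)
        if prev_angle is not None and abs(angle - prev_angle) > 0.5:
            heading_changes += 1
        prev_angle = angle
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    speed_var = sum((s - mean_speed) ** 2 for s in speeds) / len(speeds) if speeds else 0.0
    return {"mean_speed": round(mean_speed, 1), "speed_variance": round(speed_var, 1),
            "hesitations": hesitations, "heading_changes": heading_changes}

# A ruler-straight, perfectly timed path (bot-like) versus a wobbly one with a pause.
bot_path = [(i * 10, 100, i * 0.010) for i in range(50)]
human_path = [(i * 10 + (i * 37) % 5, 100 + (i * 17) % 7,
               i * 0.012 + (0.3 if i >= 25 else 0)) for i in range(50)]

print("bot-ish:  ", mouse_features(bot_path))
print("human-ish:", mouse_features(human_path))
```

On this toy data the straight-line path scores zero variance, zero hesitation, and zero wobble: exactly the kind of too-perfect signature a behavioral detector would be likely to flag.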
Layer that onto the so-called Dead Internet Theory, which claims that bots and AI now generate more content online than real people. Even if the theory overreaches, the reality is clear: bots already account for more than half of internet traffic. A surge in synthetic activity has blurred the lines between authentic and artificial. So who are we performing for? And where does it end?
This week, Google quietly began answering that question. According to reports, the company is now extending its AI-driven age estimation system beyond YouTube and into its flagship search engine. The system doesn’t just ask politely — it profiles. Google’s AI guesses your age by combing through your search history, your watch patterns, your behavior across its platforms. If it has doubts, you’ll get locked out until you hand over documents or biometric proof. According to Reclaim the Net, users in the EU have already spotted prompts in both YouTube and Google Search — suggesting that once you’re flagged, the demand follows you everywhere you go under the Google umbrella.
Today it’s proof of age for minors to access sensitive content. Tomorrow it could be biometric scans of your face, your fingerprints, or your retinas every time you leave the house. “Proof of personhood” is the new buzzword in blockchain circles, where cryptographic keys tether a single human identity to a digital account. But think of the implications: the burden of existence shifts from simply being to ‘being verifiable.’ Your humanity must be confirmed by something outside yourself.
That’s the quiet conditioning at work. We’ve gone from being trusted to being perpetually doubted, from speaking and posting freely to ‘show me your papers’ at every turn. The absurdity of proving you’re a human to robots that already outperform you at pretending to be human is only the start.
And you thought you were just clicking a box.