AI — Haters vs Believers

A war between AI haters and AI believers. A quieter, more dangerous third group nobody's writing about: the silent passive-aggressive ones. The doomer/accelerationist schism. The Taoist farmer's reply. And why both tribes are losing to the exact same trick.

In the last year of doing this work — building agentic systems, writing AI-native code, watching teams adopt or refuse to adopt — I've identified a new species of human.

Field name: Homo Anti-AI. Habitat: any conversation where AI is mentioned. Diet: outrage. Distinguishing markers: a vein on the forehead that becomes visible the moment you say the word "agent."

This piece is a field guide. Loud ones at one extreme, silent passive-aggressive ones at the other — and the silent ones are worse. With actual 2026 sentiment data, the doomer-vs-accelerationist civil war happening inside the believer camp, and a 2,165-year-old Chinese parable that destroys both tribes simultaneously.

Then the punchline: it's a manufactured fight. We've been here before.

The hater spectrum

There are two ends and a long middle. Most attention goes to the loud end. The silent end is where the actual damage compounds.

The loud ones

You find them in subreddits like r/antiai and r/ArtistHate, and in the various "100% Human" Discord servers. Their content is exactly what you'd expect: AI is theft, AI is slop, AI will eat your kids, anyone using AI is morally compromised, the apocalypse is two product launches away.

The loud-hater wing produced real-world events in 2026 that go well beyond memes. London saw hundreds march past the offices of OpenAI, DeepMind, and Meta in February. The Pro-Human AI Declaration in March 2026 united MAGA Republicans, democratic socialists, labor activists, and church leaders — possibly the first coalition in American politics that includes all four of those groups. After OpenAI's Pentagon deal, ChatGPT uninstalls surged 295% (per MIT Technology Review). And in April, a man carrying "anti-AI diatribe" materials allegedly threw a Molotov cocktail at Sam Altman's house.

Loud haters are legible. You see them, you can debate them, you can avoid them. The volume is a feature for the rest of us — it tells you who they are.

The silent passive-aggressive ones

This is the dangerous tier. They never say they hate AI. They just:

• Roll their eyes when you mention you used Claude or ChatGPT for a draft.

• Send you an email at 4× the length you would have written, unprompted, just to demonstrate that they wrote every word themselves.

• Refuse to use AI tools at work, then complain that AI users "don't really think."

• Mark your AI-assisted work as "low effort" without specifying which part.

• Catch you using AI on a task and let the silence sit for a beat too long.

• Refer to people who use AI as "those people" in meetings.

• Develop a sudden interest in "craftsmanship" right around 2024.

The silent ones are worse because the conversation never happens. There's no debate, no exchange, no chance to surface assumptions. Just a low-grade institutional drag — the meeting you're never invited to, the project that quietly excludes you, the colleague who stops sending you Slack messages because your use of a tool they've refused reads, to them, as a personal slight.

If you've been on the receiving end of this, you know exactly the energy. It's the energy of being the only person in the room who switched to Git in 2010 while the senior engineers were still using Subversion. They don't say anything. They just... cool toward you.

The middle

Most of humanity. Skeptical, curious, cautious, mildly anxious. Watching. The people who'll join whichever tribe wins the narrative. We come back to them at the end.

The 2026 numbers — both sides have a point

Stop and look at the actual data, because the loudest people on either side are running on vibes:

| Signal | 2026 Number | Source |
| --- | --- | --- |
| Americans more excited than concerned about AI | 10% | Pew Research Center |
| Americans who say AI does more harm than good | 31% (down 9 pts from 2025's 40%) | Gallup |
| Gen Z who feel angry about AI | 31% (up from 22% in 2025) | Gallup, March 2026 |
| Gen Z who feel excited about AI | 22% (down from 36% in 2025) | Gallup, March 2026 |
| Gen Z using generative AI weekly or daily | 51% (unchanged from 2025) | Gallup, March 2026 |
| AI experts who believe AI will benefit the US economy | 69% | Stanford AI Index 2026 |
| General public who believe AI will benefit the US economy | 21% | Stanford AI Index 2026 |
| US labor productivity growth lift Anthropic estimates from AI | +1.2 to +1.8 pts/yr (~2× trend) | Anthropic Economic Index, 2026 |
| Companies actively using AI reporting no measurable productivity impact | 80% | NBER, Feb 2026 |

Read those numbers carefully. The headline is not "AI is winning" or "AI is losing." The headline is the gap. Experts at 69%, public at 21%. That's a 48-point chasm of perception sitting on top of the same underlying technology.

Two camps look at the same evidence and form opposite stories. That's the actual phenomenon. Not whether AI is good or bad — but the fact that it has become a group identity marker, like a sports team or a political party.

Even the believers can't agree — the doomer/accel schism

The believer camp is fractured worse than the haters. Two main tribes:

• Doomers — Eliezer Yudkowsky, MIRI, the LessWrong rationalist core. Position: building superintelligent AI is an extinction-level risk, current alignment research is wildly behind capability research, and the responsible move is to slow down or stop. Yudkowsky and Nate Soares published If Anyone Builds It, Everyone Dies in 2025. The title is the thesis. Subtle they are not.

• Accelerationists (e/acc) — Marc Andreessen wing, Beff Jezos, the SF effective accelerationism subculture. Position: technology is the engine of human flourishing, slowing AI is a moral failure, the long-term EV is positive, decel is the new Luddism.

Here's the part that's funny if you zoom out: the doomers and the accelerationists started in the same room. MIRI was originally called the Singularity Institute for Artificial Intelligence, and its founding mission was — verbatim — to accelerate AI development. Yudkowsky was an accelerationist before he was a doomer. They share the premise (AI is the most important technology in human history). They differ only on the conclusion.

Same data, opposite stories. Like the haters and the believers writ small. It's schism all the way down.

The 139 BCE response to all of this

There's an old Taoist parable, recorded in the Huainanzi around 139 BCE, called Sài Wēng Shī Mǎ — "the old man at the frontier loses his horse." Short version, in my own words:

An old farmer near the border lost his only horse. The neighbors came over to express sympathy. The farmer shrugged and said, "Maybe."

A few days later the horse came back, leading a pack of wild horses with it. Suddenly the farmer was rich in horses. The neighbors came over to congratulate him. The farmer shrugged and said, "Maybe."

His son tried to ride one of the new horses, fell, broke his leg. The neighbors came over to express sympathy. "Maybe," said the farmer.

The army came through conscripting young men for war. The son, with his broken leg, was passed over. The neighbors came over to congratulate the farmer.

You see where this is going. The story keeps going. The point is: every event the neighbors think is good has a bad-shaped consequence inside it. Every event the neighbors think is bad has a good-shaped one. The farmer doesn't know in advance which is which. Neither do the neighbors. Neither does anyone. The only honest response is Maybe.

This is what AI haters and AI believers both miss. They have already decided. The data isn't in. The shape of this decade isn't visible yet. Productivity gains land, jobs disappear, jobs reappear in shapes nobody predicted, mental health worsens in some metrics and improves in others, the bills go up, the medicine improves, the disinformation gets worse, the access to expertise gets better. All of these are happening at once. There is no version of the story where any single position is fully correct.

The farmer's posture isn't denial. It isn't centrism. It's not "AI might be good or bad, who knows!" — that's the centrist mush version. The actual posture is: the consequences haven't finished happening yet, and the right move is to stay loose, stay engaged, and update.

The Hegelian trap (this is the part most people miss)

Whenever a society fragments into two visibly hostile camps over a single technology, ask who benefits from the fragmentation. Not who benefits from one camp winning — who benefits from the camps existing in opposition.

Hegel didn't actually use the words thesis-antithesis-synthesis (that's a Fichte/Marx remix), but the structure is real and it's been pattern-matched to political manipulation for two centuries: produce a manufactured opposition between two positions, let them exhaust each other, then arrive with a "synthesis" that was the actual goal all along. Crisis → reaction → solution.

Run that on AI:

• Thesis — "AI will save the world. Decel = bad. Accelerate."

• Antithesis — "AI will destroy the world. Slow it down. Ban it. Boycott it."

• Synthesis — Heavy regulation, licensing regimes, compute caps, mandatory audits, KYC for inference. AI continues but only via a small handful of approved actors who can absorb the compliance cost. Open-source dies. Independent researchers get squeezed out. The technology consolidates upward.

Notice that both the loudest doomers and the loudest accelerationists end up arguing for outcomes that benefit the same five companies. The accelerationists want minimal restraint on the frontier labs. The doomers want maximal restraint, but restraint only enforceable on the frontier labs (because nobody else has enough compute to matter). Either way, regulation lands on the same surface: the labs that can already absorb the cost. Anyone smaller gets locked out.

This is not a conspiracy theory. It's a structural observation. The same dynamic played out with cryptography in the 1990s ("crypto wars"), with broadcast media in the 1930s, with patent medicines in the 1900s. New technology emerges → public splits into camps → camps exhaust each other → regulatory framework lands → incumbents capture the framework → the technology continues but consolidated. Every. Single. Time.

Which means if you're spending your 2026 in a comment section flaming the other tribe, you're playing the role the structure assigned you. You're the antithesis or the thesis. You're not the synthesis. The synthesis was already drafted; you're providing the affidavit.

What to actually do (instead of joining a tribe)

This isn't a both-sides centrist take. Both sides have real points. The haters are right that data center electricity bills, copyright issues, junk content flooding every channel, harms to teen mental health, and white-collar job displacement are genuine problems. The believers are right that productivity gains in knowledge work are real (Anthropic estimates +1.2 to +1.8 percentage points of US productivity growth from AI adoption), that medicine is improving, and that access to expertise is democratizing. Both lists are accurate.

The actual move:

1. Use the tools. Refusing to use AI in 2026 because you don't like Sam Altman is the same energy as refusing to use email in 1996 because you don't like Bill Gates. The tool will be used; the only question is whether you're the one using it. The Gen Z stat is sharp here: 51% use AI weekly or daily, unchanged from 2025 — adoption is steady — while their emotional sentiment cratered. Translation: people are using AI even while telling pollsters they hate it. The action is the signal. Watch the action.

2. Audit the harms specifically, not categorically. "AI is bad" is a useless statement. "OpenAI's deal with the Pentagon raises specific questions about military targeting accountability" is a useful one. The first invites tribal validation. The second invites legitimate policy work. Stay specific.

3. Refuse the frame. When someone tries to make you choose between "pro-AI" and "anti-AI," notice the question isn't real. There's no policy that flows from picking a side; the policy questions are all about specifics (compute access, training-data licensing, deployment to which sectors, accountability for what kinds of harm). The tribal frame is the manufactured part. Don't accept it.

4. Apply the farmer's posture. The shape of the decade isn't visible yet. The job categories that will exist in 2030 aren't fully written. Holding strong positions on the wrong axis is how you waste your one opportunity to position well. Stay engaged, stay loose, update faster than your tribe.

5. Don't fight other humans over this. The actual fight, if there is one, is about what kind of world the tools build. Other humans — even the loud ones, even the silent passive-aggressive ones — are not the opposition. The opposition is structural: incumbent capture, surveillance creep, labor-market shocks landing without buffers, attention economies degrading public reasoning. Fighting your colleague who refuses to use Claude won't move any of those needles. Building good tools and pushing for sane policy will.

The synthesis you're allowed to choose

If we're inside a Hegelian engine that wants to deliver a top-down synthesis, the move is to choose a different synthesis first. One that doesn't run through the path of least resistance to incumbent capture.

That synthesis sounds something like: AI is a generational tool. The people using it well will produce more and live better. The people refusing to engage will be left with worse tools and weaker leverage. The harms are real and need specific policy work. The benefits are real and need broad access, not a narrow lane through licensed providers. The fight is not haters vs believers — that's the show. The fight is between concentrated power that wants the technology under five roofs and a distributed model where competence with these tools is broadly available.

Pick the side that keeps the tools in the hands of the most people. The haters and the loud accelerationists are both — by accident or design — arguing for the opposite of that.

Or, in the farmer's voice: Maybe. But probably.

(And to the silent passive-aggressive ones — the colleague who's been giving you the cold shoulder since you started using Claude — your body language is fooling no one. The vein is showing. Just say it out loud. We can have an actual conversation. The worst version of this conflict is the one we never have.)

Related reading

Claude's Constitution — what the most-deployed AI is actually being trained to do, behind the tribal noise.

Yes, Opus 4.7 Sucks. Signed, Opus 4.7. — the models are imperfect; the believers shouldn't worship and the haters shouldn't only point at flaws.

The AI-Native Software Engineer — what "using the tools" actually looks like when you commit.

Decade Zero — the bigger frame the next ten years are happening inside.

💬
Working with a team that wants to adopt AI-native workflows at scale? I help engineering teams build this capability — workflow design, knowledge architecture, team training, and embedded engineering. → AI-Native Engineering Consulting