Panic at the Techno
Or: Humans are living proof that you can teach an old brain new tricks.
I’ve written before about the conundrum of being an anarcho-syndicalist at heart and also enough of a realist to believe that a strong, democratic state is the only thing—short of outright revolution—capable of restraining oligarchic capitalism (and that abandoning the state now only hands power to the rich). I’m also a bit of an odd duck as a technologist—slow to adopt new tools but relentless in putting the good ones to use once the penny drops. So when a friend recently brought fellow Substack leftie Caitlin Johnstone’s writing to my attention by way of this note, my reaction was equal parts support and skepticism.
The thing is, we humans are pack animals, and like other pack animals our social coherence is as much a product of our evolution as our biology is. We use our big brains to invent technologies to solve problems, from finding food to building shelter to keeping our shit together as a family, clan, tribe, or state. And when it comes to social coherence, propaganda and manufactured consent have been around for far longer than we’ve had those terms as shorthand.
We’re also very prone to doing what we’ve always done. No doubt there’s evolutionary advantage to this: the humans who don’t eat the purple berries are the ones who survive long enough to raise kids with a purple berry taboo. Our big brains and our love of the tried and true often come into conflict, with the result that no new technology arrives without fears of misuse and abuse.
I asked ChatGPT to rewrite Caitlin’s note as a 16th-century person with concerns about the newfangled printing press...
I hold it meet that the magistrates impose strict governance upon the printing-press. And this I say as one long known for defending the liberty of speech and the free disputation of learned men.
I would have it made unlawful for any man to print false accounts or forged letters and put them abroad as though they were true.
I would have the printers and stationers be sorely punished if they suffer their presses to be employed in the making of counterfeit proclamations, feigned histories, or other deceits that may be taken for true report.
I would have every printed tract that is not set from an authorised manuscript bear plainly upon its face a great and visible mark declaring it to be newly made and not a faithful copy—not some small device in the margin that may be easily cut away.
I would that printers be sharply chastised if their books entice the simple into mischief, or if they set abroad fancies persuading men that spirits speak through the page, or if they contrive their works so as to bewitch readers into unwholesome affections and follies.
I would have it unlawful for any merchant or printer to employ clerks or scriveners who answer in the guise of living men when they be but contrived artifices meant to beguile the buyer.
A man’s liberty to extend his hand endeth where it striketh his neighbour’s face. These devices of the press threaten the very bond of our commonwealth. They disturb men’s wits, confound truth with falsehood, and cast the multitude into a maze.
Printed deceits ought to be punished as fraud is punished; for fraud is no lawful speech, being hurtful to men and contrary to a well-ordered realm. Even so should deceitful prints be restrained for the self-same cause.
It is not meet that rich men, grown proud by their presses, should prosper by sowing confusion, disorder, and distemper of mind. The body politic hath right to guard itself against harms wrought by unruly persons; and the king’s law is a gentler instrument for this than the rope. Let the governors therefore stay these assaults upon our common understanding.
I care not what sharp remedies must be used to bridle this mischief. If men cannot plainly discern the true from the false, all goeth to ruin. None may know how to judge, how to vote in the commons, how to order his life, if he be lost in a cloud of lies. Bring these new engines under obedience by whatsoever means be needful; for in such a matter, the defence of the realm itself is at stake.
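If you want to run the experiment yourself, here is a minimal sketch using the OpenAI Python client. Treat it as illustrative only: the model name, prompt wording, and file name are stand-ins, not a transcript of what I actually fed ChatGPT.

```python
# Illustrative sketch: rewrite a modern note in the voice of an earlier media panic.
# Assumes the openai package (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

ERAS = [
    "a 16th-century pamphleteer alarmed by the newfangled printing press",
    "a mid-19th-century editorialist alarmed by the electric telegraph",
    "a mid-20th-century critic alarmed by television",
    "a mid-1990s commentator alarmed by the World Wide Web",
]

def rewrite_for_era(note_text: str, era: str) -> str:
    """Ask the model to restate the note's concerns in the voice of an earlier era."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                f"Rewrite the following note as {era}, keeping its concerns about "
                f"deception and its calls for state intervention intact:\n\n{note_text}"
            ),
        }],
    )
    return response.choices[0].message.content

# Example: print(rewrite_for_era(open("note.txt").read(), ERAS[1]))
```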
I repeated this experiment for the advent of the telegraph[1] in the mid-19th century, television[2] in the mid-20th century, and the web[3] in the mid-1990s. Then I did some light fact-checking. It turns out that both the concerns and the calls for state intervention ChatGPT came up with are reasonably accurate. So I dug around a bit on how our species managed to survive these past upheavals.
Spoiler alert: social adaptation had a greater effect than state control in every case.
With the printing press, Europe didn’t clamp down in any consistent or lasting way. Censorship was always partial, local, and porous, and the press spread far faster than any crown or church could throttle it. That’s not to say the fears were unfounded: false tracts, forged documents, and destabilizing ideas all materialized. But so did mass literacy, scientific exchange, democratic agitation, and the Reformation itself. Society basically coped by drowning in paper for a century, then building new norms—title pages, printers’ marks, peer review, newspapers, and eventually a whole culture of fact-checking and reputation. The gap between what authorities wanted and what the press enabled was filled not by regulation but by a messy, improvised immune system of literacy, skepticism, and institutional trust.
The big fear of the telegraph was that lies would outrun truth. That absolutely happened. Hoax war bulletins, bogus market prices, and manipulated news all appeared within years. But sweeping regulation never materialized. Governments tried a few things (licensing operators, prosecuting obvious fraud), yet nobody managed to police the wires the way moralists hoped.
Society coped the only way it could: newspapers built verification desks, markets added time delays and cross-checks, and everyone learned that a telegraph dispatch was fast but not always reliable. The real “regulation” ended up being reputational—people trusted particular newspapers or agencies (Reuters, AP, Havas) rather than the medium itself. In other words, the fears came true, the regulations didn’t, and the world adapted by inventing new intermediaries to filter the chaos.
Television alarmists got it half right. Some of their fears absolutely materialized—staged news footage, manipulative advertising, emotional engineering, and a public that often struggled to tell entertainment from fact. But they wildly overestimated both the danger and the likelihood of sweeping regulation. The government never imposed the kind of strict controls some wanted. At most we got things like the Fairness Doctrine, broadcast standards, and disclaimers—a far cry from ironclad rules about authenticity.
Society coped in a very recognizable way: people learned to treat TV with a bit of suspicion, journalism developed its own professional norms, and whole watchdog industries sprang up to critique broadcasters. Viewers adapted by choosing trusted networks, regulators stepped in only lightly, and the culture adjusted by creating expectations—if you see something outrageous on TV, check the news, ask around, don’t take it at face value. The panic faded, the problems persisted, and life moved on.
The early-Internet worriers were basically batting a thousand on the predictions. Everything they feared—hoaxes, forged images, bots pretending to be people, psychological manipulation, viral misinformation—showed up almost immediately and just kept getting worse. On predictive power alone, they look like prophets.
In America at least, we were still high on deregulation and wishful thinking about trickle-down economics. Outside of narrow areas (spam laws, fraud statutes, COPPA), state intervention was utterly lacking. Tech companies largely wrote their own rules—and those rules were mostly about growth, not safety or truthfulness.
As for how society coped, the pattern repeated: we built a patchwork immune system. Fact-checkers, platform policies, browser warnings, media literacy campaigns, and a general cultural shift toward skepticism. It wasn’t elegant, and it didn’t fix the underlying problems, but people learned—haltingly, unevenly—to treat online information the way earlier generations learned to treat telegraph dispatches or TV broadcasts: potentially useful, but never inherently trustworthy.
What’s striking is how every new medium arrives with a built-in myth about the era that came just before it. People look at the disruption in front of them and immediately romanticize whatever communication system they grew up with as a golden age of clarity, honesty, and shared reality.
But none of those earlier eras were actually clean.
Before the printing press, “truth” meant whatever the church or the local lord said—censorship wasn’t a danger, it was the system. Before the telegraph, news traveled by rumor, partisan papers, and outright fabrications. Before radio, newspapers were notoriously sensational, often publishing hoaxes as front-page stories. Before television, political machines, PR men, and newspaper barons shaped public perception with almost no oversight. And before the Internet, we had junk tabloid TV, chain letters, talk-radio misinformation, and supermarket conspiracy rags.
Every era had misinformation—just slower, smaller, or controlled by different gatekeepers.
So the idealization isn’t really about truth; it’s nostalgia for a time when deception felt manageable. When lies moved at human speed, when the liars were identifiable, when the damage stayed local, when everyone more or less saw the same handful of sources. Each new medium breaks that equilibrium, and people interpret the disruption as a moral collapse rather than a change in velocity and distribution.
The “truthful past” is less a historical reality than a psychological refuge—a way of coping with how overwhelming the present feels. Every new medium blows a hole in public trust at first. The press shattered the church’s monopoly on truth. The telegraph made lies travel faster than verification. Radio and TV centralized persuasion in ways nobody had seen. The Internet blew the doors clean off.
But after each shock, people eventually rebuild a shared sense of what’s credible. Not perfectly—never perfectly—but enough to function. Institutions evolve, norms form, and each generation grows up with a savvier instinct for detecting nonsense in whatever medium they were raised on.
So the quality of public dialogue doesn’t just march downward. It spikes, crashes, restabilizes, crashes again. The trust problem isn’t new; what’s new is the speed and scale. We’re in another messy transition, not the end of truth. The pattern so far is that humans adapt just slowly enough to worry and just quickly enough to muddle through anyway.
And yet, this reflexive aversion to technological advances seems like an evolutionary advantage. Without it, people would be more easily led—and misled.
When a new medium arrives, it scrambles the rules of persuasion and perception. In those early moments, the people who are most skeptical, most anxious, most resistant—the ones who yell “slow down, this is dangerous”—are often the only thing preventing mass manipulation from running unchecked. Their instinctive aversion acts like a circuit-breaker. It forces the group to pause, observe, test, and adapt rather than plunging headfirst into a new environment where old heuristics no longer work.
In evolutionary terms, it’s a protective reflex: distrust novelty until you understand the threat profile. If your ancestors adopted every shiny new tool, idea, or signal without hesitation, they wouldn’t have survived long enough to reproduce. Suspicion keeps groups from being overtaken by bad actors who exploit new channels faster than social norms can form.
And here’s the twist: the reflex isn’t about rejecting technology forever—it’s about slowing things down until stable norms, institutions, and habits develop around it. Eventually people figure out what’s trustworthy, what’s manipulative, and how to tell the difference. But that early resistance buys time.
If a population lacked this aversion entirely, they’d be extraordinarily easy to mislead during technological transitions. New media always give a temporary advantage to liars, propagandists, opportunists, and the powerful. Without social antibodies—fear, skepticism, moral panic—there’s nothing to counterbalance that advantage.
So the panic isn’t just emotional; it serves a regulatory function long before actual regulators show up. It’s messy, it’s often overblown, but it slows the diffusion just enough for the species to catch its breath and develop new defenses.
One could argue that advances in communications technology are a welcome evolutionary pressure, waking people up to the constant, unnoticed manipulation wrought by the technology of the day. But it cuts both ways.
On one hand, every new communications technology does act like a shock to the system. It exposes how manipulated the previous “shared reality” actually was. People suddenly see the seams—who controlled the old channels, who shaped the narratives, whose voices were excluded. The disruption creates a moment of heightened skepticism, almost like an evolutionary jolt that forces the population to recalibrate its bullshit detectors.
But on the other hand, this implies a kind of reversion to a “managed consensus” once the shock wears off. Human beings seem wired for a shared story—even if it’s imperfect, top-down, or partially manufactured—because a stable society needs some common frame to function. When a new medium arrives, that frame collapses; when norms catch up, a new one forms. Rinse, repeat.
The pattern looks like this: first chaos, then critique, then consolidation.
Each era thinks its consensus is natural and the new chaos is unprecedented. In reality, the consensus is always at least partly constructed, and the chaos is the price of refreshing it. The shared reality doesn’t disappear—it just gets rebuilt with new tools, new gatekeepers, and new blind spots.
So, yes: new technologies wake people up, but the waking is temporary. The drift back toward a normalized, managed consensus seems to be a feature of human sociality, not a failure of it.
If you look at it in a neutral evolutionary frame, there’s no long-term upward trend in human rationality or critical thinking. There’s no long-term decline either. What persists is variation—and shifting environmental pressures that make certain cognitive traits advantageous for a time.
Bullshit detection is very much like foraging. Early humans weren’t universally good at finding food; they were good in wildly different ways depending on their environment. A desert-adapted band had different skills from a forest-adapted one. Neither was “better”—just better matched to local conditions.
Same with misinformation. A medieval villager who trusted his priest was well-adapted to his information environment; he didn’t need modern skepticism because the social costs of distrusting local authority were high, and the upside was low. A 19th-century telegraph reader needed a different calibration. So did a TV viewer in 1950. So does a TikTok user today.
Critical thinking is not a rising line; it’s a situational trait under constant, shifting selection pressure. Every time information technology changes, it effectively rearranges the informational “ecosystem” in which human cognition has to forage. The people who thrive are those whose cognitive style happens to match the new terrain—skeptics in some periods, believers in others, social readers in others, pattern-detectors in still others.
Humans aren’t getting smarter or dumber; we’re being thrown into new informational habitats faster than evolution can update our base wiring. The “panic” or “moral shock” that greets each new medium is actually a recognition that our inherited instincts aren’t optimized for the new environment yet.
Some people adapt fast. Most muddle through. A few get eaten by the lions of the new landscape. Then, over time, culture—not biology—updates the heuristics. And by the time we’ve adapted, a new technology arrives and resets the ecology again.
The biological part is the capacity: memory, pattern recognition, social inference, cognitive flexibility. The specific skills we learn with that machinery are cultural, and so are the tools we create to adapt. Evolution shapes the capacity; culture shapes the content.
One could assert that humans have used those capacities at an accelerating pace over the course of human history, but not that the capacities themselves have biologically advanced in any meaningful way. Our biological hardware for learning—memory, abstraction, social reasoning—has been largely stable for tens of thousands of years. What has advanced is the cultural complexity built on top of that hardware, and the tools we’ve invented to extend it.
Our biology hasn’t improved, but it hasn’t needed to. The fixed cognitive toolkit we inherited—pattern detection, cooperation, storytelling, tool-use—has been flexible enough to handle every escalating problem we’ve created for ourselves. We keep inventing tools, those tools cause new problems, and then we invent cultural or technological fixes faster than biological evolution could ever respond.
So far, our biology has been “good enough” to keep the cycle going. The village got bigger, then global, then digital, and the same ancient cognitive machinery is still managing to adapt through culture, not genes.
For me—and I imagine for everyone who’s pushed back on technology through the ages—the question is whether we’ve reached the point where the fixed cognitive hardware that helped us secure food and build shared realities is now stressed by tools powerful enough to disrupt both. Have our solutions begun to outpace our biological limits?
Any discussion of biological evolution is pass/fail—adapt or die. If we haven’t gone extinct, we’ve passed. But passing doesn’t mean thriving, and it definitely doesn’t mean safe. Species can muddle along for long periods in a degraded, stressed, or precarious state and still count as “adapted” in strictly evolutionary terms. Dinosaurs “passed” for 150 million years—right up until they didn’t.
Humans are unique in that cultural evolution shields us from biological failure. We keep outrunning our own problems with new layers of solutions—tools, institutions, norms—before they become species-ending. That can hold for a long time. It has held for a long time. But it also means we can get very far from equilibrium without dying.
Our survival so far proves our cognitive machinery is adequate, not that our situation is good. Evolution doesn’t reward flourishing; it rewards “not dead yet.” And that’s a far more ambiguous verdict.
Rather than resist this latest leap forward in human toolmaking with prohibitions and sanctions, we should wield the power of the state to devise positive programs to use the new tools in ways that benefit all of society—not just the bottom lines of the oligarchs who are bringing them to market. Rather than focusing on the new versions of age-old problems that come with this technology, we should explore the solutions it offers to move more of us from survival mode to flourishing fullness.
[1] I hold it necessary that our government lay firm restrictions upon the electric telegraph. And I say this as one who has ever been a stout defender of free expression and the right of every citizen to speak his mind without hindrance.
I would have it made unlawful for any man to send false dispatches by telegraph and publish them abroad as sober truth.
I would have the telegraph companies brought to heavy account if they permit their wires to be used for the transmission of sham reports—feigned battles, counterfeit market prices, or fabricated proclamations—meant to deceive the public.
I would have every electric message that is not from a known and authorized hand bear clear notice of its uncertain origin, not some small mark in a corner of the sheet that may be easily clipped away.
I would see the companies severely punished if their operators encourage mischief, or if they allow their instruments to be used in a manner that unsettles weak minds—persuading the credulous that spirits speak through the wires, or otherwise contriving to entangle men in unhealthy excitements and imaginations.
I would have it forbidden for any proprietor to employ mechanical devices or contrived automatons to answer inquiries while pretending to be living clerks at the key.
A man’s liberty to stretch forth his arm ends where it meets his neighbour’s face. These new engines threaten the very fabric of our commonwealth. They unseat men’s wits, overthrow the distinction betwixt fact and falsehood, and drive the unwary toward madness.
Telegraphic deceits should be outlawed as fraud is outlawed. Fraud hath never been judged protected speech, for it wounds the innocent and imperils society; and thus should artifices wrought by the telegraph be dealt with for the selfsame reason.
It is intolerable that speculators and wire-barons should grow fat upon confusion, delusion, and the distemper of the public mind. The people have a right to defend themselves against harm from reckless individuals; and the law is a gentler remedy for such offenders than the mob’s justice. Let the authorities interpose to end these assaults upon our ability to perceive the world aright.
I care not what force must be exerted to bring these new contrivances to heel. If citizens cannot discern truth from deception, all governance fails. No man may know how to judge, how to vote, or how to order his life if he cannot tell the true from the false. These instruments must be mastered by whatever means prove needful, for the defence of society itself is now at stake.
[2] I believe our government must exercise firm oversight over the new medium of television. And I say this as someone who has long defended the freedom of speech and open public discourse.
I hold that it ought to be illegal to broadcast fabricated footage over the airwaves and present it to the public as genuine news.
Television networks should be held strictly accountable if they permit programs or newsreels that deliberately mislead viewers, or if they allow staged incidents to be passed off as actual events.
Any material created for dramatic or commercial purposes ought to bear a clear and unmistakable announcement identifying it as such—not merely a fleeting notice tucked away at the bottom of the screen where no one can read it.
Broadcast companies must face heavy penalties if their programs encourage harmful conduct, or if they give viewers the impression that machines “think,” or if they deliberately manipulate the emotions of their audience in a manner that leaves them confused, unstable, or dependent.
No station should be permitted to employ devices or actors to impersonate ordinary citizens without plainly identifying the performance as staged.
A man’s right to speak his mind ends when his words strike at the well-being of his neighbor. These new broadcast instruments threaten to weaken the very foundations of our society. They cloud men’s judgment, blur the line between fact and fiction, and unsettle the public mind.
Broadcast deception ought to be prohibited just as fraud is prohibited. Fraud has never been protected speech, for it injures individuals and undermines the orderly operation of a free republic. Television trickery is no different and should be treated accordingly.
It is intolerable that wealthy broadcasters and corporate sponsors should profit from sowing confusion, hysteria, and mental distress. A democratic people has the right to shield itself from the reckless few who would harm it; and lawful regulation is a far gentler remedy than the fury of an outraged populace. The government must act to halt these assaults upon our shared perception of reality.
I do not much care what measures must be taken to bring this new medium under proper discipline. If citizens are unable to distinguish truth from illusion, the entire civic order begins to crumble. No one can vote wisely, live sensibly, or fulfill his responsibilities if he cannot trust what he sees and hears. Television must be brought to heel by whatever means are necessary. At this point, the matter is nothing less than the self-preservation of the nation.
[3] I believe we need strong federal regulation of the Internet. And I say this as someone who has long supported free speech, open dialogue, and the free exchange of ideas online.
It should be illegal for anyone to post fabricated images or bogus digital “evidence” on a website and present it as authentic.
Internet service providers and site operators ought to face serious consequences if they allow their platforms to be used for distributing hoaxes, forged documents, or falsified photographs that mislead the public.
Websites hosting altered or computer-generated material should be required to display a clear and prominent notice stating that the content is artificial—not some tiny disclaimer buried at the bottom of the page where no one will ever read it.
ISPs and platform owners should be held liable if their online forums encourage harmful behavior, or if software agents mislead users into thinking they are conversing with a real person, or if interactive programs manipulate vulnerable individuals into unhealthy dependence.
No company should be permitted to deploy automated “bots” while pretending they are live human representatives.
A person’s freedom to speak ends where it begins to inflict harm on others. These new digital tools threaten to undermine our society. They distort our sense of what is real, blur the boundary between truth and fiction, and leave countless users confused, anxious, and misled.
Online deception should be treated as fraud, because that is precisely what it is. Fraud has never been protected speech; it injures individuals and erodes the foundation of a functioning democracy. Internet trickery ought to be regulated for the same reasons.
We cannot allow tech entrepreneurs and start-ups to profit from spreading confusion, misinformation, and psychological distress. The public has a right to protect itself from reckless actors, and government intervention is a far more reasonable safeguard than the inevitable backlash that would follow if nothing were done. Authorities must intervene to stop these assaults on our collective ability to understand the world around us.
Frankly, I don’t care how much regulation is required to bring this technology under control. If people lose the ability to distinguish fact from fabrication, everything falls apart. No one can make sound decisions, cast an informed vote, or live a stable life if they cannot trust what they see online. The Internet must be brought to heel by whatever measures are necessary. At this point, it is a matter of basic societal self-defense.


