Why AI Panic Misses the Real Threats

Humans have always been better at destruction than robots

Every generation picks its apocalypse. Right now, the smart money says it’s AI. Machines will steal our jobs, collapse the economy, render humans obsolete. The solution everyone’s pitching is Universal Basic Income, because apparently that’s the only thing standing between us and complete societal breakdown.

There’s another way to think about this.

Look at the track record of boogeyman predictions. Automation was going to create mass unemployment in the 1960s. Economists wrote serious papers about it. Spreadsheets were going to eliminate millions of jobs. ATMs would make bank tellers extinct. Outsourcing would hollow out the entire American economy. Every single time, the predictions were wrong. Not just a little wrong, but fundamentally wrong about how economies adapt.

And every single time, society didn't collapse. Jobs shifted. People learned new skills. Industries nobody predicted sprang up. The economy kept functioning.

Meanwhile, the actual boogeymen keep racking up body counts.

Governments murdered over 200 million people in the 20th century. Not soldiers in war. Civilians. Their own citizens. The Soviet Union killed around 62 million people. Mao's China killed 35 million. Hitler's Germany murdered 21 million. Cambodia's Khmer Rouge wiped out over 2 million people in four years; the odds of surviving those four years as a Cambodian were only 2.2 to 1, meaning nearly one in three did not.

None of those regimes needed artificial intelligence. Stalin managed with prison camps and forced starvation. Hitler used concentration camps and gas chambers. Pol Pot preferred machetes. The Rwandan genocide happened with radios spreading hate and people using farm tools to kill their neighbors. Authoritarianism doesn’t need advanced technology to destroy millions of lives. It just needs power and the willingness to use it.

Climate change? Also not robots. That's humans burning fossil fuels because it's profitable. Fossil fuel air pollution is responsible for roughly 1 in 5 deaths worldwide right now, about 8.7 million per year. In the United States alone, roughly 350,000 premature deaths annually are attributed to it. And researchers estimate that burning a trillion tons of fossil carbon will cause 2 degrees of warming, enough to kill roughly a billion people over the next century. The arithmetic behind that estimate is the 1,000-ton rule: about one future premature death for every thousand tons of carbon burned, so a trillion tons works out to a billion deaths.

The planet isn't being destroyed by algorithms. It's being destroyed by actual decisions made by actual corporations and governments that have concluded quarterly earnings matter more than long-term survival.

Nuclear weapons exist because humans built them. The Cuban Missile Crisis brought us closer to ending civilization than anything AI has ever done. That crisis was resolved by humans choosing not to launch, not by some safety protocol programmed into a machine.

So why is everyone so convinced AI is the threat that’s going to end everything?

Part of it is novelty. AI feels new and unpredictable. It’s easier to fear something we don’t fully understand than to confront the threats we know are real but feel too big to solve. Authoritarianism is scary, but it’s a known problem. Climate change is terrifying, but addressing it requires confronting powerful industries and changing how society functions. AI is abstract enough to project all our anxieties onto.

Part of it is that AI job-displacement studies make for great headlines. Goldman Sachs estimates that about 2.5 percent of US employment would be at risk if current AI capabilities were adopted across the economy. The St. Louis Fed found unemployment rising in AI-exposed occupations, particularly among young tech workers. But actual job losses so far are minimal. Most research shows, at most, small changes in hiring patterns.

Compare that to the certainty of climate deaths. Between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrheal disease, and heat stress. Those aren't hypothetical deaths contingent on AI getting really advanced. They're deaths that will happen because of emissions we've already put into the atmosphere.

Or look at democratic backsliding happening right now. Authoritarian governments are gaining power across multiple continents. Press freedom is declining. Political violence is increasing. The mechanisms that historically led to mass atrocities are being rebuilt in real time. But we’re supposed to worry more about ChatGPT writing code than about actual humans consolidating power and dismantling democratic institutions.

Could AI eventually become dangerous? Sure. Anything’s possible. A supervolcano could erupt. An asteroid could hit Earth. A pandemic worse than COVID could emerge. We live in a universe where lots of things can go catastrophically wrong.

But focusing on the AI apocalypse story over the threats with actual body counts right now is a choice, not the only logical response to uncertainty. It's one particular narrative that lets us avoid confronting harder truths.

The harder truth is that humans are really good at killing each other and destroying the planet. We’ve been doing both for thousands of years with increasing efficiency. We didn’t need machine learning for the Holocaust. We didn’t need neural networks for Stalin’s purges. We don’t need AGI to pump carbon into the atmosphere until the planet becomes uninhabitable.

When people panic about AI, they’re often really panicking about loss of control and economic instability. Those are legitimate concerns. Job displacement is hard and painful, especially for people without resources to retrain or relocate. The economy does need to adapt as technology changes.

But pretending Universal Basic Income is the only solution, or that AI is uniquely dangerous compared to threats that have already killed hundreds of millions, is missing the forest for the trees.

The actual threats to civilization have names and addresses. They run oil companies. They lead authoritarian governments. They make political decisions that prioritize short-term power over long-term survival. They’re not hypothetical. They’re not future problems we might face if technology advances too quickly.

They’re here now. They’ve always been here. And focusing all our anxiety on AI gives them cover to keep doing what they’ve always done.

If the threats that are genuinely devastating human civilization ever inspire the same fear that AI does in tech circles, maybe we'll start taking them seriously.

SOURCES:

https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce

https://www.stlouisfed.org/on-the-economy/2025/aug/is-ai-contributing-unemployment-evidence-occupational-variation

https://eig.org/ai-and-jobs-the-final-word/

https://www.hawaii.edu/powerkills/DBG.CHAP1.HTM

https://reason.com/volokh/2022/11/09/data-on-mass-murder-by-government-in-the-20th-century/

https://en.wikipedia.org/wiki/Democide

https://hsph.harvard.edu/climate-health-c-change/news/fossil-fuel-air-pollution-responsible-for-1-in-5-deaths-worldwide/

https://www.nature.com/articles/s41467-021-24487-w

https://news.westernu.ca/2023/08/climate-change-human-deaths/

https://www.un.org/en/climatechange/science/climate-issues/health