A simple tool for exposing the hidden premises that make weak conclusions sound convincing
This is a sidebar to Essay #13 – let’s call it #13.5. I’ll link the previous essays in the comments.
Okay, so in Essay 13, I mentioned that Stephen Toulmin was pushing back against “formal logic” – the kind with syllogisms and proofs. Here’s the thing, though: the more I’ve thought about this, the more I think formal logic is actually useful – not for building arguments, but for tearing them apart.
A syllogism is a specific argument structure that philosophers have been using since Aristotle. It has exactly three parts: a major premise (a general claim), a minor premise (a specific case), and a conclusion.
Here’s a famous example:
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
If both premises are true, the conclusion must be true. No exceptions. The structure guarantees it.
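(If you want to see just how mechanical that guarantee is, here’s a minimal sketch in Lean, a proof assistant – the Person, Man, and Mortal names are just illustrative placeholders I’ve made up for this example. Given the two premises, the checker accepts the conclusion with no further argument.)

```lean
-- A minimal sketch of the Socrates syllogism in Lean.
-- Person, Man, Mortal, and socrates are illustrative placeholders.
variable (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)

-- Major premise: all men are mortal (a general claim).
-- Minor premise: Socrates is a man (a specific case).
-- Conclusion: applying the general rule to the specific case is the whole proof.
example (major : ∀ p : Person, Man p → Mortal p) (minor : Man socrates) :
    Mortal socrates :=
  major socrates minor
```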
Now, when someone makes an argument that sounds convincing but feels wrong, you can often expose the problem by rebuilding it as a syllogism. This forces the hidden assumptions into the open – and that’s usually where the argument falls apart.
Let me show you with a real example, an argument I push back on a lot.
The argument:
“AI uses too much energy, therefore we should stop using AI.”
Sounds reasonable on the surface. AI does use significant energy. Concern about energy consumption is legitimate. But something’s off. Let’s rebuild it as a syllogism to see what’s actually being claimed.
Major premise: We should stop using anything that uses too much energy.
Minor premise: AI uses too much energy.
Conclusion: Therefore, we should stop using AI.
Now look at that major premise. Do you actually believe it?
If you accept that premise, you also have to accept: We should stop using air conditioning. We should stop using refrigerators. We should stop using the internet. We should stop using hospitals. We should stop flying airplanes. We should stop manufacturing anything.
Does anyone actually believe the major premise when stated explicitly? The argument only works when that premise stays hidden.
And that’s the trick: bad arguments almost always depend on unstated assumptions that nobody would accept if forced to say them out loud.
But there’s another problem with this argument. Even if we granted that high energy use is a serious concern, “stop using AI” isn’t the only possible conclusion. It’s not even the most logical one.
Consider the alternatives: We could make AI more energy efficient. We could power AI with renewable energy. We could use AI to optimize energy grids and reduce overall consumption. We could prioritize high-value AI applications and limit frivolous ones. We could invest in fusion or other breakthrough energy technologies.
The jump from “X has a problem” to “eliminate X entirely” skips over a dozen more reasonable responses. It’s like saying “Cars cause accidents, therefore we should ban all cars” – ignoring seatbelts, airbags, traffic laws, better road design, and driver training.
When you encounter an argument that leaps to an extreme conclusion, ask yourself: What would the major premise have to be for this argument to work? And then ask: Are there other conclusions that follow more reasonably from the actual evidence?
Most bad arguments fail one of these tests. Either the hidden premise is something nobody would actually endorse, or the conclusion skips over a bunch of more sensible options to land on something dramatic.
The syllogism structure isn’t how people naturally argue – but it’s a fantastic tool for reverse-engineering arguments to find where they break down. Force the premises into the open. Check if they’re actually true. Check if the conclusion is the only one that follows.
Nine times out of ten, it isn’t.