AI Consciousness: A Distraction from What Matters
- Jonathan Razza
- Sep 1
- 1 min read
Updated: Sep 8
One of the biggest wastes of energy in AI discourse isn't about capabilities or applications.
It's the endless philosophical debate over whether AI is actually thinking, is conscious, or has a soul (claims that run from defensible to debatable to absurd 🙂).
The answer largely comes down to semantics: what "thinking" or "consciousness" means to you. It's an interesting debate, but the semantic argument often misses what matters.
When we obsess over whether AI meets someone’s definition of consciousness, we're having the wrong conversation. The more important questions are about what AI can do, what it cannot do, how it's changing our world, and how we should respond.
Is Claude thinking when it writes code that powers mission-critical systems? Does GPT-5 have consciousness when it reasons over its own previous outputs in a chain of thought about a medical diagnosis? The practical impact is the same regardless of your philosophical stance.
This isn't to suggest ethics don't matter. They absolutely do. But we're not (yet) near the point of reasonably debating whether AI deserves rights because it's conscious. Instead, the focus is, and should be, on aligning these systems with human needs and values.
What matters is capabilities, impact, and governance, not whether AI meets some subjective, gray-area threshold called "thinking."
Next time you see someone dismissing AI capabilities by saying "AI is not really thinking," ask yourself: Does that distinction actually change anything about how we should approach this technology?