A Grain of Salt

Singularities: When the Rules Break Down

Teddy Aryono

The word “singularity” sounds exotic and a bit ominous - and it should. Whether we’re talking about the heart of a black hole or the hypothetical emergence of superintelligent AI, singularities represent the same fundamental concept: points where our current understanding breaks down and we can’t predict what happens next.

Astronomical Singularities: Where Physics Fails

In astronomy and physics, a singularity is a point where the normal rules simply stop working. Our equations produce infinities, and our models can’t describe what’s actually happening.

Black Hole Singularities

The most famous astronomical singularities lurk at the centers of black holes. When a massive star collapses under its own gravity, general relativity tells us that all its mass gets compressed into an infinitely small point of infinite density. At this singularity, spacetime curvature becomes infinite and the equations stop giving meaningful answers.

There are actually two flavors:

Schwarzschild singularity: A point singularity in non-rotating black holes - the simplest case.

Ring singularity: Theorized to exist in rotating (Kerr) black holes, where the singularity forms a ring shape rather than a point due to the effects of rotation.
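
To see where “our equations produce infinities” shows up in the math, here is the standard Schwarzschild solution for the non-rotating case. The curvature invariant on the right blows up as r → 0, which is the real singularity; the apparent trouble at the event horizon r = 2GM/c² is only a coordinate artifact.

$$
ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\, dt^2
      + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2
      + r^2\, d\Omega^2,
\qquad
K = R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} = \frac{48\, G^2 M^2}{c^4\, r^6} \to \infty \ \text{as } r \to 0.
$$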

The Big Bang Singularity

Our universe itself is thought to have emerged from a singularity approximately 13.8 billion years ago. According to the Big Bang model, all the matter and energy in the observable universe was compressed into an infinitesimally small point of infinite density and temperature before rapidly expanding.
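
A minimal sketch of where that “infinite density” comes from: in the standard Friedmann equation, the density of the early, radiation-dominated universe scales as the inverse fourth power of the scale factor, so running the expansion backwards to a → 0 drives it to infinity.

$$
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2},
\qquad
\rho_{\mathrm{rad}} \propto a^{-4},\quad a(t) \propto t^{1/2}
\;\Rightarrow\;
\rho \propto t^{-2} \to \infty \ \text{as } t \to 0^{+}.
$$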

The Key Insight: Singularities Aren’t “Real”

Here’s the crucial point that most physicists agree on: these singularities probably aren’t actually infinite. Instead, they’re likely indicators that our theory - general relativity - is incomplete.

When you reach these extreme conditions, quantum effects should become critically important, but general relativity doesn’t account for quantum mechanics. A complete theory of quantum gravity (which we don’t yet have) would presumably resolve these singularities into something else - perhaps very small but finite regions where entirely new physics takes over.
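
For a sense of scale, the regime where quantum gravity is expected to take over is usually pegged to the Planck length, built from nothing but fundamental constants:

$$
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m}.
$$

If the point singularity is replaced by something finite, it would plausibly be structure at roughly this scale.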

Think of singularities as “here be dragons” markers on our map of physics - points where we know our current understanding fails and we need better theories.

The Technological Singularity: When AI Exceeds Us

The technological singularity is a completely different beast, but it shares that crucial property: it represents a point beyond which we can’t reliably predict what happens.

The Core Mechanism

The basic logic of the technological singularity goes like this:

  1. Humans create AGI (Artificial General Intelligence) that matches human-level intelligence
  2. That AI improves itself - it can write better code, design better algorithms, optimize its own architecture
  3. Improved AI improves itself faster - each iteration is smarter and can make bigger improvements
  4. Exponential acceleration - this creates a runaway feedback loop where progress compounds rapidly
  5. Superintelligence emerges - potentially within days or hours, AI vastly exceeds human cognitive abilities in all domains
  6. Unpredictable transformation - the world changes so fundamentally and rapidly that we literally cannot predict what happens next

Hence “singularity” - like the physics version, it’s a point beyond which our ability to model or predict breaks down.
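
A toy model makes that literal (purely illustrative, with made-up constants): if each round of improvement speeds up the next one strongly enough - say, the growth rate scales with the square of current capability - the curve doesn’t just rise, it reaches infinity in finite time.

```python
# A toy model only: made-up units and constants, not a forecast.
# Assume the rate of improvement grows with the square of current capability,
# dI/dt = k * I**2. The closed-form solution I(t) = I0 / (1 - k*I0*t)
# diverges at the finite time t = 1/(k*I0): a literal mathematical singularity.

def capability(t: float, i0: float = 1.0, k: float = 0.1) -> float:
    """Closed-form solution of dI/dt = k * I**2 with I(0) = i0."""
    return i0 / (1.0 - k * i0 * t)

blowup_time = 1.0 / (0.1 * 1.0)   # = 10.0 in these toy units
for t in (0.0, 5.0, 9.0, 9.9, 9.99):
    print(f"t = {t:5.2f}   capability = {capability(t):8.1f}")
print(f"the model diverges at t = {blowup_time}")
```

The point is the shape, not the numbers: nearly flat for a long stretch, then effectively vertical. Whether real systems ever enter such a regime is exactly what the arguments below dispute.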

The Historical Context

Vernor Vinge, a mathematician and science fiction author, popularized the term in his 1993 essay “The Coming Technological Singularity,” arguing that once we create greater-than-human intelligence, “the human era will be ended.”

Ray Kurzweil expanded on this extensively in “The Singularity Is Near” (2005), predicting it would happen around 2045. He based this prediction on observing exponential trends in computing power and drawing parallels to Moore’s Law.

The Case For: Why It Might Happen

The optimistic (or pessimistic, depending on your perspective) case rests on several observations:

Computing power keeps growing exponentially: We’ve seen consistent doubling patterns for decades, though there are questions about how long this can continue.
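
To put that compounding in numbers (the two-year doubling time below is an assumption in the spirit of Moore’s Law, not a measured figure):

```python
# Compounding under an assumed fixed doubling time of 2 years.
doubling_time_years = 2
for years in (10, 20, 40):
    factor = 2 ** (years / doubling_time_years)
    print(f"{years} years -> roughly {factor:,.0f}x")
# 10 years -> roughly 32x, 20 years -> 1,024x, 40 years -> 1,048,576x
```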

AI capabilities are accelerating: The jump from GPT-3 to GPT-4 to current models shows rapid improvement in a short timeframe.

Intelligence is substrate-independent: There’s no fundamental reason why silicon can’t eventually match or exceed biological neurons in creating intelligence.

AI has inherent advantages: Digital systems run faster, can be copied instantly, don’t need sleep, can be directly modified without waiting for evolutionary timescales.

The Case Against: Why It Might Not Happen (Or Look Different)

Diminishing returns: Intelligence might not keep compounding the way the feedback-loop story assumes. Going from IQ 100 to 200 might be much easier than going from 200 to 300, and there could be hard cognitive limits we don’t yet understand.

Embodiment matters: General intelligence might require physical interaction with the world in ways that pure computation can’t replicate. Maybe you can’t think without a body.

No true recursive improvement: Current AI systems can’t really improve their own fundamental architecture in meaningful ways. LLMs can write code, but they’re not redesigning their own neural architectures or inventing entirely new training paradigms.

Intelligence ≠ godlike power: Even a superintelligent AI still needs to work within physical constraints. It can’t violate the laws of physics, can’t instantly build molecular assemblers, can’t bypass the need for actual resources and time.

Continuous rather than discontinuous change: Maybe we get steady, predictable improvements rather than a sudden explosion. This is arguably what we’re seeing now - impressive progress, but incremental.

The Real-World Connection: Agentic Coding

If you’re working with modern AI coding assistants, you’re actually seeing early versions of the underlying mechanism that singularity theorists talk about - AI systems that can write and improve code.

But there’s a massive gap between an assistant that writes and improves code under human direction and a system that redesigns its own architecture and training process on its own.

The trillion-dollar question is whether that gap is bridgeable through iteration, or whether there are fundamental limitations that prevent true recursive self-improvement.
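
For concreteness, here is roughly what the current side of that gap looks like. Every name below is a hypothetical stand-in, not the API of any real coding assistant.

```python
# A deliberately simple sketch of the agentic-coding pattern, with stubs
# standing in for a real model and test suite (all names are placeholders).
import random

def generate_patch(codebase: str) -> str:
    """Stub for 'the assistant proposes a change to someone else's code'."""
    return codebase + f"\n# tweak {random.randint(0, 999)}"

def tests_pass(codebase: str) -> bool:
    """Stub for 'human-written tests decide whether the change is kept'."""
    return random.random() < 0.5

def agentic_loop(codebase: str, iterations: int = 5) -> str:
    """Bounded iteration on an external codebase, not on the model itself."""
    for _ in range(iterations):
        candidate = generate_patch(codebase)
        if tests_pass(candidate):
            codebase = candidate
        # Note what is absent: nothing in this loop rewrites the model's own
        # weights, training procedure, or architecture.
    return codebase

print(agentic_loop("print('hello')"))
```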

The Current Debate

The AI safety and research community is genuinely split on this:

“Singularity is near” camp: Researchers like Eliezer Yudkowsky argue we’re dangerously close and woefully unprepared. They focus on alignment - ensuring superintelligent AI shares human values before it emerges, because we won’t get a second chance.

“Steady progress” camp: Many practicing AI researchers think we’ll see continued impressive advances but nothing resembling a sudden intelligence explosion. They point to how current systems still struggle with basic reasoning, planning, and maintaining context.

“It’s physically impossible” camp: Some argue that consciousness, general intelligence, or recursive self-improvement hit fundamental barriers we don’t yet understand - similar to how black hole singularities probably aren’t actually infinite.

The Philosophical Wrinkle

Here’s what makes the technological singularity genuinely analogous to physics singularities: we’re trying to predict what happens when something smarter than us is in control.

By definition, we can’t fully model what superintelligence would do - just like ants can’t predict human behavior by extrapolating from ant behavior. That’s the actual “singularity” part - not just fast change, but a prediction horizon beyond which our models fundamentally fail.

The Common Thread

Whether we’re talking about the heart of a black hole or the potential emergence of superintelligent AI, singularities share a crucial characteristic: they mark the boundaries of our understanding.

In physics, singularities tell us where general relativity breaks down and we need quantum gravity.

In technology, the singularity concept tells us where our ability to predict and control might break down, and we need… well, we’re not sure what we need. Better alignment research? Fundamental theoretical breakthroughs in understanding intelligence? International coordination on AI development?

That uncertainty is precisely the point.

Singularities - whether in spacetime or in the trajectory of technological progress - are humbling reminders that our best theories and predictions have limits. They’re the edges of the map, where the territory becomes genuinely unknown.

The question isn’t whether singularities are real in some absolute sense. The question is whether we can develop better theories - better physics for black holes, better frameworks for thinking about transformative AI - before we run into them.


What’s your take? Do you think the technological singularity is plausible, or are there fundamental limits that will keep AI progress continuous rather than explosive? The comments are open.

#thoughts
