From Linear to Compounding: A Fundamental Shift in Production #
Traditional software follows a simple equation: you get exactly as much functionality as you code, nothing more. The capability ceiling is firmly locked at the number of engineers multiplied by their working hours—a world of linear growth with no compounding returns, no emergence.
AI fundamentally breaks this constraint. It's not "more powerful software," but rather a completely different mode of intelligence production: as long as data, post-training, and inference continue to scale, capabilities keep strengthening without requiring humans to write code. It's as if software has been equipped with a compounding engine—the ceiling has been lifted, no longer constrained by human labor input.
But this isn't the deepest insight yet.
Even If All Current Paths Hit Walls, the Change Has Already Occurred #
The industry debates AI's specific routes intensely: Has the Transformer architecture exhausted its potential? How far can reinforcement learning go? Where is the data ceiling? These debates lead many non-specialists to conclude that AI is a bubble.
However, technical routes may hit walls, but the paradigm shift is already irreversible.
As Jensen Huang summarized with "accelerated computing," the more fundamental change is this: intelligence has acquired the ability to generate itself, no longer constrained by human labor input. Even if the Transformer becomes obsolete and reinforcement learning reaches its limits, the most essential elements won't change.
Three Eternal Pillars #
Stripping away the uncertainty of "specific routes might fail," we can see three architecture-agnostic pillars:
1. Optimization Power Whether gradient descent, evolutionary search, program induction, or human-machine hybrid search, the essence is the same: finding better strategies in vast solution spaces. As long as "effective search steps per unit cost" keeps improving, intelligence will continue to grow. This capability is not exclusive to any single algorithm; it is universal.
2. Representation Learning (Compression → Generalization) Routes can change: Transformer, RNN, neuro-symbolic hybrids, retrieval augmentation, world models... but the essence remains: "extracting reusable structures from the environment, compressing them, then extrapolating to new scenarios." Better learned transferable structures mean stronger performance on new tasks.
3. Closed-Loop Integration (Perception→Plan→Act→Feedback) Models no longer just read and write text: they call tools, write code, access databases, and manipulate software, forming true closed loops. These loops let systems generate their own training signals, completely escaping the ceiling of static corpora. Since o1's release, AI has essentially stopped merely reproducing human-written text; reinforcement learning and world-model routes no longer depend on compressing existing data.
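To make the pillars concrete, here is a toy sketch in Python: a hill-climbing search (pillar 1) over a small parameter vector, driven entirely by environment feedback (pillar 3), with no human writing the improved strategy. Everything here, the hidden target, the reward function, the step size, is an illustrative assumption, not any real system.

```python
import random

random.seed(0)

def environment_feedback(strategy):
    """Toy 'environment': reward peaks at a hidden target. This stands in
    for real feedback signals (tool results, test outcomes, user signals)."""
    target = [0.3, -0.7, 0.5]
    return -sum((s - t) ** 2 for s, t in zip(strategy, target))

def propose(strategy, step=0.1):
    """Search step (pillar 1): perturb the current strategy."""
    return [s + random.uniform(-step, step) for s in strategy]

# Closed loop (pillar 3): act -> feedback -> update, repeated.
strategy = [0.0, 0.0, 0.0]
score = environment_feedback(strategy)
for _ in range(2000):
    candidate = propose(strategy)
    candidate_score = environment_feedback(candidate)
    if candidate_score > score:  # selection on feedback alone
        strategy, score = candidate, candidate_score

print(round(score, 4))  # approaches 0 as the loop self-improves
```

The loop generates its own training signal by acting and observing; nothing about it depends on a particular model architecture.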
Four Compounding Flywheels #
These three pillars drive four self-reinforcing cycles:
- Capability Flywheel: Stronger models → broader competency → more real use cases → more returns → reinvestment in compute and algorithms
- Data Flywheel: More usage → more interaction data → training signals closer to actual value → further improved practicality
- Tool Flywheel: Tool-using models → output quality leaps → embedded in more workflows → richer tool ecosystem
- Capital Flywheel: Inference (not just R&D) generates direct cash flow → forms an "invest compute = produce value" positive loop
Once these flywheels start spinning, they produce exponential growth—this is true compounding.
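The contrast between the two production modes can be sketched numerically. The figures below (engineer counts, annual returns, reinvestment rate) are arbitrary illustrative assumptions; the point is only the shape of the curves.

```python
# Linear mode: cumulative output is engineers x hours, year after year.
def linear_output(engineers=100, hours_per_year=2000, years=10):
    return [engineers * hours_per_year * (y + 1) for y in range(years)]

# Flywheel mode: a fraction of each year's returns is reinvested into
# capability, so next year's returns grow multiplicatively.
def flywheel_output(capability=1.0, reinvest_rate=0.3, years=10):
    total, history = 0.0, []
    for _ in range(years):
        returns = capability * 200_000   # value produced this year
        capability *= 1 + reinvest_rate  # reinvestment compounds capability
        total += returns
        history.append(total)
    return history

linear = linear_output()
compound = flywheel_output()
print(compound[-1] > linear[-1])  # → True: compounding overtakes linear
```

The linear curve adds the same increment every year; the flywheel curve's increments themselves grow, which is the compounding the four loops describe.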
The Cambrian Analogy: Birth of the DNA Mechanism #
If we use an analogy, the current AI revolution is like the Cambrian Explosion.
The Cambrian revolution wasn't about how perfect the first organisms were—in fact, 90% of those early life forms went extinct. The revolution was in establishing the "genetic mechanism" of DNA:
- Information can be replicated and transmitted
- Allows for mutation and trial-and-error
- Environmental selection for survival of the fittest
- Modules can recombine to innovate
Once this mechanism was in place, even if the first organism failed, a flourishing ecosystem was destined to emerge.
What is AI's "DNA mechanism"?
- DNA = Training paradigms and toolchains (model architectures, optimizers, data pipelines, evaluation and deployment workflows)
- Mutation & Selection = Research exploration and market selection (papers/products are "individuals," users and profits are "fitness")
- Nutrients & Energy = Data and compute/capital (real-world energy consumption and capital allocation become new bottlenecks)
- Ecological Flourishing = Even if specific models or approaches fail, the engineering paradigm has stabilized, and the system will continue evolving
Transformer might be replaced, but the direction of "learning patterns from large-scale data" won't change; specific reinforcement learning algorithms will evolve, but the path of "letting AI explore and optimize itself" has already been validated. Just as the double helix may not be the only possible structure for heredity, once the idea of encoding and replicating genetic information emerged, it became irreversible.
The Complete Meaning of Accelerated Computing #
Jensen Huang's "accelerated computing" captures the surface: computational power has become "the means of production for intelligence"—like land in the agricultural age, machines in the industrial age.
But the deeper change is: computers have upgraded from machines that "execute known programs" to factories that "manufacture better programs".
The complete formula is:
Compute (physical limits) × Algorithms (search efficiency) × Data/Environment (signal quality) × Tool Interfaces (available action space) → Effective intelligence accumulated per unit time
This is the underlying principle of "capabilities no longer linearly constrained by human labor input." It's not just acceleration, but a transformation of the production mode itself—from manual crafting to automated generation.
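The formula's multiplicative structure can be shown in a minimal sketch, with hypothetical unit-free factors:

```python
def effective_intelligence(compute, algo_efficiency, signal_quality, action_space):
    """Multiplicative form of the formula: zero out any one factor and
    output collapses; double any one factor and output doubles."""
    return compute * algo_efficiency * signal_quality * action_space

base = effective_intelligence(1.0, 1.0, 1.0, 1.0)
assert effective_intelligence(2.0, 1.0, 1.0, 1.0) == 2 * base  # one factor doubled
assert effective_intelligence(2.0, 2.0, 1.0, 1.0) == 4 * base  # gains multiply
assert effective_intelligence(2.0, 2.0, 0.0, 1.0) == 0.0       # any zero kills output
```

The design point of a product (rather than a sum) is that progress on one factor amplifies progress on the others, and a bottleneck in any one factor caps the whole.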
AGI Is Not God, But Productivity #
Judging from the actual impact of AI releases to date, they function above all as tools. The eventual AGI won't necessarily be omnipotent like a god, but it will decisively outperform ordinary humans on specific tasks, much as autonomous driving compares to human drivers, or AlphaGo to Ke Jie. The more judgment and creativity a person has, the more powerful these tools make them.
Silicon Valley's definition of AGI isn't about whether it has "consciousness"; that question is too philosophical and depends on how consciousness is defined in the first place. What they measure is productivity.
The observable metrics aren't about "demo showmanship," but rather:
- Continuous decline in per-inference cost ($/call)
- Rising task completion rates and automation rates
- Increasing "unattended operation ratio" in real workflows
- Improving tool invocation success rates
- Better end-to-end business KPIs (higher revenue, fewer work hours, lower defect rates)
Improvement in these metrics proves compounding is happening, not one-time hype.
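As a sketch of how one of these metrics could be monitored, the toy check below distinguishes a compounding (multiplicative) decline in per-call cost from a one-time price cut. The quarterly figures and the tolerance are invented for illustration.

```python
def is_compounding_decline(costs, tolerance=0.15):
    """True if quarter-over-quarter cost ratios are roughly constant and
    below 1, i.e. a steady multiplicative decline, not a single jump."""
    ratios = [b / a for a, b in zip(costs, costs[1:])]
    mean = sum(ratios) / len(ratios)
    steady = all(abs(r - mean) <= tolerance for r in ratios)
    return mean < 1.0 and steady

compounding = [10.0, 7.0, 4.9, 3.4, 2.4]  # ~30% cheaper each quarter
one_off     = [10.0, 5.0, 5.0, 5.1, 5.0]  # single price cut, then flat

print(is_compounding_decline(compounding), is_compounding_decline(one_off))
# → True False
```

A steady ratio below 1 is the numerical signature of compounding; a one-time improvement shows up as a single outlier ratio followed by flat quarters.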
Conclusion: The Paradigm Has Shifted, Prosperity Is Pre-Ordained #
AI's essence isn't "bigger models," but stronger "automated search→learn→act→collect feedback" compounding loops.
Even if a specific technical route reaches its ceiling, the three pillars of optimization power, representation learning, and closed-loop integration continue advancing. The ecosystem will evolve like the Cambrian: individuals may fail, but the "DNA" (training and engineering paradigms) is in place.
This isn't faith, but an already-occurred fact: the mode of intelligence production has changed. From manual manufacturing to self-replication, from linear growth to exponential compounding, from being constrained by human labor to being limited only by compute and capital.
Routes may vary, compounding remains constant. The paradigm has shifted, prosperity is pre-ordained.