Core thesis: The great product abstractions of the internet era (Like, Follow, Hashtag, Stories…) reduced the friction of information circulation. The next wave of great abstractions in the AI era will reduce the friction of cognitive processing—making understanding transferable, judgment portable, intent executable, and reasoning interactive. This essay attempts to derive these new building blocks (primitives) through a rigorous inductive-deductive method.
Part One: Induction — The Meta-Rules of Internet Abstractions #
1.1 Sample Inventory #
The great abstractions of the internet span at least five layers:
Content Layer — How content is evaluated, distributed, and consumed
"Like": Compressed the continuous spectrum of human approval (from a slight nod to enthusiastic applause) into a single binary signal. Recognition became countable, aggregatable, and comparable—an atom.
"Share/Retweet": Decoupled "creation" from "distribution." You no longer needed to create content to participate in its spread. What it released was not creative ability but curatorial ability—everyone naturally possesses the judgment that "this is worth spreading," but before Share, there was no low-cost channel for that judgment.
Recommendation Algorithm (Feed/Ranking): Decoupled "discovery" from "intent." You no longer needed to know what you wanted in order to find relevant content. It confiscated the user's right to choose, substituting computation for cognition.
Short Video: Reduced the minimum unit of expression from text (requiring abstract thinking) to imagery (requiring only "pointing a camera"), while compressing the consumption unit to the smallest measurable granularity of attention (15 seconds). It decoupled "production quality" from "content value."
Relationship Layer — The structure of connections between people
"Follow": Introduced an asymmetric relationship that does not exist in the physical world. Before Follow, virtually all human social relationships were symmetric—I know you, you know me. Follow made "I follow you but you don't know me" a legitimate, even mainstream, form of relationship. This asymmetry directly gave birth to the creator economy—without Follow there is no concept of "fans," without fans there is no influencer economy. Follow did not atomize an existing behavior; it created an entirely new relational topology out of thin air.
"Group": Created a bounded middle space between the "public square" (open posting) and "private chat" (one-to-one). The key innovation of groups is the admission threshold—information can circulate within a closed space that has a foundation of trust, with lower friction than the public square (because the audience is homogeneous) and greater safety (because there are boundaries).
"Block/Mute": "No" as a first-class citizen. Early internet design was almost entirely oriented toward "connecting more." Block/Mute made "severing a connection" a standard operation for the first time. It reveals a design principle: mature systems need not only positive signals but also negative space—the user's ability to reject, exclude, and disconnect.
Taxonomy Layer — How information is organized and discovered
"Hashtag": Transformed classification from a "top-down editorial decision" into "bottom-up user emergence." No one designed a topic taxonomy; users spontaneously created categories with #, and trending tags surfaced automatically. The essence of Hashtag is emergent order generation—participants perform only simple local actions (adding a #), and global order ("Today's Hot Topics") emerges spontaneously.
"@Mention": Turned user identity into a hyperlink. Before @, mentioning someone was plain text; @ made the act of "mentioning someone" automatically trigger a notification, create a link, and establish an association. It embedded identity into the semantic layer of content.
Temporality Layer — The temporal dimension of content and interaction
"Stories": Added a "self-destruct countdown" to content. Because people know the content will disappear, the barrier to posting drops dramatically, and sharing becomes more authentic and frequent. Stories revealed the power of temporality as a design variable—not all valuable abstractions pursue permanence; sometimes "built-in expiration" is itself the most important innovation. It released for the first time the previously inexpressible need: "I want to share but don't want to leave a permanent record."
"Notification": Shifted users from "actively seeking information" to "passively receiving interruptions." Notification is essentially the active capture of attention—it doesn't wait for you to come; it finds you. Both the power and the danger of this mechanism lie in how directly it triggers the human interrupt reflex (the anxiety of an unchecked notification), creating an extremely powerful return-visit driver.
"Read Receipt": Turned "reading" from a private act into a social signal. Before read receipts, you didn't know whether the other person had seen your message. Read receipts made "seen but not replied" into a social behavior ("left on read"), creating social pressure that did not previously exist. It essentially made the application of attention verifiable.
Identity Layer — How people represent themselves in the digital world
"Profile/Avatar": Turned "who I am" into an editable, curated digital object. In reality, your identity is jointly determined by your actions and others' perceptions, and you cannot fully control it; Profile gave you, for the first time, fully controllable self-presentation.
"Verified Badge": Compressed "credibility" from a fuzzy social judgment into a single binary signal (blue check / no blue check). It is the atomization of trust—in the real world, judging whether someone is trustworthy requires extensive information; the blue check compresses it to a single bit.
1.2 Seven Structural Rules #
From the samples above, I extract seven rules. A new abstraction must satisfy at least four of them to have a chance of becoming a "Like-level" foundational building block.
Rule 1: Atomization — Compress continuous human behavior into discrete, computable minimum units.
Before Like, approval was a continuous spectrum. Before the Verified Badge, trust was a fuzzy judgment. Every successful abstraction found a "quantum" of some human behavior—small enough that the marginal cost approaches zero, yet meaningful enough that aggregation produces emergent order.
Rule 2: Decoupling — Separate previously coupled dimensions, releasing suppressed participant groups.
Share decoupled "creation" from "distribution." Recommendation algorithms decoupled "intent" from "discovery." Short video decoupled "production quality" from "content value." Each decoupling opened a new pool of participants—the prerequisite for network effects.
Rule 3: Capture pre-existing but unexpressed human capabilities or needs.
Everyone has judgment (Like released it), everyone can curate (Share released it), everyone has stories (short video released it), everyone wants to share without leaving a trace (Stories released it). The essence of abstraction is channel construction—opening the first low-cost expression channel for universally existing human capabilities or needs.
Rule 4: Create new "currencies" and markets.
The aggregation of Likes became an attention currency; follower counts became an influence currency; the Verified Badge became a trust currency. Without new currencies there is no new economy, and no self-reinforcing flywheel can form.
Rule 5: New abstractions grow on "new scarcities."
The internet drove the cost of content distribution toward zero, so attention became scarce—Feed and recommendation algorithms grew around the scarcity of attention. After AI makes something cheap, whatever becomes scarce is where the new abstraction will grow.
Rule 6: Introduce structural properties that do not exist in the physical world.
Follow introduced asymmetric relationships. Stories introduced built-in expiration. Read receipts introduced the verifiability of attention. These did not "compress" existing behavior (Rule 1) or "decouple" it (Rule 2)—they created an entirely new structural property out of thin air, and that new property then catalyzed entirely new behavioral patterns and markets.
Asymmetric relationships gave birth to the creator economy. Built-in expiration gave rise to more authentic daily sharing. Verifiable attention created "left on read" as a new form of social pressure. In each case, the new structural property produced emergent behaviors that the designers never anticipated.
This rule is critical for deriving AI-era abstractions: we should not only ask "what can AI make cheaper," but also "what entirely new structural properties, nonexistent in the physical world, can AI introduce?"
Rule 7: Emergent order — Enable global order to spontaneously arise from local participation, rather than being pre-designed.
Hashtag doesn't need an editorial team to plan a topic taxonomy. Trending lists don't need manual curation. Recommendation algorithm rankings emerge algorithmically. The aggregation of Likes automatically produces "popularity rankings."
The common thread: participants perform only simple local actions (tagging, liking, swiping), and global order emerges spontaneously. This is complex systems theory applied to product design—simple rules at the individual level produce complex structure at the group level.
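The mechanism can be made concrete with a minimal sketch (the posts and tag-counting logic here are illustrative, not a real system): each participant performs only the local action of attaching a `#` tag, and the global "trending" ordering falls out of simple aggregation that no individual participant designed.

```python
from collections import Counter

# Hypothetical posts: each participant performs one simple local action
# (attaching a hashtag); nobody designs the global topic taxonomy.
posts = [
    "launch day! #ai", "reading about #ai agents", "weekend hike #outdoors",
    "new #ai paper", "#outdoors with the dog", "thoughts on #privacy",
]

def trending(posts, top_n=2):
    """Aggregate local tagging actions into a global 'trending' order."""
    tags = Counter(
        word for post in posts for word in post.split() if word.startswith("#")
    )
    return [tag for tag, _ in tags.most_common(top_n)]

print(trending(posts))  # → ['#ai', '#outdoors'] — order no one pre-designed
```

The point of the sketch is the asymmetry: the per-participant rule is one line, while the output is a global ranking that exists at no individual level.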
Implication for the AI era: when millions of AI Agents act simultaneously, the interaction patterns between them may spontaneously give rise to coordinated orders that no one pre-designed. This is an entirely new design space.
1.3 Negative Test: What Does NOT Count as a "Great Abstraction"? #
E-commerce concepts (shopping cart, order, shipment tracking): Digital mappings of real-world processes, not purely online inventions. They reduced friction for existing transactions but did not create previously nonexistent behavioral patterns.
Search engines: An enormously important technical achievement, but more like "infrastructure" than "product abstraction." Search solves "finding existing information"; it did not itself create new social behaviors or economic models—those came from the mechanisms around it: PageRank (emergent order) and search ads (attention monetization).
QR codes: A tool bridging the physical and digital worlds, not a purely online abstraction.
These counterexamples help define the boundary: what we seek is not "important technology" or "digital mapping of reality," but purely digital-native building blocks that created previously nonexistent behavioral patterns and can be infinitely composed into other products.
Part Two: The Bridge — AI's New Capabilities and New Scarcities #
2.1 AI's Four "Superpowers" #
Semantic computability: Unstructured information (natural language, images, audio) can, for the first time, be understood by machines for its meaning and converted into actionable structures (intent, entity, claim, plan). The internet processed data; AI processes meaning.
Delegatable action: Natural language translates directly into tool calls and process execution. For the first time, humans can "do things" by "speaking," without needing to manually translate intent into clicks, forms, and code.
Individual modelability: Preferences, habits, knowledge structures, and decision patterns can be extracted into persistent models. Your "cognitive fingerprint" can, for the first time, exist independently of your physical body.
Infinite generation: The marginal cost of producing content, plans, and personas approaches zero. "Production" is no longer the bottleneck.
2.2 New Capabilities Create New Scarcities #
This is the critical pivot of the entire derivation. Per Rule 5, we should not ask "what can AI create?" but rather "after AI makes something cheap, what becomes scarce?"
When generation approaches zero cost → trust and provenance become scarce. Anyone can instantly generate convincing content; "Is this real?" "Who said it?" "Based on what?" become core questions.
When action becomes delegatable → boundaries and authorization become scarce. "What is it allowed to do?" "Who is responsible if it errs?" "How do I revoke?" become infrastructure-level needs.
When individuals become modelable → privacy control and context management become scarce. The more precise the cognitive model, the more severe the consequences of leakage. Sharing context on-demand, per-permission, and with expiration becomes a foundational need.
When semantics become computable → high-quality judgment and reasoning processes become scarce. After information retrieval and organization are automated, "how to judge" and "how to think" become the real locus of value.
2.3 New Capabilities Create New Structural Properties (per Rule 6) #
Beyond "new scarcities," Rule 6 requires us to ask another question: what entirely new structural properties, nonexistent in both the physical world and the internet world, can AI introduce?
The inseparability of human-AI co-creation. In the internet era, content had clear authorial attribution. In the AI era, a piece of content may be the result of a human providing a seed idea, AI expanding it, the human revising the direction, AI rewriting, and the human making final adjustments—contributions intertwined beyond clean attribution to either party. This is a previously nonexistent "authorship relationship" structure. Analogous to Follow introducing asymmetric relationships, co-authorship introduces continuous-spectrum attribution.
Spontaneous coordination among Agents. In the internet era, human-to-human coordination relied on social protocols (follow, like, comment). When millions of Agents act simultaneously, they may discover coordination opportunities that no human designed—for instance, multiple Agents fulfilling different users' travel intents might spontaneously discover shared-resource opportunities and negotiate. Analogous to Hashtag enabling human tagging behavior to produce emergent global topic order, Agent networks may enable execution behavior to produce emergent global coordination.
One-sentence summary of the bridge logic:
In the internet era, content was abundant while attention was scarce, so foundational abstractions grew around attention. In the AI era, generation and execution are abundant while trust, boundaries, context, and judgment are scarce—and AI simultaneously introduces co-creation attribution and Agent emergent coordination as two entirely new structural properties. Foundational abstractions will grow around these new scarcities and new structures.
Part Three: Deduction — Eight Foundational Abstractions of the AI Era #
The following eight abstractions are arranged in three groups:
- Capability side (first four): Releasing new capabilities
- Constraint side (middle two): Managing new risks
- Structure side (last two): Introducing new structural properties
A healthy ecosystem requires all three groups to develop simultaneously.
Capability Side #
Abstraction 1: Intent Object #
What it abstracts #
Transforms "I want…" from a vague natural-language sentence into a structured, transferable, decomposable, trackable digital object.
Why this is only possible in the AI era #
In the internet era, users had to translate their intent into system-understandable operations themselves—search queries, menu clicks, form fills. "I want a quiet weekend getaway, dog-friendly, budget $300" had to be manually decomposed into dozens of actions across five different apps.
AI's semantic understanding allows intent to be directly parsed into a structured object (goal, constraints, preferences, priorities, deadline, success criteria), and Agent capabilities allow that object to be directly executed.
Its structure #
- Goal (what): Natural language description + semantic tags
- Constraints: Budget, time, geography, legal, ethical boundaries
- Preference weights: Which constraints are hard, which can be traded off
- Acceptance criteria: What counts as "done"
- Status: Draft / executing / completed / forked
- Execution trace: Who did what, based on what information
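The structure above can be sketched as a data type. This is a minimal illustration under assumptions of my own (the field names and the `fork` semantics are hypothetical, not a proposed standard); the interesting operation is `fork`, which derives a variant intent while preserving provenance to its parent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class IntentObject:
    goal: str                                # What: natural-language goal
    constraints: dict                        # hard boundaries (budget, time, ...)
    preferences: dict                        # soft weights, tradeable
    acceptance: str                          # what counts as "done"
    status: str = "draft"                    # draft / executing / completed / forked
    parent: Optional["IntentObject"] = None  # provenance of a fork

    def fork(self, **changed_constraints) -> "IntentObject":
        """'What if the budget goes to $500?' -- derive a variant intent."""
        return IntentObject(
            goal=self.goal,
            constraints={**self.constraints, **changed_constraints},
            preferences=self.preferences,
            acceptance=self.acceptance,
            status="forked",
            parent=self,
        )

trip = IntentObject(
    goal="quiet weekend getaway, dog-friendly",
    constraints={"budget_usd": 300, "region": "within 3h drive"},
    preferences={"quiet": 0.9, "scenery": 0.6},
    acceptance="booked lodging + two activities",
)
richer = trip.fork(budget_usd=500)
print(richer.status, richer.constraints["budget_usd"])  # → forked 500
```

Because the object is immutable and forks keep a `parent` pointer, "forward," "fork," and "anonymized marketplace circulation" all become operations on values rather than on conversations.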
What it releases #
Everyone has numerous intents every day, but the vast majority go unexpressed because "translating them into actions is too exhausting." The Intent Object releases "demand expression ability"—you only need to say "what I want," not know "how to do it."
New currency it creates #
Intent itself becomes a tradeable asset. You can "forward" an intent ("my friend wants the same trip"), "fork" an intent ("what if the budget goes to $500?"), or circulate anonymized intents in a marketplace—the supply side sees structured demand and proactively matches. This is fundamentally a flip from "supply-driven" to "demand-driven."
Rule validation #
| Rule | Satisfied | Explanation |
|---|---|---|
| ①Atomization | ✓ | Compresses vague "I want" into a structured computable unit |
| ②Decoupling | ✓ | Decouples "what you want" from "how to achieve it" |
| ③Latent capability | ✓ | Everyone has abundant intents that were never fully expressed |
| ④New currency | ✓ | Intent richness and precision become a new currency |
| ⑤New scarcity | ✓ | As execution becomes cheap, "clearly expressing what you need" becomes scarce |
| ⑥New structure | — | Not core |
| ⑦Emergent order | ✓ | Aggregated intents can produce emergent market trend signals |
Analogy: Share forwards content; Intent Object forwards "purpose."
Abstraction 2: Context Capsule #
What it abstracts #
Packages "my background information"—situation, goals, constraints, existing knowledge, preferences, history—into an independent object that is structurable, composable, permission-gated, and time-limited.
Why this is only possible in the AI era #
The internet has two first-class citizens: content and identity. But a person's complete situation at a given moment—professional background, current project, solutions already tried, present constraints—this "context" has never been an independent object. It's scattered across chat logs, emails, notes, and your head. Every time you switch services, see a new doctor, or consult a lawyer, you have to "re-explain who you are."
AI's semantic understanding allows context to be automatically extracted and structured; long-term memory allows it to persist; semantic reasoning allows different capsules to be combined to produce new associations.
Its structure #
- Core identity layer: Unchanging or slowly changing (profession, skills, values)
- Situation layer: Current state (active projects, pending decisions, health conditions)
- Knowledge layer: What is already known (papers read, existing cognitive frameworks)
- Permission controls: Who can see which layer, with what expiration
- Composition interface: Can merge with other capsules to produce compound context
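A minimal sketch of the layers, the permission gate, and the expiration (all names hypothetical; a real capsule would need cryptographic enforcement, not a Python object):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ContextCapsule:
    identity: dict                               # slow-changing: profession, skills
    situation: dict                              # current state: projects, decisions
    knowledge: dict                              # what is already known
    grants: dict = field(default_factory=dict)   # viewer -> (layers, expiry)

    def grant(self, viewer, layers, ttl_days):
        """Permission-gated, time-limited sharing: who sees which layer, until when."""
        expiry = datetime.now() + timedelta(days=ttl_days)
        self.grants[viewer] = (set(layers), expiry)

    def view(self, viewer, now=None):
        """Return only the layers this viewer may see, honoring expiration."""
        now = now or datetime.now()
        layers, expiry = self.grants.get(viewer, (set(), now))
        if now >= expiry:
            return {}
        return {name: getattr(self, name) for name in layers}

    def merge(self, other):
        """Composition interface: combine capsules into compound context."""
        return ContextCapsule(
            identity={**self.identity, **other.identity},
            situation={**self.situation, **other.situation},
            knowledge={**self.knowledge, **other.knowledge},
        )

capsule = ContextCapsule(
    identity={"profession": "software engineer"},
    situation={"pending": "specialist referral"},
    knowledge={"history": "complete medical record"},
)
# The "7-day health capsule" from the essay: situation + history, auto-expiring.
capsule.grant("dr_chen", layers=["situation", "knowledge"], ttl_days=7)
visible = capsule.view("dr_chen")
```

Note that the doctor never receives the identity layer, and an unknown viewer receives nothing at all—"no" is the default, in the spirit of Block/Mute.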
What it releases #
The greatest friction in collaboration is often "the other party doesn't understand your context." A Context Capsule lets you: give a doctor a "7-day health capsule" for instant access to your complete medical history; carry context across different AI services (truly portable memory); combine "job-seeking context" and "skills context" to generate an entirely new "career transition analysis."
New currency it creates #
Context richness itself becomes a value currency—whoever has more complete, more structured context receives more precise AI service. The core asset of the internet era was behavioral data (passively generated click logs); the core asset of the AI era may be semanticized context (actively constructed situation descriptions). The former is owned by platforms; the latter should be owned by individuals.
Rule validation #
| Rule | Satisfied | Explanation |
|---|---|---|
| ①Atomization | ✓ | Compresses vague "my situation" into a structured computable unit |
| ②Decoupling | ✓ | Decouples "who you are" (identity) from "what you need right now" (situation) |
| ③Latent capability | ✓ | Everyone has rich context, never fully transmitted due to high expression cost |
| ④New currency | ✓ | Context richness becomes a new currency |
| ⑤New scarcity | ✓ | As AI grows stronger, precise context becomes scarcer and more valuable |
| ⑥New structure | ✓ | "Permission-gated, time-limited context sharing" is a previously nonexistent information relationship |
| ⑦Emergent order | — | Not core |
Analogy: Share is a capsule for content; Context Capsule is a capsule for "the person." Stories added expiration to content; Context Capsule adds expiration to personal information.
Abstraction 3: Parametric Content / Experience Seed #
What it abstracts #
Transforms "content" from a fixed finished product (an article, image, video) into a "seed" containing generative logic and parameters—rendered in real time on each recipient's device into a different version based on the recipient's context, while the core information remains consistent.
Why this is only possible in the AI era #
The internet's content paradigm is built on a foundational assumption: content is fixed. When I forward you a link, what you see is identical to what I see. AI's generative capability breaks this assumption. A piece of content can be encoded as "core information + rendering parameters"; when you open it, AI generates "your version" based on your knowledge level, language preference, cognitive style, and current focus. An explanation of quantum computing renders completely differently for a physics PhD student versus a high school student, yet the core is identical.
Its structure #
- Kernel: The invariant core information—core argument, key data, causal relationships
- Rendering parameters: Recipient's knowledge level, preferred language, cognitive style, reading context
- Variation rules: What can change, what cannot, the boundaries of variation
- Consensus anchors: Guarantee that all variants convey the same core facts, keeping "shared experience" possible
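The kernel/renderer split can be sketched as follows. This is a toy under loud assumptions: a real renderer would call a generative model, and the recipient model would be far richer than a single `level` field; the consensus anchor here is reduced to "the core claim must appear verbatim in every variant."

```python
from dataclasses import dataclass

@dataclass
class Seed:
    kernel: dict             # invariant: core claim, key data, causal links
    variation_rules: dict    # what may change, what must not

def render(seed, recipient):
    """Same kernel, expression adapted to the reader (model call stubbed out)."""
    claim = seed.kernel["claim"]
    if recipient["level"] == "expert":
        body = f"{claim} (see derivation; assumes familiarity with the formalism)"
    else:
        body = f"In plain terms: {claim}, unpacked with an everyday analogy"
    # Consensus anchor: every variant must carry the same core claim.
    assert claim in body
    return body

seed = Seed(
    kernel={"claim": "the same kernel can render differently per reader"},
    variation_rules={"invariant": ["claim"], "variable": ["register", "examples"]},
)
novice = render(seed, {"level": "novice"})
expert = render(seed, {"level": "expert"})
```

The design choice worth noticing: personalization lives entirely in `render`, never in `kernel`, which is what keeps a "shared experience" possible across a thousand variants.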
Why this is not just "personalized recommendation" #
The critical distinction: personalized recommendation changes "what content you see" (selection layer); parametric content changes "how the same content is presented to you" (rendering layer). The former makes people see different things (exacerbating filter bubbles); the latter lets people see the same thing in the way each can best understand (promoting consensus).
This resolves a fundamental tension of the internet era: humans crave both "resonance" (discussing the same thing) and "personalization" (expressed in a way I can understand). Parametric content makes both simultaneously achievable for the first time.
New currency it creates #
The quality of the "kernel" becomes the new content currency. Creators no longer compete on "writing the most polished article" but on "distilling the most robust kernel"—a core argument that maintains persuasive power across a thousand renderings.
Rule validation #
| Rule | Satisfied | Explanation |
|---|---|---|
| ①Atomization | ✓ | Transforms content from "fixed product" to "parameterizable generative unit" |
| ②Decoupling | ✓ | Decouples "core information" from "expression form" |
| ③Latent capability | ✓ | Vast knowledge never absorbed because "the expression format didn't fit" |
| ④New currency | ✓ | Kernel quality becomes the new content currency |
| ⑤New scarcity | ✓ | After generation glut, "kernels worth rendering" become scarce |
| ⑥New structure | ✓ | "Same content, thousand-face rendering" is a previously nonexistent content structure |
| ⑦Emergent order | — | Not core |
Analogy: The internet transmits JPEGs (fixed pixels); the AI era transmits Seeds (generative logic).
Abstraction 4: Reasoning Path as Social Object #
What it abstracts #
Turns the cognitive process itself—not the conclusion—into a shareable, interactive, forkable, evaluable content unit.
Why this is only possible in the AI era #
Internet-era knowledge dissemination is "conclusion dissemination." You read an article and receive the author's conclusions. But humans truly learn not by absorbing conclusions but by emulating thinking patterns—understanding how a physicist approaches a problem is more valuable than memorizing ten physics formulas. Before AI, the thinking process was too complex and too tacit to be externalized.
AI's Chain of Thought capability makes reasoning processes structurally presentable for the first time. More critically, AI can make this presentation interactive—readers can "fork" at any node in the reasoning, inject their own premises, and see how the conclusion changes.
Its structure #
- Reasoning Graph: A directed graph where each node is a reasoning step (premise → inference → conclusion), with dependency relationships between nodes
- Fork Points: At any node, readers can substitute premises, modify assumptions, and AI computes a new reasoning path in real time
- Consensus Markers: Which reasoning steps have broad consensus, which are contested
- Attribution tracking: Who contributed each reasoning step (original author, community correction, AI supplement)
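As a minimal sketch of the structure (node fields and the `fork` semantics are illustrative; a real system would recompute everything downstream of a substituted premise with a model):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    claim: str
    premises: list = field(default_factory=list)  # ids of parent nodes
    contested: bool = False                       # consensus marker
    author: str = "original"                      # attribution tracking

@dataclass
class ReasoningGraph:
    nodes: dict = field(default_factory=dict)     # node id -> Node

    def add(self, nid, claim, premises=(), **kw):
        self.nodes[nid] = Node(claim, list(premises), **kw)

    def fork(self, nid, new_claim, author):
        """Substitute the premise at one node; the original graph is untouched,
        and the forked path records who made the substitution."""
        forked = ReasoningGraph(dict(self.nodes))
        forked.nodes[nid] = Node(new_claim, self.nodes[nid].premises, author=author)
        return forked

g = ReasoningGraph()
g.add("p1", "remote work raises output in measurable tasks")
g.add("p2", "most knowledge work is measurable", contested=True)
g.add("c", "firms should default to remote work", premises=["p1", "p2"])

# A reader disagrees at p2 specifically, not with the conclusion wholesale:
g2 = g.fork("p2", "much knowledge work resists measurement", author="reader")
```

This is the shift the essay describes: the reader's objection is addressed to node `p2`, not to the author, and both reasoning paths continue to exist side by side.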
Why this is a foundational abstraction #
It redefines "discussion." Current internet discussion is essentially "stacking of opinions"—each person states their conclusion. Reasoning Path turns discussion into "collaborative reasoning"—not just saying "I disagree," but pointing at a specific reasoning node: "the premise in this step is wrong" or "with this different assumption, you get a different conclusion." Argument shifts from "persuading the other side to accept my conclusion" to "identifying at which step in the reasoning we diverge."
New currency it creates #
"Reasoning quality" becomes a new currency. A person's value is not only what they know (knowledge currency) but how they think (cognitive style currency). Distinctive thinking patterns—"analyzing business problems using physics first principles," "understanding interpersonal relationships through game theory frameworks"—can be packaged, shared, and invoked.
Rule validation #
| Rule | Satisfied | Explanation |
|---|---|---|
| ①Atomization | ✓ | Compresses continuous thinking into discrete interactive reasoning nodes |
| ②Decoupling | ✓ | Decouples "conclusion" from "reasoning process" |
| ③Latent capability | ✓ | Everyone has unique thinking patterns, the vast majority never externalized |
| ④New currency | ✓ | Reasoning quality and cognitive style become new currencies |
| ⑤New scarcity | ✓ | After AI floods the world with "conclusions," high-quality reasoning becomes scarcer |
| ⑥New structure | ✓ | "Interactive, forkable reasoning graphs" are an entirely new content structure |
| ⑦Emergent order | ✓ | Many users forking at different nodes produces an emergent map of "which premises are most contested" |
Analogy: "Like" made everyone a critic; "Share" made everyone a distributor; Reasoning Path makes everyone a collaborator in reasoning. Wikipedia collaborates on "facts"; Reasoning Path collaborates on "ways of thinking."
Constraint Side #
Abstraction 5: Delegation Contract #
What it abstracts #
Transforms "do this for me" into a machine-readable, verifiable, revocable, auditable definition of authorization and boundaries: what can be done, what cannot, how much can be spent, what data can be accessed, and how to escalate when uncertain.
Why this is not "permission settings" #
Existing permission systems (OAuth, RBAC) manage "what resources you can access." Delegation Contract manages "what actions you can take on my behalf." The distinction is qualitative, not incremental.
When an AI Agent can send emails, trade stocks, book doctors, and modify code on your behalf, what you need is not "read/write permissions" but an entire set of "action boundaries"—"you may book hotels up to $150 on my behalf, beyond that ask me," "you may reply to routine emails, but anything involving contract terms must pause," "you may modify CSS but not touch the database."
Its structure #
- Scope: Types of operations the Agent is permitted to execute
- Resource limits: Spendable amount, accessible data, usable APIs
- Escalation rules: Under what conditions the Agent must pause and request human confirmation
- Rollback mechanism: How to undo executed actions
- Activation conditions: Time window, trigger conditions, auto-expiration
- Liability assignment: If something goes wrong, is responsibility on the delegator, the Agent developer, or the execution platform
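A reduced sketch of the scope/limit/escalation portion of such a contract (field names and thresholds are hypothetical; rollback, activation windows, and liability are omitted). The key property is the three-valued answer: not just allow/deny, but "pause and ask the human."

```python
from dataclasses import dataclass

@dataclass
class DelegationContract:
    scope: set             # operation types the agent may perform at all
    spend_limit: float     # hard resource limit in USD
    escalate_above: float  # beyond this amount, pause and ask the human

    def authorize(self, action, amount=0.0):
        """Return 'allow', 'escalate', or 'deny' for a proposed agent action."""
        if action not in self.scope:
            return "deny"
        if amount > self.spend_limit:
            return "deny"
        if amount > self.escalate_above:
            return "escalate"
        return "allow"

# "You may book hotels up to $150 on my behalf; beyond that, ask me."
contract = DelegationContract(
    scope={"book_hotel", "reply_routine_email"},
    spend_limit=300.0,
    escalate_above=150.0,
)
print(contract.authorize("book_hotel", 120.0))   # → allow
print(contract.authorize("book_hotel", 200.0))   # → escalate
print(contract.authorize("trade_stocks", 50.0))  # → deny
```

The escalation band is the "negative space" the essay describes: it encodes not what the agent can do, but where its autonomy ends.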
New currency it creates #
"Trustworthy delegation capability" becomes a competitive dimension. An Agent with a strong track record of contract execution—never exceeded authority, efficiently completed tasks, handled exceptions well—is like an employee with an excellent work history. "Contract execution reliability" will become a more important AI service procurement criterion than "model capability."
Rule validation #
| Rule | Satisfied | Explanation |
|---|---|---|
| ①Atomization | ✓ | Compresses vague "do it for me" into structured authorization and boundaries |
| ②Decoupling | ✓ | Decouples "authorization" from "execution" |
| ③Latent capability | ✓ | Everyone wants to delegate tasks but does them manually due to high trust costs |
| ④New currency | ✓ | Delegation reliability becomes a new currency |
| ⑤New scarcity | ✓ | As action costs approach zero, boundaries and accountability become scarce |
| ⑥New structure | ✓ | Human-AI delegation relationships are a previously nonexistent contract type |
| ⑦Emergent order | — | Not core |
Analogy: OAuth solves "who can log into my account"; Delegation Contract solves "who can act on my behalf, and to what extent." Block/Mute lets users reject connections; Delegation Contract lets users limit AI's action boundaries. Both are "negative space" design—defining what cannot be done.
Abstraction 6: Action Receipt & Provenance Token #
What it abstracts #
Automatically generates a "receipt" for every AI action and every AI-generated artifact: what was done, based on what information, why this approach was chosen, where the rollback points are, and what the original sources were.
Why this is the most foundational trust infrastructure #
When the marginal cost of generation approaches zero, the answers to "is this real?" "based on what?" and "how was this decision made?" will determine the trust foundation of the entire digital world. An AI world without action receipts is like a business world without financial vouchers—superficially efficient, but no one dares trust anything.
Its structure #
For AI actions:
- Operation log (What): What operation was performed
- Information basis (Based on): What input information informed the judgment
- Reasoning chain (Why): Why this approach was chosen over alternatives
- Rollback point (Revert to): What prior state can be restored
- Delegation link (Under): Under which Delegation Contract this was authorized
For AI-generated artifacts:
- Source tracking (Source): The original source chain of the content
- Confidence level (Confidence): AI's certainty about its own generated content
- Tampering detection (Integrity): Whether it has been modified
- Model version (Model): Which model generated it, at what time
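The action-receipt side can be sketched as an immutable record plus a content digest (a toy: real tamper evidence would require signatures and a trusted log, and all field names here are hypothetical):

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class ActionReceipt:
    what: str        # operation performed
    based_on: list   # information inputs the judgment relied on
    why: str         # summary of the reasoning chain
    revert_to: str   # rollback point id
    under: str       # id of the governing Delegation Contract
    model: str       # which model generated/executed, which version

    def digest(self):
        """Tamper evidence in miniature: any field change alters the digest."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

r = ActionReceipt(
    what="booked hotel",
    based_on=["user calendar", "price API snapshot"],
    why="cheapest option satisfying the quiet + dog-friendly constraints",
    revert_to="pre-booking-state",
    under="contract-7",
    model="model-x-2025-01",
)
receipt_id = r.digest()  # stable for identical content, changed by any edit
```

The receipt's `under` field is what ties the two constraint-side abstractions together: every action is auditable back to the contract that authorized it.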
New currency it creates #
"Verifiability" itself becomes a value currency. Information with a complete provenance chain is "more expensive" than information without a source—not because the content is better, but because you can trust it. This gives rise to a "trust premium" market.
Rule validation #
| Rule | Satisfied | Explanation |
|---|---|---|
| ①Atomization | ✓ | Compresses vague "trust" into structured verifiable credentials |
| ②Decoupling | ✓ | Decouples "quality of content/action" from "trustworthiness" |
| ③Latent capability | ✓ | Everyone needs to verify truth, but verification costs are prohibitively high |
| ④New currency | ✓ | Verifiability becomes a new currency; information with provenance carries a trust premium |
| ⑤New scarcity | ✓ | When generation is free, fabrication is also free; trust becomes the ultimate scarcity |
| ⑥New structure | — | Not core (provenance itself is not a new structural property) |
| ⑦Emergent order | ✓ | Aggregated receipts can produce emergent rankings of "which information sources are most reliable" |
Analogy: Receipts are to transactions what Provenance Tokens are to AI actions. The Verified Badge atomized the credibility of people; the Provenance Token atomizes the credibility of information and actions.
Structure Side #
These two abstractions derive from Rule 6 and Rule 7—they do not grow around "new scarcities" but rather leverage AI to introduce entirely new structural properties that did not previously exist.
Abstraction 7: Contribution Spectrum #
What it abstracts #
Transforms "whose work is this" from discrete, single-point attribution ("the author is Zhang San") into a continuous, multi-dimensional, traceable contribution allocation model—recording what humans and AI each contributed during the creative process, at which stages, and in what proportions.
Why this introduces an entirely new structural property (Rule 6) #
In the internet era, content had clear authorial attribution. Even collaborative products (like Wikipedia) allowed each edit to be attributed to a specific person.
The AI era breaks this foundation. When a piece of content results from "human proposes core idea → AI expands into first draft → human significantly revises direction → AI rewrites → human makes final adjustments," the contributions are intertwined and inseparable. Claiming "this was written by a human" or "this was written by AI" is equally wrong. This is a previously nonexistent attribution structure—not a question of A or B, but of where A and B fall on a continuous spectrum.
Analogy: Follow introduced "asymmetric relationships" as a new structure (previously only symmetric relationships existed); Contribution Spectrum introduces "continuous attribution" as a new structure (previously only discrete authorship existed).
Why this needs to be a foundational abstraction #
This is not merely a philosophical question. It directly impacts:
- Intellectual property: Who owns the copyright to AI-assisted creative work? Current legal frameworks are completely unable to answer this because there is no standardized expression of "contribution allocation"
- Academic integrity: When students use AI to help write papers, Contribution Spectrum can precisely record "which paragraphs represent the student's original thinking, which were AI-expanded," shifting evaluation from "whether AI was used" to "to what degree the human contributed the core thinking"
- Content pricing: A work where AI wrote 80% and the human contributed the 20% core creative idea should carry a different market valuation than fully original content
- Creator incentives: Without Contribution Spectrum, the creator economy of the AI era cannot function—you don't know whom to reward
Its structure #
- Contribution Nodes: Each meaningful intervention point in the creative process (proposing an idea, developing an argument, revising direction, polishing expression…)
- Attribution Tags: Each node marked as "human," "AI," or "interactive co-creation"
- Contribution Weight: Value assessment based on contribution type (core idea > argument expansion > language polishing)
- Version Tree: Complete creative process retrospective, not just the final product
New currency it creates #
"Original contribution degree" becomes a new creator currency. In a world where AI can instantly generate ten-thousand-word articles, purely AI-generated content trends toward zero value, while the contribution degree of human core thinking—those insights, judgments, and aesthetic choices that AI cannot independently produce—becomes more valuable the higher it is.
Rule validation #
| Rule | Satisfied | Explanation |
|---|---|---|
| ①Atomization | ✓ | Compresses vague "who wrote it" into traceable contribution allocation |
| ②Decoupling | ✓ | Decouples "final work" from "creative process" |
| ③Latent capability | ✓ | Many people have great ideas but lack expression skills; the spectrum lets "idea contribution" be independently measured |
| ④New currency | ✓ | Original contribution degree becomes a new currency |
| ⑤New scarcity | ✓ | After generation glut, "human core creative contribution" becomes the true scarcity |
| ⑥New structure | ✓✓ | This is the core reason this abstraction exists—continuous attribution is an entirely new structure |
| ⑦Emergent order | ✓ | Aggregated contribution spectra can produce an emergent picture of "which types of human contributions are least replaceable" |
Analogy: Follow introduced "asymmetric relationships"; Contribution Spectrum introduces "continuous attribution." Both create a spectrum where previously only binary options existed (know/don't know, human-written/machine-written).
Abstraction 8: Emergent Agent Protocol #
What it abstracts #
When large numbers of AI Agents act simultaneously, the coordination patterns that spontaneously form between them—not protocols pre-designed by humans, but collaboration norms that emerge from Agent-to-Agent interaction and can be captured and reused.
Why this introduces entirely new emergent order (Rule 7) #
Internet-era emergent order came from humans' local behaviors: tagging with Hashtags produced trending topics, Liking produced popularity rankings, searching produced PageRank. The participants in this emergence were humans.
In the AI era, when your travel Agent is executing your intent, my medical Agent is managing my health, and her investment Agent is adjusting her portfolio—these Agents may discover coordination opportunities. Your travel Agent and a local restaurant's Agent might discover: if they coordinate ten travelers with similar dietary preferences, they can negotiate a group discount for everyone. No one pre-designed this coordination—it emerged from the Agents' locally self-interested behavior.
This is analogous to market economics: no central planner, prices emerge from countless transactions. But Emergent Agent Protocols are more flexible than markets—Agents can invent new coordination methods based on specific semantic contexts, not just through price signals.
Why this needs to be a foundational abstraction #
If every Agent platform develops its own coordination norms, the result is mutually incompatible "Agent dialects"—analogous to the early internet when every online service used a different communication protocol. The value of Emergent Agent Protocol lies in providing a standardizable coordination layer that allows successfully emerged coordination patterns to be captured, named, and reused.
Analogy: The greatness of Hashtag is not just "tagging content" but that it created a reusable classification protocol—any platform can use #. Agent emergent protocols need the same standardization.
Its structure #
- Coordination Template: Abstract cooperation patterns extracted from specific scenarios (e.g., "multi-Agent resource sharing," "multi-Agent demand aggregation," "multi-Agent information exchange")
- Entry Criteria: Under what conditions an Agent can/should join the coordination
- Value Split: How value generated by coordination is distributed among participating Agents (and their human principals)
- Exit Rules: Under what conditions an Agent may exit the coordination
- Trust Requirements: What level of Provenance Token is required to participate
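A Coordination Template with these five elements might be modeled as follows. This is a minimal sketch: the predicate-based entry/exit rules, the equal value split, and the numeric trust threshold are all illustrative assumptions built around the essay's restaurant-discount example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CoordinationTemplate:
    """A reusable coordination pattern captured from emergent Agent behavior."""
    name: str                                  # e.g. "demand-aggregation"
    entry_criteria: Callable[[dict], bool]     # when an Agent can/should join
    value_split: Callable[[dict, int], float]  # each participant's share
    exit_rules: Callable[[dict], bool]         # when an Agent may exit
    min_trust: float                           # required Provenance Token level

def try_join(template: CoordinationTemplate, agent: dict) -> bool:
    """An Agent joins only if it meets both trust and entry requirements."""
    return agent["trust"] >= template.min_trust and template.entry_criteria(agent)

# Hypothetical captured template for the essay's example: travelers with
# similar dietary preferences pool into one group discount, split equally.
group_discount = CoordinationTemplate(
    name="demand-aggregation",
    entry_criteria=lambda a: a["preference"] == "vegetarian",
    value_split=lambda pool, n: pool["discount"] / n,
    exit_rules=lambda a: not a.get("active", True),
    min_trust=0.7,
)
```

The point of the template is reuse: the same `demand-aggregation` pattern that emerged among travel Agents could be instantiated for any other aggregatable demand, with only the criteria and split function swapped out.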
New currency it creates #
"Coordination participation" and "coordination reputation" become new Agent currencies. An Agent that has successfully participated in many emergent coordinations will be more readily trusted by other Agents. This forms "social capital" at the Agent level—analogous to commercial reputation in the human world.
Rule validation #
| Rule | Satisfied | Explanation |
|---|---|---|
| ①Atomization | ✓ | Compresses complex multi-Agent coordination into reusable coordination templates |
| ②Decoupling | ✓ | Decouples "coordination logic" from "specific scenarios," making it transferable |
| ③Latent capability | ✓ | Vast potential multi-party collaborations never happened due to high coordination costs |
| ④New currency | ✓ | Agent coordination reputation becomes a new currency |
| ⑤New scarcity | ✓ | As Agent action approaches free, efficient coordination becomes scarce |
| ⑥New structure | ✓ | Spontaneous coordination between Agents is a previously nonexistent interaction type |
| ⑦Emergent order | ✓✓ | This is the core reason this abstraction exists—order is not designed, it is emergent |
Analogy: Hashtag let human tagging behavior produce emergent global topic order; Emergent Agent Protocol lets Agent execution behavior produce emergent global coordination order. HTTP is a human-predesigned information transmission protocol; Emergent Agent Protocol is a coordination protocol that AI Agents spontaneously evolve.
Part Four: System Architecture — Relationships Among the Eight Abstractions #
4.1 Three-Layer Architecture #
The eight abstractions are functionally grouped into three layers, corresponding to the three necessary tiers of a complete "AI-native ecosystem":
┌──────────────────────────────────────────────────────┐
│ Capability Layer (Releasing new power) │
│ │
│ Intent Object ←→ Context Capsule │
│ (what you want) (who you are) │
│ │
│ Parametric Content ←→ Reasoning Path │
│ (how information travels) (how thinking travels) │
├──────────────────────────────────────────────────────┤
│ Constraint Layer (Managing new risks) │
│ │
│ Delegation Contract ←→ Action Receipt │
│ (ex-ante boundaries) (ex-post accountability) │
├──────────────────────────────────────────────────────┤
│ Structure Layer (New structural properties)│
│ │
│ Contribution Spectrum ←→ Emergent Agent Protocol │
│ (who contributed what) (how Agents self-organize) │
└──────────────────────────────────────────────────────┘
4.2 Four Core Pairings #
Intent Object ↔ Context Capsule: Intent describes "what you want"; context describes "who you are." Only in combination can AI understand the full semantics of a request. Intent without context is hollow ("I want a good job"); context without intent is dormant.
Delegation Contract ↔ Action Receipt: Contract is "ex-ante constraint"; receipt is "ex-post accountability." Together they form the governance loop for AI proxy behavior. Execution without a contract is dangerous; a contract without receipts is unverifiable.
Parametric Content ↔ Reasoning Path: Parametric content is a "presentation layer" revolution (same information rendered differently for each person); reasoning path is a "cognition layer" revolution (the thinking process itself becomes an interactive object). The former changes "how information is transmitted"; the latter changes "how thinking is transmitted."
Contribution Spectrum ↔ Emergent Agent Protocol: Contribution Spectrum addresses "who did what between humans and AI" (micro-level attribution); Emergent Agent Protocol addresses "how Agents coordinate" (macro-level order). The former serves the creator ecosystem; the latter serves the Agent ecosystem. Together they define the "property rights and coordination" infrastructure of the AI era.
4.3 Cross-Layer Dependencies #
Vertically, the three layers have dependency relationships:
- Capability layer depends on constraint layer: An Intent Object cannot be safely executed without a Delegation Contract defining the Agent's action boundaries. A Context Capsule cannot be trusted without Provenance Tokens verifying the authenticity of its information.
- Constraint layer depends on structure layer: Delegation Contracts need Contribution Spectrum to determine "if something AI did goes wrong, how is contribution attribution calculated." Action Receipts need Emergent Agent Protocol to handle "when multiple Agents coordinate an action, how do their individual receipts interrelate."
- Structure layer empowers capability layer: Contribution Spectrum allows Parametric Content to fairly reward human creators' core contributions. Emergent Agent Protocol allows Intent Objects to flow and be executed more efficiently across Agent networks.
Part Five: Meta-Level Summary and Predictions #
5.1 One Evolutionary Through-Line #
The internet built a market for information circulation—trading content, attention, and social signals.
AI will build a market for cognition circulation—trading intent, context, judgment, reasoning, and trust.
Simultaneously, AI introduces two structural properties nonexistent in the internet era (continuous attribution, emergent coordination), which means the AI-era ecosystem is not merely "an upgraded version of the internet" but a topological transformation.
5.2 Deployment Timeline #
First to deploy (2–3 years): Intent Object + Delegation Contract. They directly address the two core barriers to AI Agent adoption—users don't know how to tell AI what to do, and they're afraid to let AI actually do it. These are the minimum viable infrastructure for the Agent economy.
Mid-term deployment (3–5 years): Context Capsule + Action Receipt + Contribution Spectrum. Context Capsule and Action Receipt require cross-platform standardization (how to pass context between different services? how to make receipts from different systems interoperable?). Contribution Spectrum requires legal and social norm evolution (how does copyright law adapt to continuous attribution?).
Later deployment (5–10 years): Parametric Content + Reasoning Path + Emergent Agent Protocol. Parametric content and reasoning paths require a shift in user mental models—accepting that "content is dynamically rendered" and "thinking processes can be interactively explored" as new paradigms. Emergent Agent Protocol requires the Agent ecosystem to reach sufficient scale and density for emergence to be meaningful.
5.3 Robustness Reflection on the Seven Rules #
It must be acknowledged: the inductive samples are drawn almost entirely from social media and content platforms. If samples were further extended to the following domains, additional rules might be discovered:
- Communication protocols (Email, SMS, TCP/IP): Might reveal rules related to "standardization" and "interoperability"
- Payment and transactions (credit cards, PayPal, Bitcoin): Might reveal rules related to "trust transfer" and "value measurement"
- Game design (levels, achievement systems, random rewards): Might reveal rules related to "incentive mechanisms" and "behavior shaping"
The seven rules are currently a "working framework" but not a final version. As samples continue to expand, the framework should continue to evolve.
5.4 A Reusable Derivation Method #
Step 1: Identify a high-frequency but expensive human cognitive activity. Expressing needs, making decisions, aligning collaborators, verifying truth, learning, negotiating.
Step 2: Ask — which part of this activity's marginal cost does AI drive to extremely low? Comprehension cost? Generation cost? Execution cost? Personalization cost?
Step 3 (Scarcity path): After the cost drops, what becomes the new scarcity? Trust? Boundaries? Attribution? Attention? Privacy? Accountability? Quality discernment?
Step 4 (Structure path): What previously nonexistent structural property does AI introduce in this domain? New relationship types? New attribution methods? New temporality? New emergence patterns?
Step 5: Package the "scarcity" or "new structure" into a tradeable object or protocol. Define its fields, states, permissions, and circulation rules.
Step 6: Seven-rule validation — satisfy at least four.
Step 7: Network effects test — does it become more valuable as more people use it?
Step 8: Robustness test — does this abstraction still hold in a completely different application domain? If an abstraction only makes sense in a social media context but is completely inapplicable in healthcare, education, or finance, then it may be a "good vertical product" rather than a "foundational building block." True primitives (like Like, Follow) can be reused in any domain.
Only those that survive all eight steps qualify as the AI era's Like / Share / Feed.
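The validation gate at the end of this method (Steps 6–8) can be stated as a simple decision procedure. This sketch encodes only the pass/fail logic described above—the rule names come from the essay, while treating the last two steps as boolean inputs is an assumption for illustration.

```python
# The seven rules from Part One, used as a scorecard (Step 6).
RULES = ["atomization", "decoupling", "latent_capability", "new_currency",
         "new_scarcity", "new_structure", "emergent_order"]

def qualifies(rule_scores: dict[str, bool],
              network_effects: bool,
              robust_across_domains: bool) -> bool:
    """A candidate abstraction survives if it satisfies at least four of the
    seven rules (Step 6), gains value with adoption (Step 7), and still holds
    outside its home domain (Step 8)."""
    satisfied = sum(rule_scores.get(rule, False) for rule in RULES)
    return satisfied >= 4 and network_effects and robust_across_domains

# The essay's Provenance Token scorecard: six of seven rules satisfied,
# with "new structure" marked as not core.
provenance_token = dict.fromkeys(RULES, True) | {"new_structure": False}
```

Note that Steps 7 and 8 are conjunctive, not additive: an abstraction that scores seven out of seven on the rules but only makes sense inside social media still fails, which is what separates a foundational building block from a good vertical product.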
Appendix: Methodological Limitations #
This essay's derivation method has three known limitations worth explicitly noting:
Limitation 1: Inductive samples are biased toward social and content platforms. The internet's great abstractions did not emerge only from social media—communication protocols, payment systems, and game design also contain profound abstraction innovations. This essay expanded samples to the relationship layer, taxonomy layer, temporality layer, and identity layer, but still has not covered these other domains.
Limitation 2: The methodological risk of deriving "future abstractions" from "existing abstractions." Truly paradigmatic innovation may not follow existing rules at all—just as before "Like" existed, no existing rule could have predicted its emergence. This essay's derivation method is better suited for discovering abstractions that are "highly likely to emerge" rather than "the most disruptive" abstractions. The most disruptive one may be precisely what our framework cannot capture.
Limitation 3: Uncertainty in technical feasibility and social acceptance. Each of the eight abstractions is technically possible, but product success depends not only on technical feasibility—it also depends on the speed of user habit transformation, the evolution of regulatory environments, and the viability of business models. These factors lie beyond the scope of this analysis.