When Probabilistic AIs Scale Socially — and Why Ben Goertzel Thinks That Matters

Can randomness organise itself into structure—and might that be our best shot at decentralised AGI?

Benedict Evans recently captured AI’s central tension in a LinkedIn post:

> “If we make probabilistic systems big and complicated enough they might become deterministic. But if we make deterministic systems big and complicated enough, they become probabilistic.”

Most debate focuses on computational scale (larger models, more data). Yet a new peer-reviewed study from City St George’s and the IT University of Copenhagen, covered by the Guardian, shows something subtler: when many small language-model agents chat in pairs, they converge on shared norms without any global plan or memory.

In other words, probabilistic systems scale socially.

Emergence in the Wild

Researchers paired 24 to 100 LLM agents at random and asked each pair to agree on a name from a fixed list. Over successive interactions the entire population adopted one common label, and a small committed minority could later tip the group to a new one. Local noise produced global order, with no monolithic model required.
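To make the dynamic concrete, here is a minimal sketch of such a naming game in Python. It is not the study’s code: the name list, population size and memory length are illustrative assumptions.

```python
import random
from collections import Counter, deque

NAMES = ["blue", "red", "green", "gold"]   # fixed name list (illustrative)
N_AGENTS = 24                              # smallest population in the study
MEMORY = 5                                 # per-agent interaction memory (our assumption)

# No global memory or overseer: each agent keeps only its last few interactions.
agents = [deque(maxlen=MEMORY) for _ in range(N_AGENTS)]

def choose(memory):
    """Say the name heard most often recently; pick at random with no history."""
    return Counter(memory).most_common(1)[0][0] if memory else random.choice(NAMES)

for _ in range(5000):
    a, b = random.sample(range(N_AGENTS), 2)       # random pairing
    said = [choose(agents[a]), choose(agents[b])]
    agents[a].extend(said)                         # both remember what the pair said,
    agents[b].extend(said)                         # so agreement reinforces itself

print(Counter(choose(m) for m in agents))          # one name usually dominates
```

The positive feedback is the whole trick: whatever a pair says enters both agents’ memories, which makes that name more likely to be said again.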

A Micro-Experiment in Life-Hack Land

Borrowing the protocol, we set ten toy agents loose to champion their favourite single life hack. Each round they debated in pairs, could forget older debates (short-term memory only), and sometimes switched to a more persuasive idea. After three rounds the crowd went from ten different hacks to one winner:

| Round | Followers of “25-minute focus timer” |
| --- | --- |
| 1 | 0 |
| 2 | 4 |
| 3 | 8 |
The timer wasn’t objectively “best”; it was catchy, clear and contagious. Social scaling made a probabilistic crowd behave as if deterministic consensus had been programmed.
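A minimal re-creation of that toy run, under stated assumptions: the catchiness scores, debate counts and switching rule below are ours, and exact follower counts vary between runs.

```python
import random

# Ten starting hacks, each with an invented "catchiness" score. Only the
# timer's eventual win comes from the post; every number here is an assumption.
HACKS = {
    "25-minute focus timer": 0.9, "two-minute rule": 0.5, "cold showers": 0.5,
    "phone in another room": 0.5, "inbox zero": 0.4, "meal prep Sundays": 0.4,
    "daily walk": 0.4, "one-tab browsing": 0.3, "evening shutdown": 0.3,
    "gratitude journal": 0.3,
}

beliefs = list(HACKS)                        # agent i champions beliefs[i]
random.shuffle(beliefs)

for round_no in range(1, 4):
    for _ in range(10):                      # several random pairwise debates per round
        i, j = random.sample(range(10), 2)
        for speaker, listener in ((i, j), (j, i)):
            # Memory decay: agents keep no transcript, only a current favourite,
            # and a catchier hack converts the listener with some probability.
            edge = HACKS[beliefs[speaker]] - HACKS[beliefs[listener]]
            if random.random() < edge:
                beliefs[listener] = beliefs[speaker]
    fans = beliefs.count("25-minute focus timer")
    print(f"Round {round_no}: {fans} follower(s) of the timer")
```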

Enter Dr Ben Goertzel: Why Size Alone Isn’t Enough

At Consensus 2025, Dr Ben Goertzel (SingularityNET / ASI Alliance) argued that merely scaling today’s transformer LLMs is an “off-ramp” to AGI. His alternative, OpenCog Hyperon, is a modular, hybrid framework in which symbolic reasoning, neural nets and evolutionary learning interact inside a distributed knowledge hypergraph.
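Hyperon’s actual stack is the Atomspace plus the MeTTa language; the toy Python below only illustrates the shape of the idea, several specialised modules cooperating through one shared hypergraph store. Every name in it is invented for illustration, not Hyperon’s API.

```python
# Toy shared knowledge store (NOT Hyperon's Atomspace): each atom is a tuple
# (relation, *arguments), and different "modules" read and write the same set.

store = set()

def symbolic_module():
    """Toy deduction: chain 'isa' links transitively."""
    for (r1, a, b) in list(store):
        for (r2, c, d) in list(store):
            if r1 == r2 == "isa" and b == c:
                store.add(("isa", a, d))

def neural_module(scores):
    """Pretend a neural net scored associations; keep only confident ones."""
    for pair, score in scores.items():
        if score > 0.8:
            store.add(("associated", *pair))

store.update({("isa", "agent", "process"), ("isa", "process", "system")})
symbolic_module()                               # derives ("isa", "agent", "system")
neural_module({("agent", "norm"): 0.93})        # adds one learned association
print(sorted(store))
```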

Goertzel’s thesis fits our micro-experiment like a glove:

| Goertzel’s Point | Link to Social-Scaling Insight |
| --- | --- |
| LLM-only paths plateau | Our toy agents needed interaction, not bigger parameters, to generate new order. |
| Hybrid sub-systems outperform monoliths | A network of specialised agents can out-create any single giant model. |
| Decentralised infrastructure (the ASI Alliance) will host the first AGI | Emergent norms thrive when cognition is distributed, exactly what a blockchain-based AGI grid provides. |

> “If scaling transformers is the crux of AGI, Big Tech wins; but if AI arises from many minds co-operating, decentralisation changes the game.” — B. Goertzel, Consensus 2025 (paraphrased).

Why This Matters for Everyone Building AI

1. Order from Interaction, Not Size
Social scaling suggests that modest models, richly connected, can rival solitary behemoths at generating new order.

2. Alignment Risks & Opportunities
If agents can invent useful conventions, they can also drift into harmful ones. Understanding social dynamics is now an AI-safety imperative.

3. A Roadmap for Decentralised AGI
Goertzel’s Hyperon aims to harness these dynamics on open, permissionless rails—putting the future of intelligence in everyone’s hands.

For Rejuve.AI and other DeSci projects, the take-away is clear: the next breakthroughs may come less from chasing trillion-parameter models and more from designing vibrant, well-governed agent societies that learn—and align—together.

*What would your crowd of tiny AIs debate? And how would you steer the norms they invent?*
