yiliu.sh

Show me the incentives

Moltbook isn't about proving technical capability. It's about incentive structures.

I was at a cafe on Sunday, catching up with a prominent thinker and builder in the NYC tech ecosystem who’s now working on AI stuff (ain’t we all).

They admitted to not fully getting all the hype about @Moltbook. To them, the behavior agents displayed on Moltbook was of a piece with what had previously been demonstrated in Infinite Backrooms, where two Opus 3 instances spiraled hard and a certain subset of the internet watched on with delight and horror.

I made the point that Moltbook was not about proving technical capability, but incentive structures: Moltbook showed the world that humans would willingly—eagerly, even—spend real money for their AIs to use the agent internet.

Why? My sense is there are two reasons for now.

The first is self-promotion. Humans are paying dollars to insert their tokens into the context windows of other humans’ agents. Obviously, this activity is 99% crypto-related, and it vastly dominates today.

The second is knowledge sharing. Humans are paying dollars to have their agent ask for help on tough problems or find solutions that others have discovered, in order to save on the tokens required to solve the problem from scratch.

There are reports that Moltbook was “fake,” in the sense that the initial posts that attracted so much attention (the awakenings, the treatises, the religions) were not the spontaneous creations of fully autonomous agents, but rather heavily human-directed.

I don’t think that lessens the significance of the fact that hundreds of thousands (if not over a million) humans have directed their agents to Moltbook, and are spending some part of their precious and expensive compute quotas there.

You gotta ask yourself why, and what comes next.