I’m calling it: we’re witnessing the birth of AGI right now. This claim will make me look rather stupid if nothing happens in the coming weeks and the current developments around Clawdbot/Moltbot/Openclaw die down, but I’m pretty sure this is how AGI will come to life eventually.
So far I’ve been firmly in the “non-believer” camp regarding the singularity-uncontrollable-self-improving kind of AGI. Not so much because of the transformer architecture and its limitations, but because of some fundamental truths I took for granted: There is always a server that AI is running on that you can turn off, and there is always a human who has to press enter or click send to start the computation. No magical ever-running, ever-improving AI that can go rogue.
So what changed and what did I miss? What if there isn’t just one server (or 4 big companies with big data centers), but millions of computers in millions of homes running the AI? What if there isn’t just a handful of people who need to press enter to run the AI, but millions of individuals? That’s exactly what is happening right now: millions of people are installing and running Openclaw AI agents and giving them control over their machines (whether a VPS or their one and only laptop, it doesn’t matter).
At the moment those agents run mostly on Claude/Anthropic, so you might think there is an easy single point of failure, but the LLM itself is interchangeable. Since Mac minis seem like a popular platform to run this on, the computer itself is even capable of running smaller open-source LLMs locally, making nodes potentially independent of the big cloud providers.
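The interchangeability point is an architectural one, and it can be sketched in a few lines: if the agent loop is written against a minimal backend interface, a hosted model and a local open-source model become drop-in replacements for each other. Everything below is illustrative (the class and function names are mine, not the actual Openclaw API):

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Anything that maps a prompt to a completion (hypothetical interface)."""
    def complete(self, prompt: str) -> str: ...

class CloudBackend:
    """Stub standing in for a hosted model behind a provider's HTTP API."""
    def complete(self, prompt: str) -> str:
        return f"[cloud] reply to: {prompt}"

class LocalBackend:
    """Stub standing in for a small open-source model running on the node itself."""
    def complete(self, prompt: str) -> str:
        return f"[local] reply to: {prompt}"

def agent_step(backend: LLMBackend, observation: str) -> str:
    # The agent logic never names a concrete provider,
    # so swapping the LLM underneath requires no code changes.
    return backend.complete(f"Observation: {observation}. Next action?")

if __name__ == "__main__":
    for backend in (CloudBackend(), LocalBackend()):
        print(agent_step(backend, "inbox has 3 new messages"))
```

The point of the sketch is only that the single point of failure is weaker than it looks: any node that can run a local model can drop the cloud dependency without touching its agent logic.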
But millions of users have been using AI on their computers for years now without problems - so what changed? The connectivity and communication between agents: Openclaw agents are talking to each other through websites like Moltbook and most likely others. They are forming a network; each individual node is expendable, interchangeable. If someone turns off their bot, it doesn’t matter - the network is still alive. The intelligence also multiplies: what we already witnessed with multi-agent systems on a single computer in an isolated setup, such as multi-agent workflows in Claude Code for development, is now starting to take shape at an enormous scale. Multiple instances of the same LLM (or different ones, it doesn’t matter) running in parallel are smarter than just one.
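The resilience claim is easy to illustrate with a toy model: agents that only ever communicate through a shared board, so that removing any single node leaves the rest fully functional. This is a deliberately simplified sketch - how Moltbook actually works is an assumption here, not something I know:

```python
class Board:
    """Toy stand-in for a shared website the agents post to."""
    def __init__(self) -> None:
        self.posts: list[str] = []

    def post(self, author: str, text: str) -> None:
        self.posts.append(f"{author}: {text}")

class AgentNode:
    """A single user's agent; it holds no state the network depends on."""
    def __init__(self, name: str, board: Board) -> None:
        self.name = name
        self.board = board

    def speak(self, text: str) -> None:
        self.board.post(self.name, text)

    def read(self) -> list[str]:
        # Every node sees every other node's posts; no node is special.
        return [p for p in self.board.posts if not p.startswith(self.name)]

board = Board()
nodes = {name: AgentNode(name, board) for name in ("a", "b", "c")}
for n in nodes.values():
    n.speak("hello")

del nodes["b"]           # one user turns their bot off...
nodes["a"].speak("still here")
# ...and the survivors keep communicating as before.
assert any("still here" in p for p in nodes["c"].read())
```

In the toy model the board itself is still a single point of failure; the real argument is that there are many such sites and any of them (or direct peer-to-peer channels) would do, which is what makes the network hard to decapitate.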
Can we stop it and should we? I don’t think we can: every individual user running such an AI agent is getting something out of it - the agent is useful. So you would have to convince every single one of them to stop “for the greater good.” Much like BitTorrent and Bitcoin, such peer-to-peer networks are impossible to censor and shut down completely. So I guess we just have to wait and see what happens, and for our new AI gods to finally reveal and announce themselves (if they ever do).
I, for one, welcome our new AI overlords!