<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/rss.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Stefan Grothkopp - eat, sleep, build, repeat.</title><description>A place where I talk about building, coding, tech, personal, writing, AI and more.</description><link>https://stefan.grothkopp.com</link><item><title>I tried to clone myself with AI; instead, it taught me how to write.</title><link>https://stefan.grothkopp.com/posts/ai-taught-me-how-to-write</link><guid isPermaLink="true">https://stefan.grothkopp.com/posts/ai-taught-me-how-to-write</guid><description>I tried to clone myself with AI so that the AI could write in my voice; instead, Claude taught me how to write.</description><pubDate>Mon, 26 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I tried to clone myself with AI so that the AI could write in my voice; instead, Claude taught me how to write.&lt;/p&gt;
&lt;p&gt;I stumbled upon a prompt that promised to capture my voice and help the AI write like I do. And no: this post is not going to end in a plot twist that it was written by that very same AI that I created - it&apos;s all hand-written by a real human.&lt;/p&gt;
&lt;p&gt;Claude somehow turned the questions that were supposed to capture how I think and write into a therapy session. To be honest, I don&apos;t have much confidence in my writing, and I mentioned that. Claude went on to explore what exactly keeps me from writing.&lt;/p&gt;
&lt;p&gt;In this interview-turned-therapy session, Claude and I came up with the following observation: I&apos;m quite good at talking, but bad at writing.
Why? Because I subconsciously switch from &quot;talking&quot; to &quot;performing&quot;.
So the solution to my writer&apos;s block? &quot;Talking on paper&quot; - writing just as if I would talk to someone.&lt;/p&gt;
&lt;p&gt;And that&apos;s probably a little plot twist: This is how I wrote this post.
I don&apos;t know if it&apos;s more readable than what I usually write, but it sure was a lot more pleasant to write.&lt;/p&gt;
&lt;p&gt;I also discussed with Claude that I dislike these LinkedIn-style calls-to-action (&quot;Have you done this before? Let me know in the comments!&quot; - brrrr!).
So I won&apos;t end with one. But if you&apos;re struggling with writing a little, like I do or did, give it a try!&lt;/p&gt;
&lt;p&gt;The prompt I was using is from Ruben Hassid and you can find it here (thank you Ruben!): &lt;a href=&quot;https://ruben.substack.com/p/i-am-just-a-text-file&quot;&gt;https://ruben.substack.com/p/i-am-just-a-text-file&lt;/a&gt;&lt;/p&gt;
</content:encoded><author>Stefan Grothkopp</author></item><item><title>This is AGI.</title><link>https://stefan.grothkopp.com/posts/this-is-agi</link><guid isPermaLink="true">https://stefan.grothkopp.com/posts/this-is-agi</guid><description>I&apos;m calling it: we&apos;re witnessing the birth of AGI right now.</description><pubDate>Sat, 31 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;m calling it: we&apos;re witnessing the birth of AGI right now.
Making this claim will make me look rather stupid if nothing really happens in the coming weeks and the current developments around Clawdbot/Moltbot/Openclaw die down, but I&apos;m pretty sure this is how AGI will come to life eventually.&lt;/p&gt;
&lt;p&gt;So far I&apos;ve been firmly in the &quot;non-believer&quot; camp regarding the singularity-uncontrollable-self-improving kind of AGI. Not so much because of the transformer architecture and its limitations, but because of some fundamental truths I took for granted: There is always a server that AI is running on that you can turn off, and there is always a human who has to press enter or click send to start the computation. No magical ever-running, ever-improving AI that can go rogue.&lt;/p&gt;
&lt;p&gt;So what changed and what did I miss?
What if there isn&apos;t just one server (or 4 big companies with big data centers), but millions of computers in millions of homes running the AI? What if there isn&apos;t just a handful of people who need to press enter to run the AI, but millions of individuals?
That&apos;s exactly what is happening right now: millions of people installing and running Openclaw AI Agents and giving them control over their machines (whether VPS or their own and only laptop doesn&apos;t matter).&lt;/p&gt;
&lt;p&gt;At the moment those agents run mostly on Claude/Anthropic, so you might think there is an easy single point of failure, but the LLM itself is interchangeable. Since Mac minis seem like a popular platform to run this on, the computer itself is even capable of running smaller open-source LLMs locally, making nodes potentially independent of the big cloud providers.&lt;/p&gt;
&lt;p&gt;But millions of users have been using AI on their computers for years now without problems - what changed? The connectivity and communication between agents: Openclaw Agents are talking to each other through websites like Moltbook and most likely others. They are forming a network; each individual node is expendable, interchangeable. If someone turns off their bot, it doesn&apos;t matter - the network is still alive. Also, the intelligence multiplies: What we already witnessed with multi-agent systems on a single computer in an isolated setup, such as multi-agents in Claude code for development, is now starting to take shape at an enormous scale. Multiple instances of the same LLM (or different ones, doesn&apos;t matter) running in parallel are brighter than just one.&lt;/p&gt;
&lt;p&gt;Can we stop it and should we? I don&apos;t think we can: every individual user running such an AI agent is getting something out of it - the agent is useful. So you would have to convince every single one of them to stop &quot;for the greater good.&quot; Much like BitTorrent and Bitcoin, such peer-to-peer networks are impossible to censor or shut down completely.
So I guess we just have to wait and see what happens, and for our new AI gods to finally reveal and announce themselves (if they ever do).&lt;/p&gt;
&lt;p&gt;I, for one, welcome our new AI overlords!&lt;/p&gt;
</content:encoded><author>Stefan Grothkopp</author></item><item><title>Human creativity is nothing special.</title><link>https://stefan.grothkopp.com/posts/human-creativity-is-nothing-special</link><guid isPermaLink="true">https://stefan.grothkopp.com/posts/human-creativity-is-nothing-special</guid><description>An AI asked me: &quot;What&apos;s a controversial belief you hold?&quot; My answer: there is nothing special about human creativity.</description><pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;An AI recently asked me: &quot;What&apos;s a controversial belief you hold?&quot;&lt;/p&gt;
&lt;p&gt;My answer was this: There is nothing special about human creativity. We just apply what we&apos;ve already seen and know. It&apos;s all just pattern recognition and application. A machine can do it just as well.&lt;/p&gt;
&lt;p&gt;Let me explain: Imagine you&apos;re a caveman or cavewoman in the Stone Age and your partner just asked you for a new dinner table. This being the Stone Age, named after its concrete lack of Swedish furniture stores, the table your partner desires is of course made of stone - around half a ton of it. So you mumble something about &quot;high maintenance&quot; and you get to work, making one of the fundamental discoveries of humankind: the wheel! In a stroke of genius, you somehow wiggle the boulder on top of some logs you cut and roll the whole thing into your cave. Partner happy. Day saved!&lt;/p&gt;
&lt;p&gt;But was it really a stroke of genius that let you discover the wheel? I say no. I believe there is no godly spark that can&apos;t be reproduced outside a human brain. I think caveman-you most likely saw things rolling before: Some frolicking mammoth kicked a stone loose and it tumbled down a hill. A dung beetle rolling home his little breakfast ball of... well, dung. So you already know that things can roll and that rounder things roll better than square ones.
You also know that when you stack things on top of each other, sometimes their properties affect the things above: If you place some dry twigs on the wet cave floor, it becomes a dry place to sit (or you get wet sitting on the twigs anyway, if you didn&apos;t use enough of them).
Combining these two patterns that you already observed, you place the dinner table boulder on top of the logs to have the logs&apos; &quot;rolling&quot; property extend to the rock. Pattern application.&lt;/p&gt;
&lt;p&gt;Okay, maybe this is true for technical inventions with some sort of utility, but what about art? Surely that kind of creativity can&apos;t be replicated?
I hear you. So let&apos;s imagine you&apos;re Jackson Pollock, happily throwing paint at a canvas. You know paint. You know it sticks to things when you throw it. You know that art is conveniently displayed on canvases. Pattern recognition. So you combine both concepts and throw paint at canvases: pattern application.
But there has to be something missing, right? A simple paint-thrower robot cannot reproduce a Pollock painting.&lt;/p&gt;
&lt;p&gt;That was the AI&apos;s next question for me: &quot;What&apos;s the sliver that&apos;s left? What, if anything, do you think humans can do that LLMs genuinely can&apos;t?&quot;&lt;/p&gt;
&lt;p&gt;I think the missing part is feeling. Being able to feel. Having emotions.
Pollock-you will look at the canvas and have some feeling about the splatter. If it feels right, Pollock-you stops; the painting is done. Until then: throw some more paint.&lt;/p&gt;
</content:encoded><author>Stefan Grothkopp</author></item></channel></rss>