March 23, 2026

AI for Children: From Game Addiction to Creative Agency

I've been programming on and off for about 15 years, and the feeling of being in control, of directing a machine and seeing reasonable, analyzable output, has always been genuinely fascinating. Now with OpenClaw, I can see that many other people share the same fascination. Retirees in China are "raising lobsters." Office workers are automating their inboxes. People who never wrote a line of code in their lives are suddenly experiencing what programmers have always known: the thrill of making something do what you told it to.

What's interesting is how strong this pull is. I know people who haven't touched a video game in years and are now spending hours with OpenClaw. The feeling of directing something and seeing it respond to your intent seems more compelling than the passive reward loops of most games.

Which makes me think about children. Game addiction among kids has been a growing problem since the PC era, and smartphones made it worse. Games are now always in their pockets, optimized for engagement. The mechanism is similar to what makes OpenClaw addictive: a tight feedback loop where you do something and immediately see a result. The difference is that games channel that loop into consumption, while OpenClaw channels it into creation.

If the hook is the same, maybe the replacement is too. What if children could get that same addictive feedback loop from directing AI to create things, instead of from grinding through game levels? The question is how to make that accessible to them.

From computer classes to no classes at all

Computer literacy used to be a real thing. Schools taught it. There were entire curricula dedicated to learning how to use a PC--file management, keyboard shortcuts, basic troubleshooting. This wasn't because computers were dumb. It was because Windows 9x-era computers were powerful but their interfaces were terrible. Command lines, nested menus, the registry, viruses. The gap between what the machine could do and what a normal person could figure out on their own was enormous. So we taught literacy.

Then the iPhone came out, and something remarkable happened. When people first saw toddlers swiping an iPad, the reaction wasn't "we need tablet literacy programs." It was awe. A 2-year-old could operate a computer without being taught. Nobody runs smartphone classes in schools. The device is just obvious. Apple didn't solve the literacy problem by teaching people more. They solved it by designing an interface so intuitive that literacy became irrelevant. Powerful functionality, constrained and surfaced through a UI that fingers already understood.

That was the real revolution of iOS. Nobody needed to be taught how to use it.

AI and the interface regression

The way we interact with AI today is a blank text box. The machine is extraordinarily powerful: it can write code, generate images, compose music, explain quantum physics to a 5-year-old. But the interface is essentially a command line with better autocomplete. It requires knowing what to ask, how to ask it, how to evaluate the output, and how to iterate when it's wrong.

This is hard for everyone, not just children. Even among adults, the gap between a skilled prompter and a casual user is enormous. AI in the best hands is a 10x multiplier. In average hands it produces mediocre drafts. The blank prompt is an unsolved interface problem across all age groups.

Children are already using AI, though. Two-thirds of U.S. teens use chatbots, with nearly a third doing so daily. Some of them use AI creatively: writing, coding, making images. But the younger the child, the more the interaction tends toward companionship. A friend of mine uses ChatGPT as a bedtime storyteller. His little girl loves talking to it when he's too tired to read. The AI performs for her. It tells stories. It answers questions. It's a friendly voice.

For adults, the blank prompt is at least an option you can learn to wrestle with. For young children, the path of least resistance is just talking and listening, which gives them companionship but not much agency.

The progression from "take computer classes" to "a toddler can use it" to "learn prompting" looks like a regression. I think this is an affordance problem worth discussing: AI's current interface doesn't offer children a natural way to direct it. What does an AI interaction look like that a child can engage with naturally, the way they swipe a screen, while still giving them real agency over something genuinely powerful?

A blank prompt with limitless possibilities probably isn't it for young children. A static, non-interactive experience isn't it either; we already have books. The interesting design space is somewhere in between: structured enough that a child can engage without literacy, open enough that the engagement is real.
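
To make "somewhere in between" a little more concrete, here is a minimal sketch of what that structure could look like, purely as an illustration: the child never types, only taps one of a few pictured choices, while a model generates the story and the next set of choices behind the scenes. Everything here is hypothetical; generate is a stand-in for whatever model call a real product would make, returning a canned reply so the sketch runs on its own.

    # Hypothetical sketch of a "structured but open" loop for a pre-literate
    # child: they never type; they tap one of a few pictured choices, and the
    # model does the open-ended generation behind the scenes.
    import json

    def generate(prompt: str) -> str:
        # Stand-in for whatever model call a real product would make.
        # Returns a canned reply so the sketch runs on its own.
        return json.dumps({
            "scene": "The little fox finds a door painted like the night sky.",
            "choices": [
                {"icon": "🔑", "label": "Open the door"},
                {"icon": "🌙", "label": "Ask the moon for help"},
                {"icon": "🏠", "label": "Run back home"},
            ],
        })

    def next_turn(story_so_far: list, picked: str) -> dict:
        # The structure (two gentle sentences, exactly three icon-labeled
        # choices) is the affordance; the content stays open-ended.
        prompt = (
            "Continue a bedtime story for a 4-year-old in two gentle sentences.\n"
            f"Story so far: {' '.join(story_so_far)}\n"
            f"The child tapped: {picked}\n"
            "Reply as JSON with 'scene' and exactly three 'choices', "
            "each with an emoji 'icon' and a short 'label'."
        )
        return json.loads(generate(prompt))

    story = ["Once upon a time, a little fox went looking for the moon."]
    picked = "start"
    for _ in range(3):  # a short session, not an endless loop
        turn = next_turn(story, picked)
        story.append(turn["scene"])
        print(turn["scene"])
        for i, choice in enumerate(turn["choices"], 1):
            print(f"  [{i}] {choice['icon']} {choice['label']}")
        picked = turn["choices"][0]["label"]  # in a real product, the child taps

The interesting property of this shape is that the generation stays open-ended while the only inputs the child can produce are the choices the design put in front of them, which is also where the safety point in the next section comes from.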

Dimensions of the design space

One dimension worth calling out is the relationship between affordance and safety. A well-designed interaction naturally reduces the risks of open-ended AI, because the child is engaged within a considered experience rather than wandering a blank canvas. Safety doesn't have to be a separate layer bolted on top; it can be a property of the design itself.

Where things stand

A few startups have started exploring different points in this space.

Heeyo ($3.5M from OpenAI's fund) launched an open-ended AI chatbot for ages 3-11, sitting at the conversational end of the spectrum. Giant ($8M seed, Matrix) lets kids create episodes and talk to AI characters, reporting 1M+ minutes of engagement since its May 2025 launch, with storytelling as the frame. There are also dozens of AI storybook generators like Childbook.ai, MyStoryBot, and Imajinn that let parents create personalized picture books, essentially print-on-demand with AI illustrations. These sit at the fully authored end: AI assists creation, but the output is static.

Each of these represents a different bet on where the right affordance lives. It's early, and it's not yet clear which approach, or which combination, will resonate with children and parents.

On the regulatory side, the landscape is taking shape. California's SB 243 (effective January 2026) is the first US law targeting AI chatbots for minors. COPPA was overhauled for the first time since 2013. The FTC has launched a formal inquiry into AI chatbot impacts on children. These regulations are focused on open-ended AI chatbots, which may shape which parts of the affordance spectrum are practically viable for startups to build in.

Open questions

A few things I'm genuinely curious about:

  1. Is an "iOS moment" possible for AI? Can someone design an AI interface so intuitive that children use it without instruction, the way they use a touchscreen? Or is the nature of AI interaction, formulating intent in language, inherently harder than pointing and swiping, especially for pre-literate kids?

  2. Who designs the affordance? If the right product is a designed experience powered by AI, someone has to author that experience. Is it children's book authors? Game designers? Educators? The answer shapes the economics of the whole category.

  3. What do children actually want to do with AI? The research on how very young children interact with conversational AI is thin. Maybe they want responsive stories. Maybe they want to build things. Maybe they want a character who remembers them. The affordance should probably follow the child's natural intent, and we don't have great data on what that intent is yet.

  4. Does the format matter? For ages 3-7, static reading probably isn't the ideal format, and neither is a chat window. The right interface might be something that doesn't exist yet, something closer to play than to reading or chatting.