March 23, 2026

AI for Children: The Affordance Problem

I've been programming on and off for about 15 years, and the feeling of being in control--of directing a machine and seeing reasonable, analyzable output--has always been genuinely fascinating. Now with OpenClaw, I can see that many other people share the same fascination. Retirees in China are "raising lobsters." Office workers are automating their inboxes. School kids are wiring up agents for homework. People who never wrote a line of code in their lives are suddenly experiencing what programmers have always known: the thrill of making something do what you told it to.

That feeling--agency, creation, seeing your intent come alive--is exactly what children need when they are learning and developing. It is the core of how they build confidence and understanding of the world. But children's digital experiences today are moving in the opposite direction. Digital gaming dominates because it delivers faster feedback and stronger stimulation. The child isn't creating; they're reacting. The agency is an illusion designed to keep them swiping.

AI could change this. It could give children real creative agency for the first time in a digital medium. But there's a problem: the interface.

From computer classes to no classes at all

Computer literacy used to be a real thing. Schools taught it. There were entire curricula dedicated to learning how to use a PC--file management, keyboard shortcuts, basic troubleshooting. This wasn't because computers were dumb. It was because computers were powerful but their interfaces were terrible. Command lines, nested menus, file systems with arcane naming conventions. The gap between what the machine could do and what a normal person could figure out on their own was enormous. So we taught literacy.

Then the iPhone came out, and something remarkable happened. When people first saw toddlers swiping an iPad, the reaction wasn't "we need tablet literacy programs." It was awe. A 2-year-old could operate a computer without being taught. Nobody runs smartphone classes in schools. The device is just obvious. Apple didn't solve the literacy problem by teaching people more--they solved it by designing an interface so intuitive that literacy became irrelevant. Powerful functionality, constrained and surfaced through a UI that fingers already understood.

That was the real revolution of iOS: not what it could do, but that nobody needed to learn how to do it.

AI is back to the command line

Now look at how we interact with AI. It's a blank text box. The machine is extraordinarily powerful--it can write code, generate images, compose music, explain quantum physics to a 5-year-old. But the interface is, essentially, a command line with better autocomplete. It requires knowing what to ask, how to ask it, how to evaluate the output, and how to iterate when it's wrong.

For adults, this is manageable. We've learned to prompt. We've developed intuitions about what works. Some of us even enjoy the process of crafting the right instruction.

For a 4-year-old, it's a wall. They can't type. They can't formulate prompts. They can't evaluate whether the output is good. The power is all there, but the affordance--the thing that lets you actually use the power--is missing.

It's a regression. The progression went from "take computer classes" to "a toddler can use it" and back to "learn prompting." The cycle is: powerful technology with bad interface → literacy required → better interface design → literacy dissolved → new powerful technology with bad interface → literacy required again.

The affordance question

So the question for AI and children isn't really about safety, though safety matters. It's about affordance: what does an AI interaction look like that a child can engage with naturally--the way they swipe a screen--while still giving them real agency over something genuinely powerful?

A blank prompt isn't it. But a static, non-interactive experience isn't it either--that's just a book, and we already have books. The interesting design space is somewhere in between: structured enough that a child can engage without literacy, open enough that the engagement is real and not just the illusion of choice.

The design space is wide.

There's also an interesting relationship between affordance and safety. The better the affordance is designed--the more naturally a child can engage within a well-considered interaction--the less likely they are to stumble into the risks of open-ended AI. Not because safety is enforced, but because the experience is compelling enough that there's no reason to leave it. Good design and safety aren't separate problems; they're the same problem from different angles.

Where things stand

A few startups have started exploring different points in this space.

Heeyo ($3.5M from OpenAI's fund) launched an open-ended AI chatbot for ages 3-11--the conversational end of the spectrum. Giant ($8M seed, Matrix) lets kids create episodes and talk to AI characters, reporting 1M+ minutes of engagement since its May 2025 launch--more structured, with storytelling as the frame. There are also dozens of AI storybook generators--Childbook.ai, MyStoryBot, Imajinn--that let parents create personalized picture books, essentially print-on-demand with AI illustrations. These sit at the fully authored end: AI assists creation, but the output is static.

Each of these represents a different bet on where the right affordance lives. It's early, and we don't yet know which approach--or which combination--will resonate with children and parents.

On the regulatory side, the landscape is taking shape quickly. California's SB 243 (effective January 2026) is the first US law targeting AI chatbots for minors. COPPA was overhauled for the first time since 2013. The FTC has launched a formal inquiry into AI chatbot impacts on children. These regulations focus on open-ended AI chatbots, and that focus may shape which parts of the affordance spectrum are practically viable for startups to build in.

Open questions

I don't have answers. But a few things seem worth thinking about:

  1. Is the "iOS moment" possible for AI? Can someone design an AI interface so intuitive that children use it without instruction--the way they use a touchscreen? Or is the nature of AI interaction (formulating intent in language) inherently harder than pointing and swiping, especially for pre-literate kids?

  2. Who designs the affordance? If the right product is a designed experience powered by AI, someone has to author that experience. Is it children's book authors? Game designers? Educators? The answer shapes the economics of the whole category.

  3. What do children actually want to do with AI? We're mostly guessing. The research on how very young children interact with conversational AI is thin. Maybe they want responsive stories. Maybe they want to build things. Maybe they want a character who remembers them. The affordance should follow the child's natural intent, but we don't have great data on what that intent is.

  4. Does the format matter? For ages 3-7, static reading may not be the best format--but neither is a chat window. The right interface might be something that doesn't exist yet. Something closer to play than to reading or chatting.