June 6, 2025

Conservation of Complexity When Using AI

Today I happened to notice that Codex is being featured on the ChatGPT sidebar, so out of curiosity I took a look. In summary, it is a web-based or CLI program that helps you code through a conversational UI. Later I read another article saying that Claude Code is much, much better than Cursor, the latter of which I use every day (even when writing this post), so naturally I took a look at it as well. It also features a conversational UI.

Curious, very curious, as this is an apparent break from the traditional IDE/editor-style programming experience. I just could not make sense of it at that moment, so I had a discussion with GPT-4.1. In a conversation (titled “Future of Coding AI” by GPT itself), it kindly explained that:

[D]riven by the fundamental shift toward treating code as a communication medium rather than as an artifact itself. They’re betting on coding workflows moving towards:

  1. AI as a Direct Coding Agent (not merely assistant)
  2. Explicit Promotion of a Higher Abstraction Level

And further:

Codex and Claude Code’s terminal-based, request-reaction paradigm signals confidence that the future of coding increasingly relies on human developers defining intention rather than explicitly managing implementation details.

Wow.

Reading this, I can't help but wonder: where does this stop? In a traditional, fully fledged software development workflow, you have the Product Manager, the Designer, and the Engineer. Among the Engineers, you have someone who submits a PR and someone who reviews and merges it, among many other roles.

Now we have AI helping the Product Manager write better PRDs, the Designer build better prototypes and assets, and the Engineer write better code. More provocatively, with tools like Codex and Claude Code, the Engineer might not even need to see the whole codebase—just snippets from AI in the PRs. So, why stop there? Why not have GPT/Claude translate user needs directly into PRDs for the Engineer? Why not just drop the Engineer and let the coding GPT/Claude read the prompts directly from a GPT-written PRD, with another, senior GPT/Claude approving the PRs?

So the end picture is that instead of a DAO you get an AAO--an AI Autonomous Organization--replacing the traditional form of the company, so that humans can now focus on being consumers rather than producers.

Wait a second. Why did the software development process have so many stakeholders in the first place, and why does everyone complain that their time is mostly consumed by endless meetings? You may say it is because 1) humans have limited skillsets and 2) humans are bad at communication--

But somehow humans are good at conversing with Codex and Claude Code?

To a certain extent, that may be true. GPT/Claude are excellent listeners: they sense the undertones, notice your choice of words, see through the blabbing, and can easily reorganize a piece of incoherent, messy text into McKinsey-style bullet points with near-perfect logic.

But have you heard of Kolmogorov complexity? The idea that there is a minimal length required to represent a piece of information? GPT/Claude can reduce the apparent complexity of a verbal representation, but they cannot compress it below its Kolmogorov complexity. And beyond Kolmogorov complexity, specialized languages can make representing certain information even easier. That's why whenever I'm working on a task to "round the data in column A and multiply each row with the number of the same row in column B and sum the products up," or equivalently, "round the prices in column A and calculate the total cost with the quantity data in column B," I would rather write SUMPRODUCT(ROUND(A:A, 0), B:B). GPT/Claude might understand my intent, but that intent still needs to be spelled out by me one way or another. With Codex and Claude Code, you may turn a 20KLOC codebase isomorphically into a 10-page natural-language description of the intent, just like a PRD--but no one writes PRDs in conversational style. We are not Socrates.
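To make the point concrete, here is a minimal sketch in plain Python (the column data is made up for illustration): the two sentences of instruction collapse into a single expression once the language supplies the right primitives, just as SUMPRODUCT does in a spreadsheet. Note that Python's built-in round uses banker's rounding on exact halves, which differs from Excel's ROUND; the example avoids .5 values to sidestep that.

```python
# Hypothetical stand-ins for column A (prices) and column B (quantities).
prices = [1.4, 2.6, 3.3]
quantities = [10, 20, 30]

# The verbose natural-language version: "round the prices in column A and
# multiply each row with the number of the same row in column B and sum
# the products up" -- spelled out as one expression:
total = sum(round(p) * q for p, q in zip(prices, quantities))

print(total)  # 1*10 + 3*20 + 3*30 = 160
```

The expression is short not because the intent got simpler, but because `sum`, `zip`, and `round` already encode the recurring structure, which is exactly what a special-purpose language buys you.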