LLMs.
Tokens.
Context windows.
Are these the new compilers, bytes, and RAM of this generation of software development, or perhaps new primitives for computing altogether?
Probably not. But for the sake of the argument, let’s suspend disbelief and treat them as such. I’ll stretch the analogy to its limits, just to make a point.
On July 16, 1969, at 8:32 AM CDT, Apollo 11’s Saturn V launched from Kennedy Space Center. Days later, it would land the first humans on the Moon. Onboard was one of the earliest computers of its kind: the Apollo Guidance Computer (AGC).
The AGC handled guidance, navigation, and control through every phase of the mission, from launch to splashdown. At any moment, it was juggling an enormous number of critical tasks.
4 KB.
That was the AGC's RAM. Roughly four kilobytes of erasable memory (plus about 72 KB of read-only core rope) carried the computation that put humans on the Moon.
Now consider:
A token.
A kilotoken.
A megatoken.
A gigatoken.
A teratoken.
200 kilotokens (200KT).
1 megatoken (1MT).
What can 200KT do today?
First, we used it to document large codebases.
Then we used it to write tests.
Now it can implement features end-to-end, doing both of the above along the way.
So what future are we imagining?
Many developers who daily-drive tools like Claude Code, Cursor, or Antigravity believe the endgame is simple: we describe what we want, the AI builds it, maintains it, processes bug reports, and ships features autonomously.
But
Context size is the constraint.
200KT isn’t enough.
Neither is 1MT.
Nor will 1GT, or even 1TT.
So what do we do?
We route around it.
We send only what’s necessary for a specific task, and nothing more.
A task becomes an atomic operation on the codebase: transforming it from one consistent state to another. We provide the task and its dependencies, hand them to the compiler (the LLM), and expect it to resolve the change into a concrete set of operations.
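To make this concrete, here is a minimal sketch of that loop, assuming a 200KT budget. Every name in it (`Task`, `build_context`, `call_llm`, `apply_patch`) is hypothetical, and the 4-characters-per-token estimate is a crude stand-in for a real tokenizer; the point is only the shape: a task, its selected dependencies, a budget, and a set of operations back.

```python
# Illustrative sketch only: none of these names belong to a real tool's API.
from dataclasses import dataclass, field

TOKEN_BUDGET = 200_000  # a 200KT context window


@dataclass
class Task:
    description: str                                        # the change we want
    dependencies: list[str] = field(default_factory=list)   # files the task depends on


def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: assume ~4 characters per token.
    return len(text) // 4


def build_context(task: Task, read_file) -> str:
    """Pack the task description plus only the files it depends on,
    stopping before the budget is exceeded."""
    parts = [task.description]
    used = estimate_tokens(task.description)
    for path in task.dependencies:
        content = read_file(path)
        cost = estimate_tokens(content)
        if used + cost > TOKEN_BUDGET:
            break  # send only what's necessary, and nothing more
        parts.append(f"--- {path} ---\n{content}")
        used += cost
    return "\n\n".join(parts)


def run_task(task: Task, read_file, call_llm, apply_patch) -> None:
    """One atomic operation: consistent state in, consistent state out."""
    context = build_context(task, read_file)
    operations = call_llm(context)  # the "compiler" resolves the change
    apply_patch(operations)         # concrete edits against the codebase
```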
Context windows are hard limits.
But how we use them is not.
If we get smarter about task decomposition and dependency selection, even 200KT is enough to make this future viable. And when 1TT finally arrives, we'll have new problems to think about, rather than still wrestling with the basics of software development.
The journey toward smarter use of context windows has only just begun.
And there’s still a lot that needs to be built before this future becomes real.
I want to be part of building it, so I will be taking my AI skeptic hat off for a while.
Inspired by: