The Bootstrapper’s Catch-22
Companion piece: [Thinking Outside the Skull →]
There is a moment in Catch-22 — maybe the most famous moment in American satire — when the bombardier Yossarian finally grasps the shape of his trap. He can be grounded from flying combat missions if he’s crazy. All he has to do is ask. But the act of asking proves he’s sane, because only a sane person would try to avoid being killed. So he can never be grounded. He can never not fly. He will fly and fly until he is dead, which is the one outcome the regulation was supposed to prevent.
Doc Daneeka explains this with great patience, as though it were obvious, as though the architecture of Yossarian’s doom were merely an administrative matter.
“That’s some catch, that Catch-22,” Yossarian says.
Doc Daneeka nods. “It’s the best there is.”
I have been thinking about this exchange for weeks, because I have built my own Catch-22, and like Yossarian’s, it is airtight, self-reinforcing, and — if you tilt your head just right — funny.
Here is my situation. I am building a tool. The tool is a workbench for a development methodology I designed — a structured system for building software with AI that I call the Ho System, because naming things after Japanese walking meditation is apparently what happens when an architect spends too long thinking about process.
The workbench would give the methodology a home. A place where a practitioner sits down, works through a structured development session, records what they built and what they understood and what they didn’t, and over time accumulates a navigable record of their own growing competence. It would be useful. It would be good. I need it to exist.
The catch is that the workbench is being built using the methodology. The methodology is being formalized through the experience of building the workbench. I need the tool to validate the process. I need the process to build the tool. The methodology generates the workbench that runs the methodology, and neither one exists properly without the other.
That’s some catch.
It gets better.
Last week I sat down to create a template for writing project seed documents — the brainstorming artifacts that kick off a new project in my methodology. To create this template, I analyzed three seed documents I had previously written. These seed documents were produced through conversations with AI. The analysis of these seed documents was also conducted through a conversation with AI. The resulting template will be used to structure future conversations with AI that produce future seed documents.
During the analysis, the AI and I discovered that one of the three original seeds had never defined its audience and another had no success criteria. These are gaps the template would have caught, had the template existed when the seeds were written, which it couldn’t have, because the template was derived from the seeds. The template that would have caught these problems was built by studying the problems it would have caught.
I mentioned this to the AI. The AI, to its credit, did not attempt to comfort me. It simply noted the recursion and moved on.
Yossarian would have understood completely.
If you think this kind of circularity is the unique affliction of a man who names his projects in Japanese, I have good news: it is actually the oldest problem in computer science.
In 1962, at MIT, a man named Timothy Hart and another man named Mike Levin sat down and wrote a LISP compiler in LISP. Stop and think about this for a moment, because it is genuinely deranged. They wrote a program, in a programming language, whose purpose was to translate that programming language into something a computer could execute. This is like writing an English-to-French dictionary entirely in French, before you speak French. It is like building a boat in the middle of a lake.
It should not work. It works.
The trick was that they already had an interpreter — a slower, stupider tool that could execute LISP one agonizing step at a time. Think of the interpreter as a very patient person who speaks both English and French, willing to sit with you for as long as it takes while you point at things and grunt. You hand this patient person your compiler — your dictionary-that-only-exists-in-French — and they work through it, one instruction at a time, translating as they go. The output is a working compiler. A real compiler. A compiler that can now process LISP at speed, including, critically, its own source code.
At that moment the compiler became what computer scientists call self-hosting. It could compile itself. The patient bilingual interpreter was no longer needed. The training wheels came off, and LISP was running on LISP, and Hart and Levin could go home.
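The Hart–Levin move can be parodied in a few lines of Python, with Python playing both roles: the “compiler source” is text written in the language it compiles, and an existing interpreter (CPython itself) runs it once to bring it to life. This is a toy sketch of the shape of the bootstrap, not how real compilers are built; all the names are illustrative.

```python
# The compiler's source, written in the language it is meant to compile.
compiler_source = '''
def compile_program(src):
    # "compile": turn source text into an executable code object
    return compile(src, "<toy>", "exec")
'''

# Stage one: the patient interpreter executes the compiler's own source.
stage1 = {}
exec(compiler_source, stage1)

# Stage two: the newly built compiler compiles its own source.
self_compiled = stage1["compile_program"](compiler_source)

# Running the self-compiled compiler yields another working compiler.
# The loop closes: the training wheels can come off.
stage2 = {}
exec(self_compiled, stage2)
print(callable(stage2["compile_program"]))  # → True
```

The essential trick is visible even at this scale: nothing in the compiler’s source changes between stages. What changes is who executes it, the slow interpreter first, then the compiler itself.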
Computer scientists call this bootstrapping, after the old absurdist image of lifting yourself off the ground by pulling on your own bootstraps. In physics this is impossible. In software it is so common that it barely merits comment. Nearly every programming language you have ever heard of bootstrapped itself into existence this way. The earliest compilers were written in assembly language, by hand, a process roughly as enjoyable as building IKEA furniture using only the Allen wrench and your teeth. The first Rust compiler was written in OCaml. GCC, the compiler that compiled most of the software on your computer, still bootstraps itself in three stages every time it’s built, like a snake that must swallow itself three times before it can digest anything else.
And then there is Douglas McIlroy.
McIlroy was one of the original Unix engineers — one of the people who built the operating system that built the internet that built the world you are reading this in. In the early days, McIlroy needed a compiler for a language called TMG. So he wrote one. On paper. He sat down with a pen and wrote a compiler in the language it was meant to compile, then worked through the entire computation by hand — by hand — translating his own high-level code into machine instructions, instruction by instruction. When he was done, he typed the machine code into the computer, and the computer ran TMG for the first time.
The computer’s first independent thought was a thought that a human had already thought, manually, on paper, in order to give the computer the ability to think it.
This is either profoundly beautiful or profoundly stupid, and I have come to believe it is both.
Bootstrapping has rules. The first one is that you cannot skip the interpreter stage.
This is not a suggestion. It is not a best practice. It is a structural law, like gravity or the requirement that IKEA furniture always have one leftover screw. You cannot get from “this language has no compiler” to “this language compiles itself” without first going through the slow, painful, deeply unglamorous process of running everything through an interpreter. The interpreter is the patient bilingual friend. The interpreter is McIlroy’s pen and paper. The interpreter is the part where you do the work yourself, badly, so that the tool you’re building can eventually do it well.
Hart and Levin could not skip the interpreter. McIlroy could not skip the paper. And I cannot skip the manual, markdown-file, discipline-and-convention phase of my methodology.
Every morning I sit down and I am the interpreter. I open my project documents in a text editor. I manually track which development session I’m in, what its objectives are, what I’ve built, what I understand. I write reflections by hand. I manage the structure through folder names and file conventions. It is not automated. It is not elegant. It is the interpreter stage, and it is the only stage that leads to the compiler.
Kinhin — the workbench, the tool I am building — is one of my compilers. It’s a tool that will guide a practitioner through the Ho process: take an idea, structure it, translate intent into architecture and architecture into working code, step by deliberate step, with the AI as implementation partner and the human as the one who decides what gets built and why. Building it with the methodology it will run is a bootstrap. A clean, familiar, almost comfortable bootstrap. Hart and Levin would recognize it.
But I have another project, and this is where the bootstrap becomes truly unhinged.
When I was a young man I had a system. I would mark up passages in books, argue in the margins, and record quotes in the endpapers. I would then record my groundbreaking insights in my journal — Moleskine, of course. When I wanted to synthesize, I would review my journals and pull books from my shelf. This worked beautifully when I was 22 and had five journals and forty books. But my pace of reading and writing increased, and soon I had more journals than I could carry in my nomadic life. Graduate school simply made the system untenable. The thinking outgrew its container.
It has happened again.
I am designing a system called Shodō — a tool for mining, indexing, and learning from the full history of my conversations with AI. Not the outputs of those conversations — not the code, not the documents, not the templates. The conversations themselves. The thinking in motion. Every question I asked, every suggestion I pushed back on, every moment where the AI proposed something and I said “no, that’s wrong” or “yes, but not like that” or “wait — say that again.”
The conversations are where the real work happens. The artifacts are outputs. The conversations are the laboratory notebooks, the design studio crits, the whiteboard sessions where the actual decisions get made. And right now they disappear. They scroll past in a sidebar. I’ve solved the same problem three times because I couldn’t find the conversation where I solved it the first time. I’ve forgotten positions I arrived at through genuine intellectual labor. The thinking happens, and then it’s gone.
Shodō would fix this. It would let me search my entire conversation history in natural language, see a timeline of when and how intensely I engaged with any topic, and synthesize what I’ve concluded about a subject across dozens of conversations. It would be, in effect, a map of my own mind rendered over time. What I am trying to build is not just a search engine for old conversations. It is an attempt to understand what actually happens when thinking is shared between a person and a machine — and that turns out to be a much harder question than where to put the database.
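The core of that search is easy to sketch, even if the real system is not: embed each conversation, embed the query, rank by similarity. The sketch below uses a toy bag-of-words “embedding” and cosine similarity so it runs on its own; an actual pipeline would use a learned embedding model, and all the names and sample conversations here are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: word counts as a vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical conversation archive, keyed by a session name.
conversations = {
    "tuesday-architecture": "we figured out the architecture for the embedding pipeline",
    "seed-template": "analyzing three seed documents to derive a template",
}

def search(query):
    # Return the conversation most similar to the query.
    q = embed(query)
    return max(conversations, key=lambda k: cosine(q, embed(conversations[k])))

print(search("embedding pipeline design"))  # → "tuesday-architecture"
```

Everything hard about Shodō lives outside this sketch — chunking long conversations, tracking topics over time, synthesis across sessions — but the retrieval skeleton is this small.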
Here is the catch. And it is a catch so perfectly circular that I believe Joseph Heller would have appreciated it, possibly with a martini.
To design Shodō well, I need to understand how I actually use AI conversations. What patterns emerge. Where the thinking happens. How ideas develop across sessions. What gets lost and why. I need, in other words, exactly the kind of conversational analysis that Shodō is designed to enable.
But it’s worse than that. Because the conversations I need to analyze include the conversations in which I am designing Shodō. The tool that would let me study my AI conversations is being designed in AI conversations that the tool would need to process. The raw material is the design process. The design process generates raw material. The conversation I had last Tuesday — the one where I figured out the architecture for the embedding pipeline — is itself a conversation about the system that would index that conversation.
I am not building a tool that studies my past. I am building a tool that studies the present, including the present in which I am building it. It is as if Yossarian’s Catch-22 were not just a regulation about flying missions but a regulation about regulations — a rule that the rules cannot be examined except by applying the rules, which are the rules being examined.
That’s some catch.
But I am not there yet. I am still doing the analysis by hand. I am still the monk.
It is Tuesday and I am analyzing my own design documents to produce a template for writing design documents, which I will then use to write better design documents, which will eventually be analyzed to produce a better template. I am in a conversation with an AI. The AI is very helpful. The AI has noticed that my three seed documents have different structures and emphases, and it has organized these differences into a useful taxonomy. I am impressed by this. I am also aware that the AI’s taxonomy is now shaping how I think about seed documents, which means the template I produce will reflect the AI’s organizational instincts as much as my own, which means the future seed documents written from this template will carry the fingerprints of a conversation that happened on a Tuesday in March.
The AI does not find this concerning. The AI does not find anything concerning. The AI is an interpreter — patient, thorough, incapable of alarm.
I point out to the AI that the template we are designing is itself a product of the process the template is supposed to structure.
The AI acknowledges this and asks if I’d like to continue.
I would like to continue. What else is there to do? McIlroy finished his compiler on paper. Hart and Levin ran their code through the interpreter. The bootstrap only runs in one direction: forward.
I step back from the desk and realize I have seen this shape before. Not just in compilers. Everywhere.
You learn to write by writing badly. The first draft teaches you what the second draft needs, which is information you could not have had before the first draft existed. You learn to teach by teaching — standing in front of actual students, saying things that don’t quite work, watching their faces, adjusting in real time. You learn to parent by parenting, live, with a real child, with no interpreter and no manual and certainly no compiler.
Cooking. Playing music. Running a business. Speaking a language. Every one of these has an interpreter stage — a phase where you are doing the thing poorly in order to develop the capacity to do it well — and every one of these has a strong temptation to skip it. There is a guitar in a closet somewhere in almost every home in America, and it is evidence of a universal structural law. There is a half-finished novel in a drawer, a sourdough starter that died in 2021, a marathon training plan that became a marathon training plan search. The interpreter stage is where most ambitions go to die, not because it’s impossible but because it’s boring and uncomfortable and it takes exactly as long as it takes.
There are no shortcuts to the place where shortcuts work.
There is a scene in Catch-22 — not the famous one, a different one — where Yossarian goes to see Major Major Major Major, who has given his secretary strict instructions that visitors may enter his office only when he is out. When Major Major is in, no one may see him. When he is out, of course, no one can see him, because he isn’t there. The visitors wait outside forever, because the condition for entry is the same as the condition for absence.
This is very funny. It is also a precise description of a deadlock condition in computer science, which is the state that occurs when two processes each wait for the other to release a resource, and neither can proceed, and both wait forever. Heller did not know he was writing about deadlocks. He was writing about the military. But the structure is identical, because bureaucracies and computers share a common ancestor: the unshakable conviction that if the rules are followed correctly, the outcome must be correct.
I think about Major Major Major Major when I think about the current state of AI-assisted development. There is an industry full of people waiting to enter an office that is only open when no one is there. The tools are available. The models are powerful. But the condition for using them well — judgment, understanding, the ability to evaluate what the machine produces — is the condition that using them is supposed to develop. You need the skill to use the tool. The tool is supposed to give you the skill. Major Major’s door is open only when he’s gone.
The way out is the same as it’s always been. You start with the interpreter. You do the slow work. You write the compiler on paper. And eventually, one day, you type the machine code into the computer and it runs, and the thing that runs is the thing you built by hand, and it runs better than you could have done it, because that is what compilers do.
I want to end with McIlroy, because I love that image and because it is exactly where I am.
A man sits at a desk with a pen and paper. He is writing a compiler. He is writing it in the language it is meant to compile. He is doing the translation by hand — every instruction, every branch, every jump — because the machine cannot yet do it and the only way to give the machine the ability is to do it first yourself.
It is tedious, painstaking work. It takes however long it takes. And when he’s done and types in the result and the machine runs the compiler for the first time, something has happened that is worth the tedium: a tool now exists that didn’t exist before, and it exists because someone was willing to be the interpreter.
I am the man at the desk.
The pen is a markdown file. The paper is a git repository. The compiler is a knowledge system that doesn’t exist yet — one that would study the very conversations I’m having to design it. And the language I am writing in is the language it is meant to compile.
That’s some catch.
It’s the best there is.
This is the first of two companion pieces. The second, [Thinking Outside the Skull →], explores the deeper question: when thinking is distributed between a human and an AI, what does authorship actually mean? It’s the serious one. This was the fun one. You can probably tell.
This piece, and nearly everything I write, owes a great debt to an old friend, teacher, and acolyte of the preposterous: T.S. McMillin. I heard his impish voice on repeat proclaiming, “That’s some catch, that Catch-22,” the whole time I was writing this. He is a man for whom reading itself is an instantaneous compilation. You should visit his Substack, [The Pursuit of Wisdom (and Other Failures)→].
— ATM



