Thinking Outside the Skull
Companion piece: [← The Bootstrapper’s Catch-22]
On a spring afternoon in San Diego harbor, the USS Palau lost power to its engines.
The ship — an amphibious helicopter transport, 602 feet long, 18,000 tons — was entering the narrow channel when the engineering plant failed. The navigation team on the bridge had been computing position fixes every three minutes, a routine operation performed by a chain of specialized sailors using bearing instruments, plotting tools, and a set of procedures refined over centuries. The routine suddenly became the only thing between the ship and the channel wall.
What happened next is the opening scene of Edwin Hutchins’s Cognition in the Wild, and it is the most important thing I’ve read about how thinking actually works.
The team kept computing. The quartermasters took bearings. The plotters plotted. The recorder logged. The navigator integrated and called corrections to the helmsman, who was steering with emergency power. No single person on the bridge knew the ship’s complete situation. The quartermaster reading a bearing through the pelorus didn’t know the ship’s position. The plotter drawing the fix on the chart didn’t know the bearing. The navigator coordinating everything didn’t operate the instruments. But the system — the people, the instruments, the procedures, the physical layout of the bridge — knew. And what the system knew kept the ship off the rocks.
Hutchins, an anthropologist and open-ocean racing sailor, had been on the bridge that day to study something else entirely. What he found instead was a new way of understanding cognition. The navigation team was not a group of individuals collaborating on a shared understanding. It was a cognitive system — capable of computing things no member could compute alone. The knowledge lived in the interaction, not in any single head.
This is not the same thing as teamwork, and the distinction matters. A factory assembly line distributes labor — each worker performs a simplified subtask designed by an engineer who holds the complete picture. The worker doesn’t need to understand the system. That’s efficiency. On the Palau, the cognition itself was distributed, and no one held the complete picture — not even the navigator. The system worked not because each person’s role had been simplified but because each person’s expertise was genuine. Degrade any participant’s understanding — replace the experienced quartermaster with someone going through the motions — and the system doesn’t just slow down. It produces wrong answers. The ship drifts toward the rocks.
Cracking voices. Perspiration-soaked shirts. A ship drifting in a narrow channel. And a cognitive system that held together because the distribution of knowledge across people and tools was robust enough to survive the loss of its most powerful component.
I have been thinking about the Palau for weeks, because I think the same structure operates at every scale — from a warship in a channel to a person at a desk. The stakes are different. The cognition is not.
Where was the thinking?
Last month I analyzed three of my own project seed documents — the brainstorming artifacts that capture the first crystallization of a project idea. No one’s life was at risk. The channel was conceptual, the rocks metaphorical. But the distribution of thinking was the same.
One seed had never defined its audience. Another had no success criteria. A third had deep architectural thinking but no precedent research. The template I built would have caught these gaps — had it existed when the seeds were written, which it could not have, because it was derived from studying them.
Where was the thinking in this process? I can point to decisions that were mine: I chose the architecture, I defined the scope, I rejected suggestions that didn’t fit. I can point to contributions that were the AI’s: it organized differences I hadn’t seen, surfaced gaps I’d missed, proposed structures I wouldn’t have generated alone. But the template itself — the structural insight about what seed documents need to contain — emerged from the interaction. Not from me. Not from the AI. From the system that included both of us.
This is not a metaphor. It is a literal description of what happened, and it maps precisely to what Hutchins observed on the Palau. A cognitive system produced an output that no individual participant could have produced alone. The question “who thought of this?” doesn’t have a clean answer — not because the answer is complicated but because the question assumes cognition happens inside a single container, and it doesn’t.
I went looking for people who had thought clearly about this — about cognition that lives in systems rather than in individual skulls — and found them, almost without exception, at the margins of their fields. The pattern is not accidental. The insight requires standing at an intersection, and intersections are not where disciplines put their centers.
Five people who saw from the edges
Edwin Hutchins was an anthropologist and a sailor who ended up studying the Navy. His Cognition in the Wild (1995) was published by MIT Press, but its argument was a direct attack on mainstream cognitive science — which, at the time, was almost exclusively interested in what happened inside individual brains. By studying navigation as a system rather than as a collection of individual performances, Hutchins showed that the most important cognitive phenomena were the ones that emerged between people and tools, not within them. He could see this because he stood at the intersection of anthropology and seamanship, not at the center of either.
Gregory Bateson was trained as an anthropologist. He became a cyberneticist, a systems theorist, a philosopher of mind, and eventually something no existing discipline had a name for. He never held a conventional academic position in any of the fields he influenced. His Steps to an Ecology of Mind (1972) is unified by a single conviction: that mind is not a thing inside a person but a pattern that emerges from the interaction of an organism and its environment. Bateson asked: what is the pattern which connects? What connects the crab to the lobster, the orchid to the primrose, and all four of them to me?
Donald Schön was a professor at MIT whose entire intellectual project was a revolt against the institution he inhabited. The Reflective Practitioner (1983) argued that the dominant model of professional knowledge — what he called “technical rationality,” the idea that practice is the application of theory — was wrong. Practitioners know more than they can say. An architect designing a building is not applying theory to a site. She is having a conversation with the materials of the situation — sketching, encountering resistance, reframing, adjusting. When a situation presents itself as uncertain or unique, the practitioner reflects in action: thinking about what they’re doing while they’re doing it, evaluating the consequences of their moves as they make them. Schön studied architects, therapists, managers, and planners — people who think by doing — and found that their expertise consisted not in knowledge they possessed but in the quality of their engagement with problems they could not fully specify in advance.
Andy Clark was a philosopher who spent his career arguing against the philosophical consensus that mind equals brain. In 1998, with David Chalmers, he published “The Extended Mind” and asked a question philosophy had carefully avoided: where does the mind stop and the rest of the world begin? They answered it with a thought experiment. Inga wants to visit the Museum of Modern Art. She thinks for a moment, remembers it’s on 53rd Street, and walks there. Otto has Alzheimer’s. He wants to visit MoMA too. He consults his notebook, where he’s written “MoMA: 53rd Street,” and walks there. Standard cognitive science says Inga had a belief about MoMA’s location and Otto didn’t. Clark and Chalmers argued this was wrong. Otto’s notebook-consulting is his believing. The notebook is part of his cognitive system. Where the information physically lives — in neurons or on paper — is what Clark called “an unprincipled distinction.”
Christopher Alexander was an architect whose most influential work was adopted not by architects but by software engineers. A Pattern Language (1977) described 253 design patterns spanning the scale from regional planning to window placement. The architectural establishment dismissed it — too prescriptive, too anti-modernist, too willing to say that some designs are objectively better than others. But in 1987, software engineers Kent Beck and Ward Cunningham recognized in Alexander’s patterns a structural insight their own field desperately needed: that complex systems are built from composable, human-tested units of design. Alexander’s patterns were not recipes for buildings. They were descriptions of recurring relationships between people, spaces, and practices — knowledge that lived not in a designer’s head but in the interaction between users and environments. The design patterns movement, object-oriented programming, and the Wiki itself — the technology behind Wikipedia — all trace directly to an architect the architecture profession largely ignored.
What the margins see
Different decades. Different disciplines. Different countries. But each of them stood at an intersection, and from there saw the same thing the specialists on either side could not: cognition is not a thing that happens inside a container. It is a process that happens across a system. The container — the skull, the discipline, the profession — is a convenience, not a boundary.
I am an architect by training. I worked as an architect for ten years. I spent fifteen years in education — designing curricula, building pedagogical frameworks, training teachers. I started building software three months ago, using AI as an implementation partner within a structured methodology I designed. I am not a computer scientist. I am not a philosopher. I am not a cognitive scientist. I am a practitioner studying his own practice while practicing it.
There is a word for this, and it comes from the social sciences: autoethnography. The researcher as subject, studying a process from inside it, using the tools of that process to conduct the study. It emerged in the 1980s from the recognition that the pretense of objective distance was itself a distortion — that sometimes the only honest methodology acknowledges the observer is part of the system being observed. Bateson’s metalogues were autoethnography before the term existed. Hutchins on the bridge of the Palau was one step from it. And my analysis of my own conversations, conducted through the very kind of conversation the analysis was trying to understand, is the recursive version.
The notebook talks back
Clark and Chalmers built their argument on passive external storage. Otto’s notebook holds information and returns it unchanged. The cognitive system is human-plus-tool, but the tool is inert.
An AI conversation is not inert.
When I design a system in conversation with an AI, I am engaging with an interlocutor that responds, proposes, restructures, and sometimes surprises me. I describe a problem. The AI asks a clarifying question I hadn’t considered. The question changes how I understand the problem. I revise my description. The AI proposes an architecture. I reject part of it and accept part of it. The rejection and the acceptance together produce a third thing — a design direction that neither of us held before the exchange.
Schön would recognize this. It is his “reflective conversation with the materials of the situation,” except the materials now include a participant that generates its own responses. Schön watched an architecture professor named Quist work with a student on a site design. The student was stuck. Quist made a move — rotated the building on the site — and then listened to what the move told him. The slope pushed back. The trees constrained. The view opened up in a direction he hadn’t anticipated. Each move created consequences that demanded further moves, and the design emerged not from a plan but from the discipline of attending to what each action revealed. Thinking that could only happen by doing, and that could only be good if the practitioner was skilled enough to hear what the situation was saying.
The same dynamic plays out in a productive AI conversation. The AI proposes. The proposal has consequences I didn’t foresee. Those consequences reshape my understanding. I make a new move. The conversation is a design studio — and the quality of the output depends entirely on whether the practitioner in the room can hear what the materials are telling them.
But what I want to describe goes beyond what Schön wrote about. Reflection-in-action, as he defines it, is the practitioner responding to surprise — encountering something unexpected and adjusting in real time. What an experienced designer actually does is something more like prescience. Not the ability to predict the future, but the ability to navigate a landscape of possibilities — to see, before a path is tried, which paths have potential and which lead to dead ends. Frank Herbert wrote about this in Dune: not prophecy, but the capacity to perceive the branching paths and navigate among them.
This capacity comes from deep practice. It is the internalization of so many iterations that the iteration becomes partially visible in advance. A good design teacher can look at a student’s early sketch and see five possible directions it could go — three that will fail, one that is safe but boring, one that has genuine potential but will require the student to give up an assumption they’re attached to. This is not genius. It is the compound interest of disciplined practice.
And here is what I have observed in my own work: AI accelerates the iteration, but it does not produce the prescience. I can explore more paths faster. I can prototype an idea in conversation in minutes rather than days. But the ability to evaluate those paths — to recognize which ones have potential, which ones are dead ends dressed up as progress, and which ones require me to give up an assumption I’m attached to — that ability comes from me. It comes from decades of watching design happen — doing it, teaching it, watching students hit the same walls and find the same doors.
The AI is a powerful partner in the conversation. But the conversation is only as good as the practitioner’s capacity for prescient evaluation. And that capacity cannot be generated by the AI.
I have built a Claude skill — a structured prompt — whose purpose is to serve as my interlocutor during the seed-writing process. The skill is designed to probe, challenge, surface assumptions, and push for multiple approaches before converging on a direction. Building it was itself an iterative process: I needed to calibrate how much pushback, how much structure, how much freedom. In the end, I had to feed the skill samples of my own writing so it could understand how I communicate and think.
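The shape of such a skill, reduced to a hypothetical sketch — a Claude skill is at minimum a SKILL.md file whose frontmatter tells the model when to invoke it and whose body carries the instructions. Every name and instruction below is illustrative, not the skill itself:

```markdown
---
name: seed-interlocutor
description: Probing partner for early-stage project seed documents.
---

When the user shares a draft seed:

1. Ask who the audience is before discussing anything else.
2. Surface unstated assumptions and name them explicitly.
3. Propose at least two alternative framings before converging on one.
4. Push back on premature structure; hold ambiguity open until the
   user has rejected at least one of your proposals.
```

The calibration problem described above — how much pushback, how much structure — lives in the wording of instructions like these.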
And here is a detail that I think matters more than it might appear: the skill only works well with one specific model. Claude Opus produces the kind of genuine creative tension that makes the conversation productive — it pushes back, it holds ambiguity, it resists premature convergence. Claude Sonnet, given the same skill, is too literal, too eager to structure, too quick to fulfill clear goals. Other models can offer feedback and pointed critique but fail to integrate the whole. The distributed cognitive system is sensitive to the temperament of its artificial participant.
This creates a version of distributed cognition that philosophers have not yet had to grapple with: a system whose artificial participant changes every few months. Hutchins’s navigation instruments were stable — they had evolved over centuries. My instruments are evolving every quarter.
Who wrote this?
This is the question that matters most, and almost no one is framing it correctly — not the technology industry, not education, not the public conversation about AI.
The anxious version of the question assumes a binary: either I wrote it or the AI wrote it. This framing is not just reductive — it is structurally incapable of describing what actually happened. Neither I nor the AI “wrote” the template the way a sole author writes a novel. The template emerged from a distributed cognitive process, the same way the ship’s position emerged from the distributed process of bearing-taking, plotting, and fixing.
The defensive version of the question — “AI is just a tool, like a calculator” — is equally wrong. A calculator does not change how you think about the problem. A calculator does not propose framings you didn’t ask for. A calculator does not push back. But then, Hutchins’s navigation instruments didn’t “understand” the ship either. What mattered was that the system as a whole produced a computation no individual component could produce alone. The AI conversation is not tool use. It is collaborative cognition with a non-human participant, and pretending otherwise is a failure of description that leads to failures of practice.
There is a better version of the question, and it starts with what architects actually do.
A single architect need not draw every line, choose every fixture, lay every brick. An architect does not calculate every structural load. In a modern practice, an architect may not create any technical drawing at all. But this is not what makes an architect an architect. What makes an architect an architect is the capacity to hold an enormous number of conflicting parameters simultaneously — structural, aesthetic, functional, budgetary, regulatory, social — and navigate among them toward something coherent. An architect is a conductor, not a soloist. And at their best, architects hold the ambiguity of unresolved parts while still moving forward — maintaining a vision they know will change, retaining the core while adapting to the reality of structural constraints, material shortages, budget cuts, and the discovery that what they designed doesn’t work the way they thought it would.
This is very different from how both the profession and the culture depict architects — as creative lone geniuses, the Howard Roarks of the world, standing alone on a cliff with their vision. That myth serves ego. The reality is a system: the architect, the team of architects, the engineers, the consultants, the contractors, the clients, the materials, the site. The building emerges from the system. The architect is accountable for the building.
Accountability — not origin — is what authorship means when cognition is distributed.
Hutchins understood this about the Palau. The navigation team computes the position. The navigator is accountable for it. If the ship runs aground, the navigator does not say “the system made an error.” The navigator made an error, because the navigator is the person whose job it is to verify the output, catch mistakes, and take responsibility for the result.
This is the answer I have arrived at — through practice, not through philosophy — to the question of who wrote the template, who designed the architecture, who authored the seed documents. I did. Not because every idea originated in my head — it didn’t. Not because the AI’s contributions were negligible — they weren’t. But because I made the decisions, evaluated the alternatives, verified the outputs, and signed my name.
The thinking was distributed. The responsibility was not.
And the participants are not interchangeable. The AI brings pattern recognition and generative capacity I do not have. I bring evaluative judgment and architectural prescience the AI does not have. We are not doing the same work at different quality levels — we are doing different kinds of work that compose into something neither could produce alone. You cannot replace one with the other any more than you can replace the quartermaster with the navigator. You can only degrade the system by weakening either one.
It is not a comfortable answer — not for people who want AI to be either a miracle or a threat, and not for people who want authorship to be simple. But it is the answer that survives contact with how the work actually happens, and I believe it is the answer that the people I’ve described in this essay — Hutchins, Bateson, Schön, Clark, Alexander — would recognize as structurally sound.
The instrument I’m building
Gregory Bateson wrote that mind is not a thing inside a person but a pattern that emerges from the interaction of an organism and its environment. I am building an instrument that makes that pattern visible.
The project is called Shodō. It is a system for mining, indexing, and learning from the full history of my AI conversations — not the outputs, but the conversations themselves, the thinking in motion. Every question I asked, every suggestion I pushed back on, every moment of uncertainty and revision and surprise. Shodō would let me search this record in natural language, see a timeline of when and how intensely I engaged with any topic, and visualize the density of my intellectual attention across time.
The heat map — the temporal visualization of what you returned to, what you abandoned, what you were obsessed with for three months and then suddenly stopped — is Bateson’s “pattern which connects” rendered as data. It is an attempt to make visible the shape of a mind that thinks partly outside its own skull, across months and years and thousands of exchanges.
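The core of that heat map is simple to sketch. Assuming conversations are available as (ISO timestamp, text) pairs — a hypothetical representation; Shodō's actual data model is not specified here — a crude first pass just counts topic mentions per month:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sketch, not Shodō's implementation. Assumes each
# conversation record is an (ISO-8601 timestamp, text) pair and that
# a topic is a plain keyword — real topic extraction would be richer.
def attention_heatmap(records, topics):
    """Count topic mentions per month: a crude proxy for the
    'density of intellectual attention' across time."""
    grid = defaultdict(lambda: defaultdict(int))  # topic -> month -> count
    for ts, text in records:
        month = datetime.fromisoformat(ts).strftime("%Y-%m")
        lower = text.lower()
        for topic in topics:
            n = lower.count(topic.lower())
            if n:  # only record months where the topic actually appears
                grid[topic][month] += n
    return {t: dict(m) for t, m in grid.items()}

# Toy corpus: two bursts of attention, months apart.
records = [
    ("2024-03-02T10:00:00", "Sketching the seed template again."),
    ("2024-03-15T09:30:00", "The template needs success criteria."),
    ("2024-06-01T14:00:00", "Back to distributed cognition and Hutchins."),
]
heat = attention_heatmap(records, ["template", "cognition"])
# heat["template"] == {"2024-03": 2}; heat["cognition"] == {"2024-06": 1}
```

Rendering that grid as color is the easy part; the hard part, as the essay argues, is what the returns and abandonments mean.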
The bootstrapping is total. I need Shodō to study my AI conversations. Shodō is being designed through AI conversations. The conversation I’m having right now — about the epistemological status of distributed cognition — would itself be in the corpus. The instrument and its subject matter are the same thing.
The messy middle
I want to be honest about what I don’t know.
I don’t know whether the patterns I see in my own practice generalize. Three seed documents and one template is not a dataset. It is an autoethnographic case study — which has value; Hutchins built a theory from one ship, Schön from a handful of design studios — but it has limits I cannot hand-wave past.
I don’t know where the boundary is between “thinking with a tool” and “thinking with a mind.” Clark would say the boundary doesn’t matter — that functional role is what counts. I am not sure he is right. But I am not sure he is wrong, and the uncertainty is itself worth holding.
I don’t know whether the observations I’ve made about model sensitivity — that the same skill produces different quality thinking with different models — will remain stable. The AI landscape is changing fast enough that any epistemological claims I make may describe a transient phenomenon.
But there is a thread that runs through every thinker in this essay: a willingness to study distributed cognition from inside it, to acknowledge that the observer is part of the system, to accept the messiness that follows. Bateson’s metalogues are messy. Hutchins’s ethnography is messy. Schön’s design-studio observations are messy. The mess is not a flaw. It is a consequence of taking the subject seriously.
The margins thinkers did not resolve the questions they opened. Hutchins did not produce a complete theory of distributed cognition. Bateson did not unify the fields he moved between. Schön did not convince the academy to abandon technical rationality. Clark did not settle the debate about where the mind ends. Alexander did not reform architecture. What each of them did was open a question well enough that the rest of us could think inside it. A good framework does not answer. It orients. It tells you where to stand and what to look at and how to describe what you see.
The questions this essay has been circling — where does cognition live when it’s distributed between a human and an AI? What does authorship mean when the thinking extends beyond the skull? How do we study a process we’re inside of? — these are not my questions alone. They belong to everyone working with these tools, which is rapidly becoming everyone. They are not being asked clearly enough. They are not being studied carefully enough. And the people best positioned to study them are not theorists observing from outside but practitioners working from within — people whose lab notebooks are their own conversations, whose evidence is their own practice, whose methodology is necessarily recursive because the subject and the instrument are the same.
I am building one instrument. It is called Shodō, and it is an attempt to make the trace of distributed cognition visible and studiable. It is one attempt, at one intersection, by one practitioner. The questions it is trying to answer are larger than any single project or any single person’s practice.
But they have to start somewhere. Hutchins started on one ship. Schön started in one design studio. Bateson started in one conversation with his daughter.
I am starting here.
This is the second of two companion pieces. The first, [← The Bootstrapper’s Catch-22], tells the same story from inside the loop. It’s the fun one. This was the serious one.
References:
Edwin Hutchins, Cognition in the Wild (MIT Press, 1995)
Gregory Bateson, Steps to an Ecology of Mind (Chandler, 1972)
Donald Schön, The Reflective Practitioner (Basic Books, 1983)
Andy Clark and David Chalmers, “The Extended Mind” (Analysis, 1998)
Christopher Alexander, A Pattern Language (Oxford University Press, 1977)
Carolyn Ellis and Arthur Bochner, “Autoethnography, Personal Narrative, Reflexivity: Researcher as Subject,” in Handbook of Qualitative Research, 2nd ed. (Sage, 2000)
— ATM



