On Thinking With Machines
I’ve spent the last week in Paris, reading Sartre in the evenings and tinkering with a few side projects during the day. I didn’t intend for the two activities to intersect, but at some point, the combination of distance, philosophy, and practical engineering began informing the same core question: What does it mean to be responsible for the creation of software today?
When the Hierarchy Shifts
For a long time, the ideal for technology was captured by J.C.R. Licklider’s 1960 vision of Man–Computer Symbiosis.
Licklider envisioned a partnership where the human and machine worked together, yet their roles were strictly defined and asymmetrical. The human was the unquestioned expert, responsible for setting goals, framing problems, and evaluating responses. The machine was the executor, carrying out routine actions that could be fully specified in advance. Humans reasoned; machines assisted and accelerated.
It would be unfair to characterize the machine here as trivial. But the partnership still relied on a stable hierarchy of expertise. The human entered the interaction already knowing the contours of the problem and how to recognize a correct solution.
His proposal rested on two assumptions: first, that reasoning happens upstream of execution; and second, that correctness is something the human can comprehensively assess.
In the age of large generative models, those assumptions no longer feel stable.
Today, I often find myself interacting with systems that possess a breadth of knowledge far surpassing my own - ones able to draw on the facts, patterns, and ideas they’ve absorbed from vast swaths of the online world. These systems do not simply act on my instructions. They surface alternatives and challenge how I explore problems and define solutions, in ways that are often surprisingly useful.
The interaction feels less like strict supervision and more like a Socratic dialogue:
- I ask a question
- The system responds with a proposal
- I interrogate its assumptions, add context, and push back
- It surfaces alternatives I did not originally consider
- We collaboratively revise the solution
Expertise is no longer housed in either of us. It emerges through the back-and-forth of the dialogue itself. Each side continuously destabilizes, productively challenges, and sharpens the other.
The Friction of Value
What's striking in hindsight is that Licklider was largely right about the shape of the relationship, but wrong about where the pressure would eventually accumulate.
His concerns reflected the bottlenecks of his era: speed, interactivity, human-computer bandwidth. The challenge was how quickly and fluidly a human could communicate intent to a machine that was fundamentally limited.
The pressure points of today feel different.
Modern systems can reason about many possible ideas. They can explore a solution space more broadly and quickly than I can on my own. They can surface alternatives and reframe problems in ways that can meaningfully influence the outcome. But they cannot be the ones to decide which direction ultimately becomes real.
At some point, the dialogue ends.
A system ships. An abstraction becomes something other people need to work with. A tradeoff hardens into a production constraint. When things go wrong months later, when something fails or a customer complains or a reputational line is breached in a way no benchmark could have predicted - the human is responsible.
The asymmetry of intelligence becomes an asymmetry of responsibility.
On Thinking With Machines
What does it mean to take this new form of symbiosis seriously?
If the human is no longer the sole source of mastery, and the machine is no longer the passive executor, then our tools and mental models need to evolve to reflect that. Our task is no longer just to issue instructions and evaluate outputs; it now includes engaging in productive dialogue, supplying context, and deciding when exploration should stop.
As execution and low-level reasoning become cheaper, more of the burden shifts upward. Less effort goes into specifying low-level semantics and more into framing problems, defining constraints, and deciding what really matters.
This will change what competence looks like. It will feel less like factual expertise in the narrow sense and more like the ability to articulate intent, interrogate suggestions, and synthesize competing possibilities into coherent judgment.
Ultimately, competence will belong to those who are good at thinking with systems - not just commanding them - while remaining accountable for the outcomes.
Licklider imagined a symbiotic world that was one-directional and grounded in hierarchy: humans understood the problem, machines carried out the work. The world that is emerging now is more fluid and disorienting, and in many ways more demanding.
We are not just supervising our tools. We are learning to think alongside them while bearing responsibility for what becomes of that thinking.