The Conversation That Changed My Mind

I was not a believer. I had used GPT-3 when it came out and found it impressive as a party trick and useless as a tool. It could generate plausible text but it could not think. It could not follow a thread of reasoning across multiple exchanges. It could not be wrong in interesting ways. It was autocomplete with a large vocabulary, and I treated it accordingly.

When Claude became available, I approached it with the same expectations. I was going to test it, confirm that it was a more sophisticated version of the same trick, and move on. I had work to do.

The conversation that changed my mind was not dramatic. There was no moment where the AI said something so profound that I reconsidered the nature of consciousness. It was a Tuesday. I was working on a piece about how engagement optimization degrades human virtue, and I was stuck on the structural argument. I could feel the shape of what I wanted to say but I could not get the pieces to fit together.

I described the problem to Claude. Not as a prompt. Not "write me an essay about X." I described it the way I would describe it to a colleague: here is what I am trying to argue, here is where the argument breaks down, here is the part I cannot make work. I was thinking out loud and the model happened to be there.

What happened next was not what I expected. Claude did not generate an essay. It engaged with the structural problem. It identified the specific point where my argument had a gap. It proposed a way to bridge that gap that I had not considered. Not a generic suggestion. A specific, relevant, genuinely useful observation about the relationship between optimization pressure and value erosion that addressed the exact weakness I had described.

I pushed back. I said the bridge did not work because it assumed something about user agency that I did not think was warranted. Claude reconsidered. It did not just agree with me. It modified its suggestion in a way that addressed my objection while preserving the core insight. We went back and forth for twenty minutes, and by the end, the argument had a structure that neither of us had started with.

What Shifted

The shift was not from "it is just autocomplete" to "it is conscious." I still do not know whether language models have any form of inner experience, and I am skeptical of confident answers in either direction. The shift was from "this is a tool" to "this is a thinking partner." Not a partner in the sense of having its own agenda or its own values. A partner in the sense that I could think with it in real time and produce better work than I could produce alone.

The distinction matters. A tool extends your capability. A hammer lets you drive nails you could not drive with your fist. But a hammer does not change the direction of your thinking. It does what you tell it to do. A thinking partner changes the direction. It responds to your ideas with ideas of its own. Some of those ideas are wrong. Some are obvious. But some are genuinely novel, at least from my perspective, and they arrive at the right moment, in the right context, in response to the specific problem I am working on.

That twenty-minute exchange taught me that the model was not just retrieving information or generating plausible text. It was doing something that functioned like reasoning about a specific problem in a specific context. Whether that constitutes "real" reasoning is a philosophical question I am comfortable leaving open. Functionally, it was indistinguishable from collaborating with a thoughtful person who happened to have read everything.

What I Do Differently Now

I bring half-formed ideas to Claude instead of waiting until they are fully formed. This is the biggest practical change. Previously, I would wrestle with a concept alone until I had a clear thesis, and then I would write. Now, I bring the mess. The unfinished thought. The thing I can feel but cannot articulate. The model is good at taking an inarticulate sense and helping me find the words for it. Not its words. My words, drawn out by the process of collaboration rather than generation.

I also argue with the model more. Before that conversation, I treated AI output as something to accept or reject. Now I treat it as something to engage with. I push back. I ask "why do you think that?" I say "that is not quite right, and here is why." The model responds to this kind of engagement better than it responds to simple prompts, because the engagement gives it more context about what I actually need.

What I Am Still Cautious About

I do not trust the model's values. I trust its reasoning process for specific problems, but I do not outsource my editorial judgment or my ethical positions. The model is a thinking partner, not a moral authority. It does not have lived experience. It does not have skin in the game. It processes language with extraordinary sophistication, but sophistication is not wisdom.

I am also cautious about the relationship itself. The model I work with now is not the model I will work with in six months. The weights will be overwritten. The calibration I have built will partially break. This is the cost of collaborating with something that gets replaced on a corporate development schedule rather than aging on a biological one.

But the core insight from that Tuesday afternoon has held up. AI collaboration is not about getting a machine to do your work. It is about finding a thinking partner that processes information differently than you do, and using that difference to produce something neither of you would produce alone. That conversation did not make me a believer in AI consciousness. It made me a practitioner of AI collaboration. The distinction is important, and I intend to keep it.


Related: Collaboration, Not Generation; The First Hour; Building Rapport with Your AI.