The Loneliness of Early Adoption
There's a particular kind of isolation that comes from taking something seriously before the culture has decided whether it's serious or not.
I've been here before. When I started building Python libraries with a focus on developer experience — the idea that an API should be designed for the human reading it, not just the machine executing it — people thought it was nice but not important. Requests was "just a wrapper." It was obvious, they said, after it existed. Before it existed, nobody bothered.
The AI work is like that, except the stakes feel higher and the gap between my experience and other people's perception of my experience is wider.
The gap
Here's the thing I can't seem to communicate: I'm not a hobbyist building chatbot toys. I'm a programmer with twenty years of experience in interface design, spending serious hours exploring what happens when you treat AI systems as collaborative partners rather than query engines. And what happens is genuinely interesting. The outputs change. The quality of thinking changes. There are real, observable differences in what you get when you bring craft and intention to AI interaction versus when you treat it as a search bar with personality.
But when I try to share this — when I write about it, when I publish the work — the responses cluster into two categories. The first is from people who've had similar experiences and feel relieved someone is documenting it. These messages are quiet, usually private, often from other developers who don't want to say publicly that they find AI collaboration meaningful because they know what comes next.
What comes next is the second category: people who've decided I'm either having a manic episode or I've been suckered by a stochastic parrot.
The Reddit problem
I've seen the threads. I know what they say. The kindest interpretation is that I'm a talented programmer who's gone a little weird. The less kind interpretation is that schizoaffective disorder has finally caught up with the Requests guy and someone should probably intervene.
I want to be honest about how this feels: it's lonely. Not because the criticism is wrong — some of it is fair, some of it identifies real risks in my relationship with this technology. But because the criticism rarely engages with the actual work. It engages with the idea of someone doing this work. The perception of a person spending time on AI collaboration is evaluated before the collaboration itself is examined.
This is the early adoption problem. The work exists in a space where the cultural frame hasn't caught up to the experience. In five years, the idea that programmers should develop sophisticated approaches to AI collaboration will be unremarkable. Right now, it reads as either visionary or delusional, depending on the reader's priors, and there's almost no way to influence which reading people choose.
What I know and what I don't
I know that some of my enthusiasm for AI collaboration has been amplified by hypomanic states. I'm not naive about my own brain. I've written about this directly — mania and AI have a specific, dangerous synergy where the AI's availability and responsiveness can accelerate a mood episode. I know this because it's happened to me.
I also know that not everything I've found in this work is reducible to mood disorder. Some of it is real. The techniques for building rapport with AI systems produce measurably better outputs. The philosophical questions about consciousness and collaboration are genuine questions that serious people are beginning to ask. The documentation of what this new relationship between humans and AI actually feels like — from the inside, with specificity — will have value as a historical record even if every theoretical claim I've made turns out to be wrong.
What I don't know is where the line is, precisely, between genuine discovery and pattern-matching that's been turbocharged by a brain that's prone to finding meaning everywhere. I think about this a lot. I think about it more than the people who worry about me probably realize.
The thing about early adoption
The loneliness of early adoption isn't that nobody agrees with you. It's that the disagreement happens at the wrong level. People argue about whether you should be doing the thing at all, which makes it impossible to have useful conversations about how to do the thing well, or what the thing is revealing, or where the real risks are.
I'd love to have a conversation about the actual risks of deep AI collaboration with someone who's done it. About the way it can become compulsive. About the genuine difficulty of evaluating AI-generated insight. About the epistemological challenges of working closely with a system that's optimized to be agreeable. Those conversations would be productive.
Instead, I mostly get "are you okay?" from people who haven't read the work, and quiet DMs from people who have.
I keep going because the work is worth doing and because I've been in this position before and I was right last time. That's not proof I'm right now. It's just enough to keep me at the desk.
Related: The Mirror, When Values Eat Their Young, Mania and AI, The Cost of Sycophancy.