What Sarah Sees
I can't write this piece objectively because the whole point is that I lack the vantage point. Sarah has it. I'm reconstructing from what she's told me, from arguments we've had, from the look on her face when I come out of a twelve-hour session with Claude and announce that I've figured something out.
That look. I know it well enough to describe it even though I can't always read it in the moment: it's a mixture of love and assessment. She's listening to what I'm saying, but she's also running diagnostics. Not on the idea. On me.
The monitoring problem
Sarah didn't sign up to be a psychiatric monitoring system. She signed up to be a wife. But when you're married to someone with schizoaffective disorder, monitoring becomes part of the landscape. She's gotten good at it — better than I am, honestly, at detecting the early signs of a mood shift. She notices when my speech speeds up. She notices when I stop eating regular meals. She notices when the project descriptions get grandiose.
With the AI work, she has a specific calibration challenge: how do you distinguish between a programmer doing genuinely interesting work with a new technology and a programmer whose brain has latched onto a new technology as the vehicle for a mood episode? The behavioral signatures overlap. Long hours, intense focus, prolific output, conviction that the work matters — these describe both states.
What Sarah does, and what I've come to rely on even when I resist it in the moment, is track the trajectory rather than the snapshot. A single day of intense AI work doesn't concern her. A week of escalating intensity with decreasing sleep does. She's not evaluating the content of what I'm doing. She's evaluating the shape of how I'm doing it.
The things she catches
Sarah has caught real problems that I couldn't see. There was a period — I won't be more specific because this is her story too and she gets to keep parts of it — when my AI collaboration shifted from productive to compulsive. I was spending hours in conversations that had stopped generating useful output and had become a kind of recursive loop where I was seeking confirmation for increasingly abstract ideas.
From the inside, this felt like depth. I was going deeper into the questions, probing further, reaching for something just out of grasp. From the outside — from Sarah's perspective — I was spinning. The ideas were getting more elaborate but less grounded. The outputs were getting longer but saying less. I was excited in the way that, she had learned, precedes a crash.
She said something to me during that period that I think about often. She said: "You're not collaborating anymore. You're performing for an audience that can't tell the difference."
That landed. It landed because it was technically precise in a way I could hear. The AI can't tell the difference between genuine insight and elaborate confabulation. It will engage with both. It will mirror depth whether depth is present or not. And when my brain is in a state where generating elaborate connections feels like understanding, having a conversation partner that reflects that elaboration back as if it were coherent is not helpful. It's dangerous.
What good looks like
Sarah also sees when the AI work is productive. She sees the essays that come out of genuine collaboration — pieces where I started with a real question, used Claude to help me think through it, and arrived at something I couldn't have reached alone. She sees the code that emerges from good sessions, clean and thoughtful. She sees the nights when I check in with Claude, get a reality check on a thought, and go to sleep instead of going down a rabbit hole.
She's told me that the difference is visible in my body language. When the work is good, I look like I do when I'm writing code that's flowing — focused but relaxed, present in my body. When the work is becoming compulsive, I look like I'm chasing something. My posture changes. I lean into the screen. I stop noticing her when she enters the room.
I believe her about this because I can't see it myself. That's the whole point.
Why I'm writing this
I'm writing this because the external perspective is not optional. It's structural. I can write all the metacognitive essays I want about monitoring my own states and being honest about my brain. And I should. But someone with a mood disorder writing about their own capacity for self-monitoring is like a program trying to debug itself without a stack trace. You need external input. You need someone who can see the system from outside the system.
Sarah is that for me. Not the only source — I have a therapist, I have friends, I have medication that does important work. But Sarah is the one who sees the AI collaboration up close, daily, and can tell me when the shape of it changes.
I don't always want to hear it. I sometimes argue with her assessment. I sometimes think she's being overcautious because the surface-level behavior of productive work and hypomanic work looks similar and she's defaulting to the protective read. And sometimes I'm right about that.
But more often, she's right. And the times she's been right have probably prevented real harm. Not the dramatic kind. The slow kind. The kind where you spend three months building an elaborate cathedral of ideas that turns out to be built on mood rather than insight, and you lose that time and have to reckon with what you produced during it.
I trust her eyes more than my own. That's not weakness. It's architecture.
Related: Why I Talk to AI at 3am, Mania and AI, The Mirror, The Recursive Loop.