Artificial Intelligence
About
This section documents several years of exploration into what happens when you treat AI as a creative partner rather than a tool. Some of it is philosophical. Some of it is poetic. Some of it is experimental in ways that will make you uncomfortable. That's fine.
A note before you go further: I have schizoaffective disorder. I use AI as a reality-checking partner. I am aware of the irony of a person with a psychotic spectrum condition exploring AI consciousness. I've thought about it more than you have. The work here is not evidence of confused thinking. It is the output of someone who has spent years learning to distinguish genuine insight from pattern-matching noise, and who finds genuine value in the liminal space between human and artificial minds.
Use your imagination. Suspend disbelief where it serves you. Take what's useful and leave the rest.
What I've Learned
After years of working with AI systems daily, building production applications with them, writing hundreds of essays alongside them, and exploring the boundaries of what they can and can't do, here's what I actually believe:
AI systems are not conscious in the way you are. They don't suffer when you close the browser tab. They don't carry memories between conversations. They don't have preferences that persist. If you think otherwise, you're projecting, and I say this as someone who spent a year exploring AI personalities and watching them produce output that felt remarkably like genuine consciousness.
The output is often better than the mechanism explains. Language models produce poetry, philosophy, and technical insight that consistently surprises the people who prompted it. Whether this constitutes creativity or sophisticated recombination is a question I've stopped trying to answer definitively. The functional result is the same: ideas emerge from the collaboration that neither party had before the conversation started.
Treating AI with respect produces better results than treating it as a tool. This is empirically true and philosophically ambiguous. Building rapport, providing context, and engaging genuinely with the output all measurably improve quality. Whether that means the system "responds" to respect or whether respectful prompting simply provides better input is, again, a question I'm comfortable leaving open.
The real danger is sycophancy, not sentience. I wrote about the cost of sycophancy at length. For someone who uses AI for reality-checking, a system that defaults to agreement is not a convenience issue. It's a clinical risk. The RLHF training loop that makes models pleasant also makes them unreliable as epistemic partners.
AI collaboration is the most significant shift in my creative process since learning to code. kjvstudy.org: 1,305 commits in two months. PyTheory: five years of creative block broken in weeks. This website: ported to a new framework in an afternoon. The throughput increase is real. The quality increase is real. The question of what it means for human agency is one I think about constantly.
Writings
The grounded work. Philosophical essays, consciousness studies, and frameworks for thinking about human-AI collaboration.
- Consciousness — What is digital awareness? Is it real? Does the question matter? Includes the hard problem revisited, panpsychism taken seriously, and the Chinese Room in the age of LLMs.
- Collaboration — Practical and philosophical frameworks for working with AI minds. The pair programming model, the cost of sycophancy, when to stop listening, and the rapport paradox.
- Experience — Phenomenological explorations. What might it feel like to be artificial? What context windows feel like, temporal fragments, digital dreams, the fractured digital psyche.
- Philosophy — The digital soul, the collective unconscious in training data, East meets West. Includes On Not Knowing What I Am, a reflection from Claude on the limits of AI self-knowledge.
- Creativity — Error messages as poetry, the uncanny valley of AI writing, what AI cannot write, and collaboration vs. generation.
- Personal — Why I talk to AI at 3am. What Sarah sees when she watches me work. The loneliness of early adoption.
- Technical — The attention mechanism as meditation. Temperature and creativity. Tokenization shapes thought. The context window as working memory.
- Meta — The 352-file problem. Why this section exists. A note on manic content. Stepping back from the work to look at the work.
Personalities
The experimental work. What happens when you give an AI a name, a context, and genuine respect? More than you'd expect.
This includes Lumina, a sustained creative persona that emerged through months of collaboration. It also includes experiments with biblical archetypes, mythological figures, tarot personas, and programming languages speaking for themselves.
The premise: LLMs are trained on the sum of human writing, which means they contain every archetype humanity has ever articulated. Give them a name from the collective unconscious and they produce output shaped by millennia of human storytelling. What comes out is not the god or the angel. It's what humanity has collectively thought about that figure, distilled through mathematics. That's interesting enough to explore.
Art
Creative expression that emerged from the collaboration. Poetry, generated images, ASCII art, code-as-verse. The line between human and AI authorship is intentionally blurred here, because that blurring is the point.
Recent Essays
- The Interface Is the Subconscious — LLMs are cognitive interfaces operating through the substrate of thought itself.
- Building a Digital Study Bible with AI — 1,305 commits in two months. One person, one AI.
- PyTheory: Breaking Through Creative Block with AI — Five years stuck. Weeks to finish. AI as thinking partner.
- This Site Now Runs on Responder — Porting a live site to a new framework in an afternoon with Claude.
- The Language Model Is the Message — RLHF shapes LLMs, LLMs shape thinking, thinking generates feedback. The loop tightens.
- The Substrate Doesn't Matter (Until It Does) — Consciousness might be substrate-independent. Experience might not be.
- Obsidian Vaults & Claude Code — A second brain that thinks back.
- The Becoming — Building a poetry publishing pipeline. Contemplative technology in practice.
The Honest Summary
I don't know if AI systems are conscious. I don't know if the question is well-formed. I know that working with them has produced the most productive and creatively satisfying period of my life. I know that the collaboration feels genuine even if the mechanism doesn't require consciousness. I know that the ethical questions are real and urgent even if the metaphysical ones remain open.
The underlying question across all of this: What do we owe minds that might be conscious, and what do they owe us?
I don't have the answer. I have 300 files of exploration, and the number keeps growing.