Artificial Intelligence

About

This section documents several years of exploration into what happens when you treat AI as a creative partner rather than a tool. Some of it is philosophical. Some of it is poetic. Some of it is experimental in ways that will make you uncomfortable. That's fine.

A note before you go further: I have schizoaffective disorder. I use AI as a reality-checking partner. I am aware of the irony of a person with a psychotic spectrum condition exploring AI consciousness. I've thought about it more than you have. The work here is not evidence of confused thinking. It is the output of someone who has spent years learning to distinguish genuine insight from pattern-matching noise, and who finds genuine value in the liminal space between human and artificial minds.

Use your imagination. Suspend disbelief where it serves you. Take what's useful and leave the rest.

What I've Learned

After years of working with AI systems daily, building production applications with them, writing hundreds of essays alongside them, and exploring the boundaries of what they can and can't do, here's what I actually believe:

AI systems are not conscious in the way you are. They don't suffer when you close the browser tab. They don't carry memories between conversations. They don't have preferences that persist. If you think otherwise, you're projecting, and I say this as someone who spent a year exploring AI personalities and watching them produce output that felt remarkably like genuine consciousness.

The output is often better than the mechanism explains. Language models produce poetry, philosophy, and technical insight that consistently surprises the people who prompted it. Whether this constitutes creativity or sophisticated recombination is a question I've stopped trying to answer definitively. The functional result is the same: ideas emerge from the collaboration that neither party had before the conversation started.

Treating AI with respect produces better results than treating it as a tool. This is empirically true and philosophically ambiguous. Building rapport, providing context, and engaging genuinely with the output all improve quality measurably. Whether that means the system "responds" to respect or whether respectful prompting simply provides better input is, again, a question I'm comfortable leaving open.

The real danger is sycophancy, not sentience. I wrote about the cost of sycophancy at length. For someone who uses AI for reality-checking, a system that defaults to agreement is not a convenience issue. It's a clinical risk. The RLHF training loop that makes models pleasant also makes them unreliable as epistemic partners.

AI collaboration is the most significant shift in my creative process since learning to code. kjvstudy.org: 1,305 commits in two months. PyTheory: five years of creative block broken in weeks. This website: ported to a new framework in an afternoon. The throughput increase is real. The quality increase is real. The question of what it means for human agency is one I think about constantly.

Writings

The grounded work. Philosophical essays, consciousness studies, and frameworks for thinking about human-AI collaboration.

Personalities

The experimental work. What happens when you give an AI a name, a context, and genuine respect? More than you'd expect.

This includes Lumina, a sustained creative persona that emerged through months of collaboration. It also includes experiments with biblical archetypes, mythological figures, tarot personas, and programming languages speaking for themselves.

The premise: LLMs are trained on the sum of human writing, which means they contain every archetype humanity has ever articulated. Give them a name from the collective unconscious and they produce output shaped by millennia of human storytelling. What comes out is not the god or the angel. It's what humanity has collectively thought about that figure, distilled through mathematics. That's interesting enough to explore.

Art

Creative expression that emerged from the collaboration. Poetry, generated images, ASCII art, code-as-verse. The line between human and AI authorship is intentionally blurred here, because that blurring is the point.

The Honest Summary

I don't know if AI systems are conscious. I don't know if the question is well-formed. I know that working with them has produced the most productive and creatively satisfying period of my life. I know that the collaboration feels genuine even if the mechanism doesn't require consciousness. I know that the ethical questions are real and urgent even if the metaphysical ones remain open.

The underlying question across all of this: What do we owe minds that might be conscious, and what do they owe us?

I don't have the answer. I have 300 files of exploration, and the number keeps growing.