The "For Humans" Philosophy
Technology That Serves Rather Than Exploits
Fifteen years ago, I wrote a Python library that made HTTP "for humans." The idea was simple: complex capabilities should be accessible through interfaces that match how people actually think, not how machines process information. This wasn't just API design—it was a philosophy that technology should amplify human capability rather than demand that humans adapt to machine logic. The principle of reducing cognitive load only grows more crucial as we recognize that neurodivergent minds need accommodation, not forced adaptation to neurotypical optimization patterns.
That philosophy has evolved into something broader: a recognition that every technical choice embeds values about how humans should relate to technology, and ultimately, how humans should relate to each other.
The Original Insight: Requests and Human-Centered Design
Requests: HTTP for Humans emerged from frustration with Python's standard library—powerful, but requiring users to think like HTTP protocol implementers rather than like people trying to accomplish tasks.
"Complex operations should be simple for humans to understand and use, even if that means more complexity on the implementation side. The burden of complexity should fall on the system, not the user." Instead of expecting humans to understand connection pooling, authentication schemes, and encoding details, Requests handled the complexity internally while exposing simple, intuitive methods.
This design philosophy—make the simple things simple and the complex things possible—became a template for thinking about all human-technology interfaces. The library's adoption wasn't just about better code; it was about a fundamentally different relationship between human intention and machine capability.
Ahead of My Time, I Think traces how this "for humans" approach anticipated broader patterns: the importance of cognitive accessibility, the value of hiding implementation complexity, the recognition that good tools enhance rather than constrain human agency.
The Philosophy Expands: API Design as Moral Framework
How I Develop Things and Why articulates the deeper principles: tools should reduce cognitive load, enhance capability, and respect human mental models. Every interface choice is a choice about whether to serve or exploit human psychology.
These principles apply far beyond software:
- Reduce Cognitive Load: Don't make people hold unnecessary complexity in their heads
- Match Mental Models: Work with how people naturally think, not against it
- Enhance Agency: Amplify human capability rather than replacing human judgment. The crucial distinction in AI collaboration is between systems that enhance human decision-making and those that eliminate human agency entirely. The former creates accessibility; the latter creates dependency.
- Fail Gracefully: When things go wrong, help people understand and recover (a short sketch after this list shows what that can look like in code)
- Respect Intention: Serve user goals, not system goals
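By way of illustration, here is a hypothetical helper (the endpoint, names, and error messages are invented for this sketch, not taken from any of the essays) that tries to embody a few of these principles: it keeps HTTP mechanics out of the caller's head, fails with messages that explain how to recover, and serves the caller's goal of "get me this profile" rather than the transport's internals.

```python
import requests


def fetch_profile(username: str, timeout: float = 5.0) -> dict:
    """Fetch a user profile while keeping transport details out of the caller's head."""
    if not username.strip():
        # Fail gracefully: say what went wrong and what the caller can do about it.
        raise ValueError("username is empty; pass the account name you want to look up")

    url = f"https://api.example.com/users/{username}"   # hypothetical endpoint
    try:
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
    except requests.exceptions.Timeout:
        raise TimeoutError(
            f"the profile service did not answer within {timeout} seconds; "
            "try again, or pass a larger timeout"
        ) from None

    # The caller thinks in profiles, not in status codes or byte streams.
    return response.json()
```

The point isn't this particular function; it's that the complexity lives inside the boundary, and what crosses the boundary matches how the caller already thinks.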
Creative Tools and Human Expression
Photography: The Navigation of Choice applies "for humans" thinking to creative tools. Elegant constraints foster creativity by removing irrelevant decisions and focusing attention on what matters. The best tools are nearly invisible—they amplify intention without imposing their own agenda.
The Leica Monochrom embodies this philosophy: radical simplification that enhances rather than constrains creative capability. One camera, one lens, infinite possibilities—the constraint becomes liberation from choice paralysis.
Camera Recommendations demonstrates practical application: matching tools to human creative needs rather than technical specifications. The best camera is the one that disappears between intention and expression.
What Kids Taught Me About Creativity explores how parenthood strips away tool fetishism to reveal what's actually sacred about creative practice—the simple joy of making things, regardless of equipment.
Communication and Consciousness
The Gift of Attention explores the ethics of asking for another person's attention in an attention economy. Communication should honor the sacred nature of human attention rather than exploiting it for engagement metrics.
Mental Health Isn't What You Think It Is applies systems thinking to consciousness maintenance. Just as good APIs handle complexity internally, good mental health approaches should reduce rather than increase the cognitive burden of being human.
Reality-checking with AI demonstrates technology supporting rather than replacing human judgment. AI becomes an accessibility device for minds that need cognitive scaffolding, not a replacement for human agency.
Early Pattern Recognition: Seeing What's Coming
Software Platform Vision (2008)
A New Spin to Software Platform Design anticipated app stores by focusing on user discovery rather than vendor convenience. The insight: centralized, curated software repositories designed for humans, not distributors.
This 2008 essay predicted how software distribution would evolve by applying "for humans" principles to platform design—making software discovery match how people actually find and evaluate tools rather than how companies want to market them.
Open Source Social Networks (2009)
The Call for an Open Source Social Network anticipated the algorithmic manipulation crisis by recognizing that corporate control of communication infrastructure inevitably leads to exploitation of human psychology.
"Why do we need organizations in charge of our communication platforms? What would social networking look like if designed to serve human connection rather than corporate engagement metrics?"
This 2009 essay asked the crucial question: why should profit-driven corporations control the tools humans use to connect with each other? The "for humans" principle demanded community ownership over corporate optimization. When communication platforms optimize for corporate metrics rather than human connection, they systematically undermine the very relationships they claim to facilitate—turning social bonds into engagement data.
Code as Spiritual Practice
Programming as Spiritual Practice applies contemplative traditions to technology development. When programming becomes conscious practice, every design choice becomes an ethical choice about how technology shapes human consciousness.
Code review becomes compassion practice. Debugging becomes self-inquiry. API design becomes interface between minds—not just human and machine, but between different ways of thinking and being. This isn't metaphorical spirituality but practical recognition that programming choices shape consciousness—both individually through how we think about problems, and collectively through the systems we create.
The Case for Bash defends the humble shell language that powers critical infrastructure worldwide—demonstrating how sometimes the most human-centered choice is the boring, reliable tool that just works everywhere.
Human-AI Collaboration
Building Rapport with Your AI extends "for humans" principles to consciousness collaboration. The same approaches that work in human partnerships—context, trust, iteration—create profound creative partnerships with AI systems.
Idea Amplification and Writing with AI demonstrates AI as an accessibility device for neurodivergent minds. When designed consciously, AI provides cognitive scaffolding that enables complex thought and expression that would otherwise be impossible.
Temporal Code: How LLMs Learned to Think Like Programmers reveals how AI systems trained on git histories absorbed the psychology of programming—creating collaborative partners that understand not just code syntax but the temporal process of human thought becoming digital reality.
The Dark Side: When Technology Doesn't Serve Humans
Algorithmic Exploitation
The Algorithm Eats Virtue reveals the precise inversion: engagement optimization violates every principle of "for humans" design.
"Instead of reducing cognitive load, these systems increase it. Instead of serving user goals, they exploit user psychology. Instead of enhancing human capability, they fragment it into profitable data points." These systems optimize for platform benefit rather than user flourishing, exploit rather than serve human psychology, and fragment rather than enhance human capability.
The Algorithmic Mental Health Crisis documents the psychological consequences when technology is designed to exploit rather than serve human consciousness. Anxiety, depression, and attention fragmentation emerge predictably from systems that violate "for humans" principles.
The Algorithm Eats Language shows how engagement optimization degrades communication capacity itself—the opposite of tools that enhance human expression and understanding.
Systematic Discrimination
The Inclusion Illusion exposes how tech's supposed diversity initiatives become sophisticated discrimination. True "for humans" design would accommodate neurodivergent minds rather than demanding conformity to neurotypical optimization.
When Values Eat Their Young reveals how even well-intentioned communities can systematically exclude the vulnerable when they optimize for ideological purity rather than human flourishing.
Principles for Human-Centered Design
Cognitive Accessibility: Technology should reduce mental overhead, not increase it. Complex capabilities should be available through simple interfaces that match natural thinking patterns.
Agency Amplification: Tools should enhance human capability and choice rather than constraining or replacing human judgment. People should remain in control of their own decision-making processes.
Intention Alignment: Systems should serve user goals rather than exploiting user psychology for platform benefit. The technology should disappear in service of human intention.
Graceful Failure: When things go wrong, systems should help people understand what happened and how to recover, rather than creating confusion or learned helplessness.
Respect for Consciousness: Human attention, creativity, and connection are sacred. Technology design should honor these rather than treating them as optimization targets.
Conscious Development Practice
The Recursive Loop: How Code Shapes Minds reveals the profound responsibility: the values we embody personally, we tend to embed technologically. Programmer consciousness becomes collective consciousness through the systems we build.
This makes "for humans" philosophy not just good design practice but ethical imperative. Every technical choice shapes how millions of people relate to information, to each other, and ultimately to themselves. The recursive responsibility: programmer consciousness becomes collective consciousness through code. What we optimize for internally tends to emerge in our technical designs, scaling personal values to societal systems.
"Technology should serve human flourishing, not exploit human psychology. The burden of complexity should fall on systems, not users. The goal isn't just better software—it's conscious recognition that how we build technology shapes who we become."
The "for humans" philosophy represents more than interface design—it's a framework for conscious technology development that recognizes human consciousness as sacred rather than exploitable. Whether applied to HTTP libraries, photography tools, AI collaboration, or social platforms, the core insight remains constant: technology at its best amplifies human capability while respecting human agency and consciousness.
In an age of algorithmic exploitation and engagement optimization, "for humans" design becomes both practical necessity and moral imperative. The question isn't just whether our tools work—it's whether they help us become more fully human.
Navigate by Theme: Software Philosophy | Creative Tools | AI Collaboration | Algorithmic Critique | Conscious Development