AI Policy

How I use artificial intelligence — and why it matters

AI exists inside broken systems. It’s built on exploited labor, concentrated corporate power, and environmental cost. None of us can operate perfectly inside those systems. I don’t pretend that AI is neutral, consequence-free, or a replacement for human creativity and expertise. I make intentional and selective choices with my AI usage, stay honest about the tradeoffs, and keep pushing for the structural change that really matters.

What I Mean by “AI”

Not all AI is the same, and that distinction matters.

Most digital tools have used AI quietly for years — spam filters, autocorrect, search algorithms, accessibility features. These are not what this policy is about.

This policy addresses generative AI: tools like ChatGPT, Claude, and similar systems that produce text, images, or other content from prompts. These tools carry a different weight — in energy consumption, in labor practices, in the consolidation of power among a handful of tech corporations — and they deserve a different level of scrutiny.

How I Use AI

I use generative AI for business purposes only — never for personal use. Specifically, I use it as a thinking tool and productivity aid for things like:

  • Drafting and editing business copy
  • Brainstorming and outlining
  • Research and information synthesis
  • Operational tasks that reduce administrative burden

I do not use AI to replace my voice, my values, or my relationships with clients. Everything I publish reflects my perspective and goes through my own editorial judgment. I’m not outsourcing my thinking — I’m using a tool to work more efficiently so I can focus on what matters most.

What I Know About the Harms

I want to be honest: generative AI is not environmentally or ethically clean.

  • Environmental impact: Training and running large AI models consumes significant energy and water. This is a real cost, and it falls disproportionately on communities already bearing the brunt of the climate crisis.
  • Labor exploitation: The workers who label training data — often in the Global South — are frequently underpaid and exposed to traumatic content. That labor makes these tools possible, and it deserves to be named.
  • Power consolidation: A small number of corporations control the most powerful AI systems. This is a systemic problem that no individual can opt out of — but it’s one I think about, and one I believe demands policy intervention, not just personal virtue.

The greatest responsibility here lies with corporations and governments — not individual users. I believe in structural accountability, not just personal consumption choices. That said, I choose to use these tools mindfully, in limited ways, and with ongoing reflection.

No Perfect Way to Show Up in a Broken System

One of my core beliefs — one I wrote an entire book about — is that there is no perfect way to operate ethically inside systems built on exploitation. This applies to AI just as it applies to banking, social media, supply chains, and every other structure we navigate as business owners.

Using AI doesn’t make me a hypocrite. Not using it doesn’t make someone a purist. What matters is that we think critically, make intentional choices, stay honest about tradeoffs, and keep pushing for the systemic change that actually moves the needle.

This policy is my attempt to do that transparently.

This Policy Will Evolve

The technology is changing fast. My understanding is evolving. I’ll update this policy as I learn more, as the landscape shifts, and as better options become available. If you have feedback, I welcome it — correction is part of how I grow.

Last updated: February 2026