How to Argue Better About AI: Lessons from the Safety-Ethics Divide
Whether you work on existential risk, safety, or other AI concerns, here is practical advice for engaging productively in AI's most contentious debates.

The AI safety community has an insularity problem—and it's limiting its impact.
After years of working on AI policy from safety, ethics, and innovation angles, and serving on boards focused on AI governance, I've noticed that too few people are building bridges between AI communities.
If you're deep in AI safety work (or another “niche”) and want to influence broader AI discourse, these are my recommendations.
Spend at least 10% of your reading time on the best sources outside of your community.
I recommend Golden Gate Institute's "Second Thoughts" (just a few posts monthly) or following Arvind Narayanan from "AI Snake Oil" on Substack or LinkedIn.
Many safety researchers dismiss the book AI Snake Oil and its "AI as a Normal Technology" discourse as hostile. This creates blind spots. Even if you disagree fundamentally, understanding these perspectives prevents you from being blindsided when talking to people who treat them as authoritative. Worst case: you get annoyed or confused, which should trigger curiosity, not avoidance.
Assume the other person is brilliant. Act accordingly.
This transforms how you listen. You'll be a lot more curious!
It's also a much faster way to get yourself asking the right questions: you'll search for the coherent framework underlying the other person's views. After hours of interviews with people skeptical of AI safety work, I discovered I'd been missing an entire worldview, one deeply attuned to the gradual accumulation of corporate power.
AI safety researchers share these concerns about corporate power, but they frame them as "gradual disempowerment" or other dystopian scenarios. You only discover these shared foundations by digging deeper.
Notice when you find it hard to remember someone's argument.
This confusion signals you've encountered a genuinely different worldview. Your brain is flagging ideas that don't fit your existing framework. Instead of dismissing the discomfort, lean into it. Chase the confusion.
Find ways to collaborate, even minimally.
One way to do this: write up your specific disagreements with a paper like "AI as a Normal Technology." Before publishing, send it to the authors with a brief summary and ask for quick feedback. Knowing you'll do this forces you to write more kindly and with them in mind. Another option: interview them for your research.
This makes AI discourse less insular while improving your own arguments through good-faith, curiosity-driven engagement with different communities.
By engaging seriously with critics, we stop talking past each other and start identifying the actual cruxes. We discover which disagreements are substantive versus which are just different framings of shared concerns. Most importantly, we build public discourse that can handle the messy realities of governing AI. The alternative is communities making AI policy based on internal consensus rather than stress-tested ideas.