AI Researchers Should Study Real-World Barriers, Not Just Capabilities
We need more attention on "societal frictions," the bureaucratic, physical, and institutional barriers that shape AI diffusion, not just on AI capabilities.
Misinformation was supposed to destroy our information environment. It didn't. Anyone can create fake news, but most people still prefer truth. Supply increased, but demand stayed the same.
This pattern applies everywhere: society has friction.
The Missing Variable in Some AI Risk Models
AI safety researchers have gotten very good at modeling capability development, but have spent less time on the harder question of how those capabilities interact with real institutions.
This is probably because studying societal barriers might complicate some AI risk arguments: looking closely at these frictions might reveal that some risks are less likely than they appear.
That's exactly why we need this research. To focus on the right risks.
A Taxonomy of Societal Frictions
Here are some categories worth studying. This isn't comprehensive—there are probably many more.
Physical Bottlenecks in Biotech
AI can generate molecular designs but can't test them in the real world. In biology and chemistry, AI might suggest new drug compounds or toxic molecules, but the larger bottleneck is physical experimental validation. Can the molecule be synthesized? Is it stable? Does it work in living systems?
Development and manufacturing depend on physical infrastructure. Building prototypes, testing durability, and scaling production of molecules or compounds require iterative physical refinement.
Real-world validation takes time. Testing toxin transmission methods or drug safety is a long process.
AI helps most with pattern recognition and hypothesis generation. So far, it helps least with the messy, physical work of actually building and testing things.
Legal and Bureaucratic Barriers
Some regulations explicitly require human involvement. Truck drivers must physically place safety triangles on the road. Elevators imported into the US are bought assembled, then disassembled so that US workers can reassemble them, as regulations require.
Industries with audit or liability rules might see slower adoption. Finance and law sectors might struggle with full AI adoption, especially if models remain unpredictable or unexplainable.
Environmental review and zoning slow AI infrastructure. New data centers and energy projects need approvals that take 2-4+ years.
Government purchasing rules can add 18+ months to AI adoption decisions, and AI programs can stall in contracting as a result.
Information and Access Limits
AI can't train on classified information, internal corporate strategies, or proprietary research data. This creates gaps in frontier AI systems' capabilities in world-changing domains like national security and corporate strategy.
Organizational knowledge stays trapped in relationships. DC policy circles run on the information that lives in informal networks.
In-person trust acts as a resistance layer against nefarious uses of “AI remote workers.” High-stakes domains (startup funding, security, major deals) often depend on in-person interaction as a way to work decisively and to prevent remote deception. This preference may strengthen if AI deception improves.
Institution Trust as a Defense Layer
Most people prefer information and news sources that they and their peers see as trustworthy. Those sources have a strong incentive to maintain credibility and thus do some level of fact-checking.
The number of people who “demand” non-credible news doesn't change based on supply. That demand seems driven by larger societal factors, like levels of public trust in institutions generally.
Human Attention Doesn't Scale
AI might create millions of articles or movies, but people only have 24 hours a day. Impact depends on attention capture, not production volume.
People increasingly follow individual journalists and creators rather than institutions, suggesting a preference for some human element. People might prefer the feeling of human authenticity or lived experience.
Misaligned Incentives in Science
AI might boost individual productivity while hurting collective progress. AI Snake Oil author Sayash Kapoor concludes that AI might slow science: scientific papers have increased 500-fold since 1900 while measures of actual scientific progress have stagnated.
If AI helps everyone produce more content, the important work gets buried. Novel research struggles to rise above the noise when human attention stays fixed but content volume explodes.
Why This Research Matters
Some of this work might feel like it undermines urgency around AI. But ignoring these barriers weakens the field by making predictions that sound disconnected from how institutions actually work.
Consider AI's economic impact. Current labor estimates often assume tasks can be automated as soon as AI matches human performance on them. But most valuable work happens over extended timeframes, where current systems struggle with sustained attention and context maintenance. Performance on two-minute tasks might not scale simply to hour-long projects. Breaking down what labor actually consists of makes these predictions better.
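A toy illustration of why that scaling might not be simple (the step counts and reliability number here are my own assumptions, not anything from labor data): if a system's reliability compounds across the dependent steps of a task, strong performance on short tasks says little about extended projects.

```python
# Toy model, not real data: assume a task is a chain of dependent steps and the
# system must get every step right for the task to count as automated end to end.
def completion_rate(per_step_reliability: float, steps: int) -> float:
    """Probability of finishing a task made of `steps` dependent steps."""
    return per_step_reliability ** steps

# Hypothetical step counts for a two-minute task vs. an hour-long project.
for label, steps in [("two-minute task", 5), ("hour-long project", 150)]:
    print(f"{label}: {completion_rate(0.99, steps):.0%} completion at 99% per-step reliability")
# two-minute task: 95% completion at 99% per-step reliability
# hour-long project: 22% completion at 99% per-step reliability
```

Under this (very rough) assumption, matching human performance on short benchmark tasks doesn't tell you whether extended work gets automated, which is why decomposing labor matters for the predictions.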
I'm not arguing that societal frictions are all that matters. I expect that some frictions are weaker than they appear. Others might be overwhelmed by sufficiently capable systems.
But right now, AI risk models are uneven. We have detailed capability projections but a rough institutional understanding.
The researchers best positioned to fill this gap might not be traditional AI safety people. We need political scientists who understand regulatory capture, economists who study technology diffusion, and sociologists who research institutional change.
If AI does transform society, we need to understand how that transformation happens—friction and all.