An AI Policy Expert Told Us That Non-Programmers Shouldn't Work on AI. Here's Why He's Wrong
Technical skill helps in AI discussions, but some of the most important questions about AI deployment are the kind of complex social problems that generalists and domain experts are trained to tackle.
I sat in a room where an AI policy expert told a group of social science experts that he didn't think people without a computer science background should work on AI.
His reasoning: without one, we would not know how to separate hype from reality.
This attitude is wrong.
This gatekeeping attitude is rarely stated so bluntly, but I've seen its effects. People, especially women, filter themselves out of AI work because they've internalized the idea that a computer science background is required to contribute meaningfully.
Neither Hype Nor Pessimism Stems from Technical Skill:
The biggest names on both the pro-acceleration side and the "opposing" pro-safety side tend to be computer scientists (examples: Marc Andreessen, Yoshua Bengio).
Where someone falls in their attitude toward new technology depends on other factors. The pattern is consistent in other domains: economists with deep mathematical training disagreed on housing-bubble risks, and MDs disagree on the value of various public health interventions.
This Is About Comparative Advantage, Not Absolute Advantage:
AI is impacting many different sectors simultaneously, creating different knowledge demands. A CS background gives you comparative advantage in understanding model architectures and training processes. But we also need people with comparative advantage in business adoption patterns, labor market analysis, regulatory implementation, psychology around technology adoption, democracy, and institutional design.
Just as we don't expect software engineers to also be the best economists, we shouldn't expect them to be the best at every aspect of AI governance.
Technical Credibility Is Not Always the Missing Factor:
When someone casually references transformer architectures, they tend to be taken more seriously, and others may find it intimidating. This can steer conversations toward "what can this model do on tests?" rather than "how does this technology actually work when it's trying to do real-world tasks?" or, more broadly, "what happens when it interacts with existing institutions and human behavior?"
Capability questions feel more rigorous because you can point to benchmarks and papers. The latter questions are harder to measure but often more persuasive and actionable for policy discussions.
We Need a Deep Willingness To Dig Past Headlines:
To save time, people tend to take attention-grabbing AI headlines at face value when they align with their understanding of the world.
Recently, a brilliant software engineer told me about the massive privacy concerns raised by a change in Amazon's Alexa data sharing. After I told him the change was smaller than the headline suggested, he fact-checked me and realized the headline was overblown. A CS background on its own doesn't inoculate against bait-y headlines; technical experts are also susceptible to hype in adjacent areas outside their core expertise.
Breakthroughs in Understanding Come from Multiple Domains:
Generalists and social science professionals often must differentiate themselves through deep knowledge of a wide array of subjects tangentially related to their work.
For me, this has meant staying up-to-date with experts I trust on hype-prone topics. AI evaluations? I read Steve Newman. Misinformation and persuasion? Dan Williams. Economic or political trends? Matt Yglesias. Computer scientists do this, too! But their expertise doesn't require this breadth to the same extent.
Some problems genuinely require deep CS knowledge—like evaluating claims about model capabilities or training efficiency. Others require CS knowledge to be helpful—like understanding AI labor displacement. But many core governance questions primarily need other expertise: adoption patterns across industries, regulatory compliance costs, international coordination mechanisms, or public opinion formation.
If you're on the fence about working on AI and don't have a CS degree, go for it.
It's great to have a CS background, but not everyone needs one to improve the AI discourse. We need teams with a mix of comparative advantages. In a space where trust is built by explaining when hype is and isn't deserved, a wide variety of expertise is needed.
If you have expertise in economics, psychology, public policy, or organizational behavior, AI discourse needs you. Start by following the technical discussions enough to ask informed questions, but don't let impostor syndrome keep you out of conversations where your expertise matters.
If you have a list of grounded authors on topics prone to hype, please share them!
