Why Americans Are More Pessimistic About AI Than the Rest of the World
In recent years, tech skepticism has become a marker of being informed and progressive. When beliefs signal group membership, they can trump even direct experience.
Noah Smith recently asked a question that’s been nagging at me: Why do Americans hate AI so much?
Noah had always wanted something like AI: a tool that could help us think, create, and solve problems. Yet surveys show Americans are far more pessimistic about AI than people in China, Korea, and most other countries.

I think I know why.
When Beliefs Become Signals
Over the past decade, tech skepticism has become strongly associated with the American left, especially among educated progressives. And when a belief becomes a marker of being informed and progressive, people anchor toward that belief to signal group membership. This effect can be so strong that it trumps even their own experience.
For example, when a Republican is president, Democrats report that economic conditions in the country are worse. Once a Democrat takes office, survey-reported conditions improve almost overnight. When a Democrat is president, Republicans do the same. People’s real-life spending and employment don’t match what they report on surveys.

On climate change, an Avaaz survey that polled 10,000 people between the ages of 16 and 25 found that over half of respondents expect climate change to “doom” humanity. Yet Gen Z are still enrolling in universities and planning careers. And despite being framed as humanity’s greatest threat, climate change consistently fails to crack the top ten voting issues, particularly among lower-income Democrats and non-Democrats.
Researchers call this “expressive responding” or “partisan cheerleading”: people signal loyalty to their political team on surveys rather than reporting their actual beliefs. It’s most common among highly engaged partisans who know which answers mark them as informed members of their tribe.
And I suspect something similar is influencing how people respond about AI.
Beliefs as Identity Markers
Americans express deep concerns about AI in surveys. But like climate change, AI is also not a top-ten issue for voters. Expressive responding could contribute to this if the left has an anti-AI or anti-tech stance. Since roughly half of Americans lean left, a shift in how the left talks about tech can reshape national sentiment.
The Opinions Editor of The Guardian notes that the Left is increasingly defined as “fearful, agnostic, or hostile to technology.” This mindset fuels, for example, the “degrowth” movement, which views technology as the cause of the climate crisis rather than a source of green energy solutions.
This shift has been particularly visible in media coverage. Matt Yglesias, reflecting on the New York Times' shift, said on X that the paper made a deliberate editorial decision to cover tech with “a very tough investigative lens” that was “highly oppositional at all times and occasionally unfair.” Journalist Kelsey Piper confirmed she’d heard directly from Times reporters that there was a top-down directive that tech could not be covered positively, even when there was a true, newsworthy, and positive story.

You can see this dynamic in how some concerns about AI spread. Progressive circles have embraced the idea that using ChatGPT is environmentally irresponsible because of its water consumption. When researcher Andy Masley pointed out that 2,500 ChatGPT prompts use roughly the same water as making a single sheet of paper, he faced backlash. Some people weren’t relieved that the problem was smaller than reported. They were defensive.
To me, this was a sign that this had become an identity belief, meant to be protected rather than examined.
What Gets Lost
When skepticism becomes identity, it’s harder to see the full picture. AI is delivering benefits to its millions of users, although these are hard to track in GDP.
I’ve experienced AI’s intangible welfare improvements. AI makes me a more confident writer: I red-team my drafts from different perspectives before posting. When I was confused about why my sore throat had turned into a cough last week, ChatGPT explained that coughs often follow as inflammation clears. Google’s SEO-optimized medical results weren’t useful.
Researchers have tried to measure this value. One study found the average US user would need to be paid $98 to give up ChatGPT for a month, which would point to up to $97 billion in consumer surplus, based on 2024 models. That’s likely an overestimate, because people demand more to avoid losing something than they’d pay to get it (loss aversion). But even accounting for that bias, there’s clear value above the $0-$20 users actually pay.
To be clear, I’m not saying AI’s downsides don’t matter. AI can pose challenges around labor displacement, concentration of power, cyber and bioweapons risks, etc. I care about them enough to spend my time working on those risks.
But I want our discourse to be nuanced and high-quality, so we don’t get sidetracked prioritizing the wrong risks, or thinking you have to be either all-in on AI or against it. We can be excited about new tools while pushing for them to be developed responsibly.
And we do that best when we can talk about AI openly, honestly, and in good faith.
EDIT: To be clear, I think this dynamic explains 15%-50% of the reason why Americans are responding to surveys with such pessimism. It explains why we’re outliers but not the entirety of the pessimism.

For something like climate change, it's partly partisan signaling, but I think people also have high ambient anxiety that they need to attach to something. On the left, it could be climate or inequality. On the right, it might be immigration or social decline. Are people just more neurotic than they used to be? It seems like in the past, and in developing countries today, people have more concrete problems to focus on and don't have the mental space and energy to worry about such diffuse issues.
It's interesting to wonder why opposition to technology became a left-leaning signal in the first place. Maybe it ties back to environmentalism since the 1970s, but it seems to have experienced a resurgence in the last 10 years. I'm not sure why.
There's definitely the ingroup signaling aspect you describe, plus overlap with long-running leftist skepticism of big business of any kind, sensitivity to "exploitation," etc. And I agree that some of these narratives (ex: environmental harms or antitrust) are reflexive, tribal, and barking up the wrong tree. I use AI regularly and think it's impressive and exciting in many ways.
I also agree with a few others here that there's something more to it than that. One big distinction from other countries is that in the US, the emergence of big tech has coincided with a period of political dysfunction and national decline (ex: relative to China). The internet era began at the peak of American power and self-esteem, and since then, a lot of depressing trends have intertwined with plausibly tech-related explanations.
Phone addiction fried our attention spans and made it harder to live in the moment. A fragmented media made it harder to tell what is true and probably increased polarization. There's been a rise in anxiety and mental health issues. The digital economy reduced face-to-face interactions; digital entertainment reduced in-person hangouts. That accelerated the breakdown of community described in Bowling Alone and increased rates of loneliness and being single. Economic opportunity has concentrated on the coasts and in white-collar sectors that use tech, while globalization hollowed out the heartland, contributing to deaths of despair, etc.
How much of this is actually attributable to tech is debatable, but the impression is enough to embed the skepticism. A lot of Americans would probably rather live in 1995, and they see AI as an acceleration of trends that are changing too much too fast. That's less so in many other countries.