Why Effective Altruism and Global Health Need More Progress Studies and Abundance
EA excels at scaling proven interventions like bed nets, but it systematically underweights the systems bottlenecks that stall life-saving vaccines. Progress studies can help.
I love the effective altruism (EA) movement's core framework—asking whether problems are neglected, important, and tractable. But after pivoting from global health work to tech policy, then diving deep into progress studies, I've noticed EA systematically underweights some types of interventions.
EA is excellent at scaling proven direct interventions but less consistent at tackling systems bottlenecks.
The Malaria Vaccine Stalled for 35 Years - This Should Have Mattered as Much as Malaria Nets
EA has driven huge successes in global health—scaling bed nets, deworming programs, and cash transfers. I’m glad EA supported PEPFAR expansion and pushed for human challenge trials through 1Day Sooner.
But compare our response to malaria as a disease versus malaria as a systems problem. EAs spoke out about the lengthy vaccine approval delay, but as far as I can tell, mostly only in the last few years of its pipeline.
Even so, it’s mostly been the progress studies movement that has been obsessed with the reasons behind the 35-year delay.
Understanding these delays is a really big deal.
I Used to Blame Greed - But Now I Realize Drug Approval Costs Make Poor-Country Vaccines Impossible
I no longer think vaccines for diseases like malaria and tuberculosis are neglected because rich countries don't prioritize poor countries. That's incomplete.
Our approval system makes these vaccines financially impossible.
With the average cost of getting a drug approved at around $1.3 billion, large corporations can’t take on the risk of a failed approval for diseases whose markets won’t guarantee high prices.
GSK has an effective tuberculosis vaccine candidate that could help prevent many of the 1.25 million tuberculosis deaths each year. Even if GSK's CEO personally wanted to save tuberculosis patients, they couldn't—it would be irresponsible to shareholders to risk a billion-dollar loss for a low rate of financial return. With limited supply of a key ingredient shared by both vaccines, GSK prioritized its shingles vaccine instead, which prevents roughly 100 deaths per year in the U.S. but has guaranteed Medicare buyers.
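To make that shareholder logic concrete, here is a minimal expected-value sketch. The only figure drawn from this post is the roughly $1.3 billion approval cost; the approval probabilities and revenue numbers below are purely illustrative assumptions, not GSK's actual economics.

```python
# Back-of-envelope sketch of why a firm might rationally shelve a poor-country
# vaccine. All inputs except the ~$1.3B approval cost are illustrative guesses.

def expected_value(p_approval, revenue_if_approved, cost_to_approval):
    """Expected net value (in billions of USD) of pursuing approval."""
    return p_approval * revenue_if_approved - cost_to_approval

# A TB-style vaccine: low-income markets, modest prices, uncertain approval.
tb_style = expected_value(p_approval=0.5, revenue_if_approved=0.8, cost_to_approval=1.3)

# A shingles-style vaccine: guaranteed rich-country buyers at high prices.
shingles_style = expected_value(p_approval=0.8, revenue_if_approved=10.0, cost_to_approval=1.3)

print(f"TB-style vaccine expected value:       {tb_style:+.2f} $B")       # -0.90
print(f"Shingles-style vaccine expected value: {shingles_style:+.2f} $B")  # +6.70

# With these illustrative inputs, the TB-style vaccine is negative expected
# value for the firm, even though it would save vastly more lives.
```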
Many factors contribute to this cost, including the FDA's current stance on risk (shaped partly by a public that blames the FDA when an approved drug causes harm, while not noticing the people who die from lack of access) and the FDA's lack of urgent, expanded, or conditional approval pathways for human drugs.
The U.S. and E.U. systems don’t urgently prioritize the creation and approval of at least three other technically feasible vaccines, for diseases that kill 900,000 people a year (Strep A, hepatitis C, syphilis).
Improving these systems is a big deal for global health and for societal resilience to pandemics.
Our Measurement Culture Has Systematic Blind Spots
EA's emphasis on measurement serves crucial functions—accountability, learning, avoiding motivated reasoning. But it creates a preference for some kinds of solutions over others.
I think EA systematically underweights “improving systems” and high-variance, high-potential-impact interventions.
For example, economic research likely influenced China's market reforms, which lifted hundreds of millions from poverty. Those economists might only have needed less than a 10% likelihood of influencing China’s decision to be as cost-competitive as randomized controlled trials (RCTs). While impossible to measure precisely, the scale suggests enormous returns to certain types of policy research.
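A rough Fermi estimate shows why even a small chance of influence can dominate. Every number below is a placeholder assumption chosen for illustration, not a measurement.

```python
# Illustrative Fermi estimate: low-probability policy research versus an
# RCT-backed program. All figures are placeholder assumptions.

# RCT-backed benchmark: assume $500 reliably lifts one person out of poverty.
rct_cost_per_person = 500

# Hypothetical policy-research program: $100M spent, with a 10% chance of
# meaningfully contributing to reforms that lift 100 million people out of poverty.
policy_cost = 100_000_000
p_influence = 0.10
people_lifted_if_influential = 100_000_000

expected_people_lifted = p_influence * people_lifted_if_influential   # 10 million
policy_cost_per_person = policy_cost / expected_people_lifted         # $10

print(f"RCT benchmark:              ${rct_cost_per_person:,.0f} per person")
print(f"Policy research (expected): ${policy_cost_per_person:,.0f} per person")

# Under these assumptions, the policy research is far more cost-effective in
# expectation, even though any single grant most likely accomplishes nothing.
```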
When we decide something is urgent enough—like biorisk or AI governance—we tackle complex policy problems anyway, funding research, advocacy, and talent pipelines. But drug approval reform, which affects millions of preventable deaths, might only now be reaching that threshold, and not through traditional EA channels but through Open Philanthropy’s abundance team.
Ideology might also play a role. EA leans left, and questioning FDA regulations—even well-intentioned ones—doesn't fit comfortably in progressive frameworks. The extensive literature on regulatory barriers to life-saving treatments exists mostly in libertarian circles and rarely crosses ideological lines.
Progress Studies Fills EA's Gaps (And Vice Versa)
Someone asked me: "Are you abundance/progress or are you EA?"
Why can't I be both?
The movements complement each other. There's even a general understanding in the progress studies community that a large share of its members are early or former EAs.
EA provides rigorous prioritization frameworks and global scope. Progress studies brings systems thinking, comfort with harder-to-measure interventions, and broader ideological perspectives that surface different neglected problems.
The same analytical tools that surfaced bed nets and cash transfers can identify the institutional failures and policy barriers that prevent entire categories of solutions from existing. We might unlock interventions at scales (e.g., slowing aging) that dwarf even our biggest successes to date.
We don’t need to choose sides.
> I think EA systematically underweights “improving systems” and high-variance, high-potential-impact interventions.
I completely agree with this, especially about high-variance, high-potential-impact interventions. An analogy I've made before is that I don't think inventing antibiotics would have been considered a very good cause by a counterfactual EA-like community in the 1910s. Yet antibiotics have been responsible for so many lives saved (both human and non-human) since their discovery.
In my experience, capital-EA Effective Altruists tend to be more interested in applying things that are known to work than in building out the possible space of effective interventions.
I think part of the situation is that many people in the community who are interested in high-variance, high-potential-impact interventions are "spending most of their weirdness points" on AI x-risk interventions. They might be right to do so, but it creates a vacuum.
I partly agree with this, but I think it underemphasizes the extent to which effective altruism is still largely focused on how people can do the most good with their charitable donations. While there's definitely a lot of societal good coming from these broader, less measurable interventions, I suspect they are much harder for people donating relatively small amounts of money to contribute to, and that that's a big reason why effective altruism focuses less on them.