EA excels at scaling proven interventions like bed nets, but it systematically underweights tackling the systemic bottlenecks that stall life-saving vaccines. Progress studies can help.
> I think EA systematically underweights “improving systems” and high-variance, high-potential-impact interventions.
I completely agree with this, especially on high-variance, high-potential-impact interventions. An analogy I have made before: I don't think that inventing antibiotics would have been considered a very good cause by a counterfactual EA-like community in the 1910s. Yet antibiotics have been responsible for so many lives saved (both human and non-human) since their discovery.
In my experience, capital-EA Effective Altruists tend to be more interested in applying things that are known to work than in building out the possible space of effective interventions.
I think part of the situation is that many people in the community who are interested in high-variance, high-potential-impact interventions are "spending most of their weirdness points" on AI x-risk interventions. They might be right to do so, but it creates a vacuum.
I partly agree with this, but I think it underemphasizes the extent to which effective altruism is still largely focused on how people can do the most good with their charitable donations. While there's definitely a lot of societal good coming from these broader, less measurable interventions, I suspect they are much harder for people donating relatively small amounts of money to contribute to, and that that's a big reason why effective altruism focuses less on them.
That sounds reasonable, but it does make me wonder if we are missing charitable organizations that focus on research, and perhaps activism, aimed at solving these sorts of problems. It's not really clear, though, what the best concrete actions, or the most useful applications of funds, would be in that regard.
I don't think EA or Progress Studies ever existed as discrete entities with rigidly defined boundaries, so discussing this becomes hard without caricaturing either side. I think most EAs are happy to subscribe to most of progress studies, and only slightly less so the other way around. EA has been moving in this direction for multiple years now: as it became more institutionally and politically competent, what counted as "tractable" expanded quite a lot. If EA has a weakness, it's that some people subscribe to it as more of a totalizing identity (again, less true in progress studies), which leads to edge-case but loud arguments. But, e.g., Open Phil is obviously pluralistic in this sense and is now funding abundance work, etc. So I think it's a false dichotomy: there's work that's more or less coded as one or the other, but thought leaders and those making key decisions pay attention to both.
I think EA is mostly correct to avoid politics, because EA's proper role is to focus on existence problems, in the sense of problems that can be solved by any one org caring about them rather than requiring a majority (see https://open.substack.com/pub/shakeddown/p/the-three-kinds-of-social-problems).
FDA issues are politicized and are thus (mostly, with some exceptions we should be very careful about) majority problems. These problems do need to be solved, but they shouldn't be a focus for EA as EA (as you point out, you can subscribe to multiple things).
I'm not sure I buy this. To the degree that EA started as grassroots, it makes no sense to talk about huge systemic change through individual giving. And the big money in the movement comes predominantly from tech, where there's a strong element of libertarianism. It seems unsurprising that the systemic issue those funders focused on was AI.
I admit I have some bias here. True-believer libertarianism, in which government is the only source of power that risks catastrophic negative consequences, is deeply silly. And it dominates, or at least deeply shapes, broader libertarian thought. It's like right-wing Marxism in that sense.
So I'm highly sympathetic to the stance that one should assume the things libertarians focus on, where regulation is a top concern, are not likely contenders for maximally improving human welfare. And I'm sympathetic to getting to smart versions of some of those ideas only in the second stage of growth of a movement that's only ~15 years old.
Thanks Matt! I love the idea of a second stage of growth for a movement. Do you have any recs for further reading on how movements evolve over time?
In terms of your point about grassroots: I think EA, in both its current form and its earliest form, probably could have promoted research on systems and complexity. Even though it's grassroots, it does now have semi-centralized funding and an open discourse (e.g., the EA Forum) where big ideas like "funding research on XYZ for the greater good is important" or "econ research helped China" can spread and influence funding. Perhaps people did promote systems problems, but it didn't catch on.