I’d like to see the calculation updated with a significantly wider range of effectiveness for non-EAs (it need not be the Gaussian distribution that Guesstimate defaults to), maybe even one that includes a realistic percentage of people whose effectiveness matches that of EAs, as a subset at the upper end of the distribution (because there are likely people thinking and acting like EAs without being part of the movement, or without ever having heard of it).
Agreed that non-EAs have a wide range of effectiveness at improving the world, so 3.5 utilons is a rough guesstimate. However, we have to keep in mind that many non-EAs have negative effectiveness, for example people who produce cigarettes. So it would have to be a complex calculation. We can certainly try to get at this number, but it would take a lot of work to account for all factors appropriately.
I’m not suggesting that you account for ‘all factors appropriately’, only that you not model non-EAs as ‘close to zero’. Why not be honest and model them as zero on average? That would net you a literally infinitely better effectiveness of converting non-EAs, which suggests that there is something wrong with the calculation. The difficulty of converting people to EA also depends on how EA-affine they are to begin with, and that has to be taken into account somehow.
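The divergence being pointed at here can be made concrete in a few lines. All numbers below are illustrative placeholders, not figures from the actual model (the 3.5 comes from this thread; the 35 is invented):

```python
def conversion_multiplier(ea_value: float, non_ea_value: float) -> float:
    """How many times more impactful an EA is than a non-EA, per the model."""
    return ea_value / non_ea_value

# With a hypothetical 35 utilons/EA and the thread's 3.5 utilons/non-EA:
print(conversion_multiplier(35.0, 3.5))  # prints 10.0

# The multiplier is extremely sensitive to the non-EA baseline:
for baseline in (3.5, 0.35, 0.035):
    print(conversion_multiplier(35.0, baseline))
# ...and diverges to infinity as the baseline approaches zero, even though
# the absolute gain from one conversion (35 - baseline) barely changes.
```

This is why the ratio framing breaks down near zero: the absolute difference, not the multiplier, is the quantity that stays well-behaved.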
I think on average non-EA people are making the world slightly better, guided by various incentive structures—from common sense, to empathy, to efficient markets. But on average people are not committed to making the world as good as it can get through their actions. I think this intentionality on the part of EA participants, their willingness to devote sizable resources to this area, and their willingness to update based on evidence justifies the huge multiple for how much better EAs make the world compared to non-EA people.
However, this is only on average. I certainly would think that some non-EA people have as much of a positive impact as EA participants, if they happen to do things that are EA-aligned, such as support GiveDirectly, MIRI, etc. Or they could be helping the world in other ways, such as pushing for limiting nuclear risk, preventing pandemic risk, etc.
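One way to make the original suggestion concrete is a Monte Carlo sketch that models non-EA effectiveness as a mixture: a net-negative slice, a broad slightly-positive middle, and a small EA-like tail of people who act like EAs without identifying with the movement. Every parameter below (the 10%/85%/5% weights, the means, the spreads) is invented for illustration and comes from neither the thread nor the Guesstimate model:

```python
import random

random.seed(0)

def sample_non_ea_utilons() -> float:
    """Draw one non-EA's yearly impact from a hypothetical mixture."""
    r = random.random()
    if r < 0.10:
        # Net-negative actors (e.g. people producing harmful products).
        return random.gauss(-5.0, 3.0)
    elif r < 0.95:
        # The broad middle: slightly positive on average, as argued above.
        return random.gauss(3.5, 2.0)
    else:
        # EA-like tail: unaffiliated but highly effective people.
        return random.gauss(35.0, 10.0)

samples = [sample_non_ea_utilons() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"mean non-EA utilons: {mean:.2f}")
```

Even this toy version shows the shape of the answer: the average stays modestly positive, but the distribution is wide, crosses zero, and overlaps the EA range at its upper end, which is exactly the structure a single point estimate like 3.5 hides.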