When I hear the words “intelligence” and “wisdom”, I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups. So at the very least I would use a different word for that, though I’m not sure which one. I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.
When I think about some of humanity’s greatest advances in this area, I think of things like probability theory and causal inference and expected values—things that I associate with academic departments of mathematics and economics (and not philosophy). This makes me wonder how nascent this really is?
When I hear the words “intelligence” and “wisdom”, I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups.
I tried to make it clear that I was referring to groups with the phrase “of humanity”, as in “as a whole”, but I can see how that could be confusing.
the wisdom and intelligence[1] of humanity
For those interested in increasing humanity’s long-term wisdom and intelligence[1]
I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.
I imagine there’s a lot of overlap. I’d also be fine with multiple prioritization research projects, but I think it’s too early to decide that.
This makes me wonder how nascent this really is?
I’m not arguing that people haven’t had successes in this field (I think there’s been a ton of progress over the last few hundred years, and that’s terrific). I would argue, though, that there’s been very little formal prioritization of such progress. Similar to how EA has helped formalize the prioritization of global health and longtermism, we have yet to see similar efforts for “humanity’s wisdom and intelligence”.
I think that there are likely still strong marginal gains in at least some of the intervention areas.