First off, I’d like to apologise: I’ve only now read the OP, and I was talking past you. I still think we have some genuine disagreements, however, so I’ll try to clarify those.
I was surprised by how much I liked the post. It separates the following approaches for improving the state of the world:
1. Further our understanding of what matters
2. Improve governance
3. Improve prediction-making & foresight
4. Reduce existential risk
5. Increase the number of well-intentioned, highly capable people
My ordering of tractability on these is roughly 5 > 4 > 3 > 2 > 1. There is then a question of importance and neglectedness: I basically think they’re all fairly neglected, and I don’t have strong opinions on importance except that number 4 is probably slightly higher than the rest.
Understanding what matters seems really hard. For the other problems (2-5) I can see strong feedback loops, grounded in math, for people to learn from (e.g. in governance, people can study microeconomic models, play with them, make predictions, and learn; a toy sketch follows below). I don’t see this for problem 1.
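To make the “feedback loops grounded in math” point concrete, here is a minimal sketch of the sort of thing I have in mind for governance: a linear supply/demand model you can solve, commit to a prediction about, perturb, and check. The model and every number in it are invented for illustration; none of this is from the OP or any particular source.

```python
# Toy illustration: demand Q = a - b*P against supply Q = c + d*P.
# All parameters below are made-up numbers for the sake of the example.

def equilibrium(a, b, c, d):
    """Return (price, quantity) where demand a - b*P meets supply c + d*P."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Baseline market.
p0, q0 = equilibrium(a=100, b=2, c=10, d=1)

# Prediction to test: a per-unit tax t on sellers shifts the supply
# intercept down by d*t, and the buyer-facing price should rise by only
# d/(b+d) * t, i.e. buyers and sellers split the burden by slope.
t = 6
p1, q1 = equilibrium(a=100, b=2, c=10 - 1 * t, d=1)

print(f"before tax: P={p0:.2f}, Q={q0:.2f}")   # P=30.00, Q=40.00
print(f"after tax:  P={p1:.2f}, Q={q1:.2f}")   # P=32.00, Q=36.00
print(f"price rise {p1 - p0:.2f} vs tax {t}")  # 2.00 < 6, as predicted
```

The loop is: write down a model, commit to a prediction (here, how much of the tax the buyer eats), perturb, and check. Problem 1 offers nothing so crisp to practise against, which is the asymmetry I’m pointing at.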
Sure, for all of 1-5 there will be steps where you have to step sideways (notice a key variable that everyone has been avoiding even thinking about, due to various incentives on what thoughts you can think), but there’s more scope for practice, and for a lot of good work that isn’t 80% deep philosophical insight or deep rationality ability.
Luke Muehlhauser tried really hard at problem 1 and found its tractability surprisingly low (1, 2).
There are worlds consistent with what I’ve said above where I would nonetheless want to devote significant resources to 1, if (say) we were making plans on a 100-200 year horizon. However, I believe we’re in a world where we will very soon have to score perfectly on number 4, and furthermore that scoring perfectly on number 4 will make the other problems much easier, including problem 1.
Summary: As I hadn’t read the OP, I read your comment as claiming that your approach to 1 was the only way to do good altruistic work. I then responded with reasons to think that other, more technical approaches were just as good (especially for the other problems). I now pivot to reasons to think that working on things other than 1 is more important, which I think we may disagree on.
I think I basically agree with all of this, except that maybe I think 1 is somewhat more tractable than you do. What I wrote was mostly a response to the OP’s listing of organizations working on 1, and my sense that the OP thought that these organizations were / are making positive progress, which is far from clear to me.