Judging by the overwhelmingly favorable response, it certainly came out as we-need-two-Stalins criticism, whether or not I “intended” it that way. (One of the less expected side effects of this post was to cause me to update towards devoting more time to things that, unlike writing, don’t give me a constant dribble of social reinforcement.)
I think my criticism includes yours, in the following sense: if we solve the “we fail to converge on truth because too much satisficing” problem, we will presumably stop saying things like “but global poverty could totally be the best thing for the far future!” (which has been argued) and start to find the things that are actually the best thing for the far future without privileging certain hypotheses.
I have strong doubts about your (not personal but generic) ability to evaluate the far-future consequences of most anything.
This is my main problem with the idea that we should have a far-future focus. I just have no idea at all how to get a grip on far-future predictions, and so it seems absurdly unlikely that my predictions will be correct, which in turn makes it absurdly unlikely that I (or even most people) will be able to make a difference, except in a very few cases by pure luck.
It seems easier to evaluate “is trying to be relevant” than “has XYZ important long-term consequence”. For instance, investing in asteroid detection may not be the most important long-term thing, but it’s at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.
Even if third-world health is important to x-risk through secondary effects, it still seems that any effect it has on x-risk will necessarily be mediated through some object-level x-risk intervention. It doesn’t matter what started the chain of events that leads to decreased asteroid risk, but it has to go through some relatively small family of interventions that deal with it on an object level.
Insofar as current society isn’t involved in object-level x-risk interventions, it seems weird to think that bringing third-world living standards closer to our own will lead to more involvement in x-risk intervention without there being some sort of wider-spread availability of object-level x-risk intervention.
(Not that I care particularly much about asteroids, but it’s a particularly easy example to think about.)
I’m inclined to agree. A possible counterargument does come to mind, but I don’t know how seriously to take it:
1. Global pandemics are an existential risk. (Even if they don’t kill everyone, they might serve as civilizational defeaters that prevent us from escaping Earth or the solar system before something terminal obliterates humanity.)
2. Such a pandemic is much more likely to emerge and become a threat in less developed countries, because of worse general health and other conditions more conducive to disease transmission.
3. Funding health improvements in less developed countries would improve their level of general health and impede disease transmission.
4. From the above, investing in the health of less developed countries may well be related to x-risk.
5. (Optional) Asteroid detection, meanwhile, is mostly a solved problem.
Point 4 seems to follow from points 1-3. To me point 2 seems plausible; point 3 seems qualitatively correct, but I don’t know whether it’s quantitatively strong enough for the argument’s conclusion to follow; and point 1 feels a bit strained. (I don’t care so much about point 5 because you were just using asteroids as an easy example.)
Any given asteroid will either be detected and deflected in time, or not. There is, to my understanding at least, no mediocre level of asteroid impact risk management which makes the situation worse, in the sense of outright increasing the chance of an extinction event. More resources could be invested for further marginal improvements, with no obvious upper bound.
Poverty and disease are more complicated problems. Incautious use of antibiotics leads to disease-resistant strains, or you give a man a fish and he spends the day figuring out how to ask you for another instead of repairing his net. Sufficient resources need to be committed to solve the problem completely, or it just becomes even more of a mess. Once it’s solved, it tends to stay solved, and then there are more resources available for everything else because the population of healthy, adequately-capitalized humans has increased.
In a situation like that, my preferred strategy is to focus on the end-in-sight problem first, and compare the various bottomless pits afterward.
I would have to disagree that there is no mediocre way to make asteroid risk worse through poor impact risk management, but perhaps it depends on what we mean by this. If we’re strictly talking about the risk of some unmitigated asteroid hitting Earth, there is indeed likely nothing we can do to increase this risk. However, a poorly conceived detection, characterisation and deflection process could deflect an otherwise harmless asteroid into Earth. Further, developing deflection techniques could make it easier for people with malicious intent to deflect an otherwise harmless asteroid into Earth on purpose. Given how low the natural risk of a catastrophic asteroid impact is, I would argue that the chances of a man-made asteroid impact (either on purpose or by accident) are much higher than the chances of a natural one occurring in the next 100 years.
Yes, most x-risk reduction will have to come about through explicit work on x-risk reduction at some point.
It could still easily be the case that working on improving the living standards of the world’s poorest people is an effective route to x-risk reduction. In practice, scarcely anyone is going to work on x-risk as long as their own life is precarious, and scarcely anyone is going to do useful work on x-risk reduction if they are living somewhere that doesn’t have the resources to do serious scientific or engineering work. So interventions that aim, in the longish term, to bring the whole world up to something like current affluent-West living standards seem likely to produce a much larger population of people who might be interested in reducing x-risk and better conditions for them to do such work in.
See the point above about why it’s weird to think that new affluent populations will work more on x-risk if current affluent populations don’t do so at a particularly high rate.
Also, it’s easier to move specific people to a country than it is to raise the standard of living of entire countries. If you’re doing raising-living-standards as an x-risk strategy, are you sure you shouldn’t be spending money on locating people interested in x-risk instead?
I quite agree that if all you care about is x-risk then trying to address that by raising everyone’s living standards is using a nuclear warhead to crack a nut. I was addressing the following thing you said:
it seems weird to think that bringing third-world living standards closer to our own will lead to more involvement in x-risk intervention without there being some sort of wider-spread availability of object-level x-risk intervention.
which I think is clearly wrong: bringing everyone’s living standards up will increase the pool of people who have the motive and opportunity to work on x-risk. Since the number of people working on x-risk isn’t zero, that number will likely increase (say, by 2x) if the size of that pool increases (say, by 2x) as a result of making everyone better off.
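To put that scaling assumption in toy numbers (a purely illustrative sketch: the pool size and fraction below are made-up placeholders, not estimates from anywhere), suppose a roughly constant fraction of the secure, affluent population ends up working on x-risk; then the number of x-risk workers scales linearly with the size of that pool:

```python
# Toy model of the "bigger pool -> proportionally more x-risk workers" claim.
# Both numbers below are hypothetical placeholders, not estimates.

def xrisk_workers(pool_size: float, fraction: float = 1e-6) -> float:
    """Expected number of x-risk workers, assuming a constant fraction
    of the affluent, secure population chooses to work on x-risk."""
    return pool_size * fraction

baseline_pool = 1.0e9            # hypothetical size of today's affluent pool
doubled_pool = 2 * baseline_pool

print(xrisk_workers(baseline_pool))  # 1000.0
print(xrisk_workers(doubled_pool))   # 2000.0 (double the pool, double the workers)
```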
I wasn’t claiming (because it would be nuts) that the way to get the most x-risk bang per buck is to reduce poverty and disease in the poorest parts of the world. It surely isn’t, by a large factor. But you seemed to be saying it would have zero x-risk impact (beyond effects like reducing pandemic risk by reducing overall disease levels). That’s all I was disagreeing with.