Why does that measure matter? You care about the risk of any existential threat. The fact that it happened through grey goo rather than a Friendliness failure would be little consolation.
It may matter because, if many scenarios have costly solutions that are very specific and don’t help at all with other scenarios, and you can only afford to build a few solutions, you don’t know which ones to choose.
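To make the point concrete, here is a minimal Python sketch; the scenario names, probabilities, mitigation costs, and effectiveness figures are all invented for illustration. It only shows that, once mitigations are scenario-specific and the budget is limited, the per-scenario breakdown, not just the total risk, determines what you should build.

```python
# Toy illustration (all numbers hypothetical): with a fixed budget and
# scenario-specific mitigations, the split of risk across scenarios,
# not just the total, decides which mitigations are worth funding.

scenarios = {
    # name: (probability of that catastrophe, cost of its specific fix,
    #        fraction of that scenario's risk the fix removes)
    "grey goo":          (0.02, 3.0, 0.9),
    "unfriendly AI":     (0.05, 4.0, 0.8),
    "engineered plague": (0.01, 2.0, 0.7),
}
budget = 5.0  # total resources available (arbitrary units)

# Greedy ranking by risk reduced per unit cost: a rough heuristic
# rather than an optimal knapsack solution.
ranked = sorted(
    scenarios.items(),
    key=lambda kv: kv[1][0] * kv[1][2] / kv[1][1],
    reverse=True,
)

spent, funded = 0.0, []
for name, (p, cost, eff) in ranked:
    if spent + cost <= budget:
        funded.append(name)
        spent += cost

total_risk = sum(p for p, _, _ in scenarios.values())
residual = sum(
    p * ((1 - eff) if name in funded else 1.0)
    for name, (p, cost, eff) in scenarios.items()
)
print(f"funded: {funded}; total risk {total_risk:.3f} -> residual {residual:.3f}")
```

Two worlds with the same total risk but different breakdowns can call for entirely different purchases, which is why the per-scenario measure carries information the total does not.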
Yes, I know that reasons exist to distinguish them, but I was asking for a reason relevant to the present discussion, which concerned how to assess total existential risk.
Well, it has more to do with the original discussion. If you’re going to discount doomsday scenarios by putting them in appropriate reference classes and so forth, then either you automatically discount all predictions of collapse (which seems dangerous and foolish), or you have to explain very well indeed why you’re treating one scenario a bit seriously after dismissing ten others out of hand.
The original discussion was on this point:
Or, if the reference class is “science-y Doomsday predictors”, then they’re almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time. So far, in spite of countless cases of science predicting doom and gloom, not a single one of them has turned out to be true, usually not just barely enough to be discounted by anthropic principle, but spectacularly so. The cornucopians were virtually always right.
taw was saying that you should discount existential risk as such because it (the entire class of scenarios) is historically wrong. So it is the existential risk across all scenarios that is relevant here.
We’d see the exact same type of evidence today if a doomsday (of any kind) were coming, so this kind of evidence is not sufficient.
I thought I addressed this with the “usually not just barely enough to be discounted by anthropic principle, but spectacularly so” part. Anthropic-principle-style reasoning can only be applied to disasters whose distributions are binary (either they wipe out every observer in the universe, or at least on Earth, or they don’t happen at all), or at least extremely skewed power laws.
I don’t see any evidence that most disasters would follow such a distribution. I would expect any non-negligible chance of nuclear warfare destroying humanity to imply a near-certainty of limited-scale nuclear warfare, with millions dying every couple of years.
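A minimal Monte Carlo sketch of this distinction, with all probabilities invented for illustration: for an all-or-nothing disaster, observation selection means survivors see a clean record no matter how large the true risk was, while for a graded hazard a non-negligible extinction risk should leave survivable near-misses that survivors would have observed.

```python
# Toy Monte Carlo (all probabilities invented). Anthropic selection can
# hide an all-or-nothing catastrophe from the historical record, because
# only worlds that avoided it still contain observers. It cannot hide a
# graded hazard: if full-scale destruction were likely, most surviving
# observers would still have seen survivable near-misses.
import random

random.seed(0)
YEARS, WORLDS = 65, 20_000  # years of history, simulated worlds

def simulate(p_extinction, p_limited):
    """Return (surviving worlds, fraction of survivors that saw a limited disaster)."""
    survivors = saw_limited = 0
    for _ in range(WORLDS):
        extinct = limited = False
        for _ in range(YEARS):
            if random.random() < p_extinction:
                extinct = True
                break
            if random.random() < p_limited:
                limited = True
        if not extinct:
            survivors += 1
            saw_limited += limited
    return survivors, saw_limited / max(survivors, 1)

# Binary hazard: survivors' records look identical whether the risk is
# real or not, so a clean record tells them little.
print(simulate(p_extinction=0.01, p_limited=0.0))

# Graded hazard: nearly every surviving world has seen limited disasters,
# so an actual clean record is genuine evidence against a high risk.
print(simulate(p_extinction=0.01, p_limited=0.05))
```

On these made-up numbers roughly half the worlds survive in both cases, but in the graded case over 95 percent of the survivors have witnessed at least one limited disaster, which is the sense in which a clean record can discount graded doomsday scenarios but not binary ones.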
I think anthropic-principle reasoning is used so often here, and so sloppily, that we’d be better off throwing it away completely.
It may matter because, if many scenarios have costly solutions that are very specific and don’t help at all with other scenarios, and you can only afford to build a few solutions, you don’t know which ones to choose.
This is a good point. Fortunately, as it happens, we can just create an FAI and pray unto Him to ‘deliver us from evil’.