In the cancer diagnosis example, part of the reason I'd say it's less clear that Sylvanus is being an idiot is that you really might be able to get some evidence about the presence of cancer by paying close attention to the affected organ.
I think I see where you’re coming from, though. The importance of a cancer diagnosis (compared to a news addiction) does mean that trying out various apparently dumb ways of getting at the truth becomes a lot more sane. But I don’t think I understand what you’re saying in the first sentence. What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?
(For context, my knowledge about the various existential risks humanity might face is pretty shallow, but I’m on board with the idea that they can exist and are important to think about and act upon.)
What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?
I guess I opted for too much brevity. By their very nature, existential threats are events we don't* have any examples of actually happening, so we have to rely very heavily on counterfactuals, which aren't the most reliable kind of reasoning. How can we reason about what conditions would lead up to a nuclear war, for example? We have no data about what led up to one in the past, so we have to rely on abstractions like game theory and on reasoning about how close we came to nuclear war in the past. But we need to develop some sort of policy to make sure it doesn't kill us all either way.
*at a global scale at least. There are civilizations which completely died off (Rapa Nui is an example), but we have few of these, and they’re only vaguely relevant, even as far as climate change goes.
Ah, I see what you’re saying now. So it is analogous to the cancer example: higher stakes make less-likely-to-succeed efforts more worth doing. (When compared with lower stakes, not when compared with efforts more likely to succeed, of course.) That makes sense.