Often the issue is that what you’re trying to predict is sufficiently important that you need to assume *something*, even if the tools you have available are insufficient. Existential risks generally fall in this category. Replace the news with an upcoming cancer diagnosis, and telepathy with paying very careful attention to the affected organ, and whether Sylvanus is being an idiot becomes much less clear.
On the other hand, if someone is taking even odds on an extremely specific series of events, yeah, they’re kind of dumb. And I wouldn’t be surprised to find pundits doing this.
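To put some purely made-up numbers on why that’s a bad bet: even if every individual step in the predicted chain is fairly likely on its own, the probability of the whole conjunction falls off fast, so even odds on the full chain are far too generous. Something like this sketch (the five steps and the 70% figure are just illustrative assumptions):

```python
# Illustrative only: probability that an entire predicted chain of events comes
# true, assuming (hypothetically) five independent steps, each 70% likely.
step_probabilities = [0.7, 0.7, 0.7, 0.7, 0.7]

chain_probability = 1.0
for p in step_probabilities:
    chain_probability *= p

print(f"P(every step happens) = {chain_probability:.2f}")  # ~0.17
# Taking even odds implicitly claims P is about 0.5, so the bettor is
# giving away a large edge.
```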
As a side note, I wonder if I should have had him bet on a less specific series of events. The way the story is currently written makes it almost sound like I’m just rehashing the “burdensome details” sequence, but what I was really trying to call out was the fairly specific fallacy of “X is all the information I have access to, therefore X is enough information to make the decision”.
Overall I wish I had put more thought into this story. I did let it simmer in my mind for a few days after writing it but before posting it, but the decision to finally publish was kind of impulsive, and I didn’t try very hard to determine whether it was comprehensible before doing so. Oops! I’ve updated towards “I need to do more work to make my writing clear”.
Writing well is really hard. Thanks for sharing.
In the cancer diagnosis example, part of the reason that I would think it’s less clear that Sylvanus is being an idiot is that you really might be able to get some evidence about the presence of cancer by paying close attention to the affected organ.
I think I see where you’re coming from, though. The importance of a cancer diagnosis (compared to a news addiction) does mean that trying out various apparently dumb ways of getting at the truth becomes a lot more sane. But I don’t think I understand what you’re saying in the first sentence. What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?
(For context, my knowledge about the various existential risks humanity might face is pretty shallow, but I’m on board with the idea that they can exist and are important to think about and act upon.)
I guess I opted for too much brevity. By their very nature, existential threats are something we don’t* have any examples of actually happening, so we have to rely very heavily on counterfactuals, which aren’t the most reliable kind of reasoning. How can we reason about what conditions lead up to a nuclear war, for example? We have no data about what led up to one in the past, so we have to rely on abstractions like game theory and on reasoning about how close to nuclear war we came in the past. But we need to develop some sort of policy to make sure it doesn’t kill us all either way.
*At a global scale, at least. There are civilizations that completely died off (Rapa Nui is an example), but we have few of these, and they’re only vaguely relevant, even as far as climate change goes.
Ah, I see what you’re saying now. So it is analogous to the cancer example: higher stakes make less-likely-to-succeed efforts more worth doing. (When compared with lower stakes, not when compared with efforts more likely to succeed, of course.) That makes sense.
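To spell out that expected-value intuition with some made-up numbers (the success probability, stakes, and cost below are purely illustrative assumptions, not anything from the discussion): the same long-shot effort can be clearly worth its cost at high stakes and clearly not at low stakes.

```python
# Illustrative expected-value comparison with made-up numbers.
def expected_gain(p_success: float, stakes: float, cost: float) -> float:
    """Expected net benefit of an effort that succeeds with probability p_success."""
    return p_success * stakes - cost

p_success = 0.05   # a "less likely to succeed" effort
cost = 1.0         # the effort costs the same either way

low_stakes = 10.0     # e.g. avoiding a minor nuisance
high_stakes = 1000.0  # e.g. catching a cancer early

print(expected_gain(p_success, low_stakes, cost))   # -0.5  -> not worth the effort
print(expected_gain(p_success, high_stakes, cost))  # 49.0  -> clearly worth it
```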