That requires a precise meaning of ‘expected value’ in this context, one that includes only certain varieties of uncertainty. It would take into account the actual probability that, for example, a comet is on a collision course with the Earth, but could not include the state of our knowledge about whether that is the case.
If it did include states of knowledge, then going from ‘low probability that a comet strikes the Earth and wipes out all or most human life’ to ‘barring our action to avoid it, near-certainty that a comet will strike the Earth and wipe out all or most human life’ would itself be a catastrophic event under the definition, and should be avoided.
Kind of? You assess past expected values in light of the information you have now, not just the information you had then. That way, finding out bad news isn’t itself the catastrophe.
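A minimal numeric sketch of that proposal, with entirely made-up values and probabilities: if past expected values are re-assessed with current information, the drop is located wherever the comet actually got onto its course, not at the moment of discovery.

```python
# Toy model of re-assessing past expected value with current information.
# All values and probabilities are made up for illustration.
V_SURVIVE, V_STRIKE = 1.0, 0.01

def ev(p_strike):
    """Expected value given a credence that the comet strikes."""
    return p_strike * V_STRIKE + (1 - p_strike) * V_SURVIVE

# Judged with last year's information, last year's EV looked high:
ev_last_year_then = ev(0.001)   # ~0.999

# Judged with today's information (the comet was already on course last year),
# last year's EV is re-assessed as low, and today's EV is the same low number:
ev_last_year_now = ev(0.99)     # ~0.020
ev_today = ev(0.99)             # ~0.020

# On this reading, nothing was lost at the moment of discovery,
# so finding out the bad news isn't itself the catastrophe.
print(ev_last_year_now == ev_today)   # True
```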
The line seems ambiguous, and I don’t like the talk of “objective probabilities” used to explain it. But you seem to be talking about E(V) as calculated by a hypothetical future agent after updating. Presumably the present agent looking at this future possibility only cares about its present calculated E(V) given that hypothetical, which need not be the same (if it handles counterfactuals in a sensible way). To the extent that the two are equal, it means the future agent is correct (in other words, the “catastrophic event” has already occurred), and finding this out would actually raise E(V) given that assumption.
When someone is ignorant of the actual chance of a catastrophic event, even if they consider it possible, their EV will be fairly high. When they update significantly toward that event happening, their EV will drop very sharply. That change itself meets the definition of ‘existential catastrophe’.
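For concreteness, a sketch of the reading being defended here with the same made-up numbers: if ‘expected value’ just means the agent’s own current estimate, the update alone wipes out most of it.

```python
# The "update is itself the catastrophe" reading, with the same made-up numbers.
V_SURVIVE, V_STRIKE = 1.0, 0.01

p_before = 0.001   # credence in a strike while ignorant of the comet
p_after = 0.99     # credence after spotting it (barring action to deflect)

ev_before = p_before * V_STRIKE + (1 - p_before) * V_SURVIVE   # ~0.999
ev_after  = p_after  * V_STRIKE + (1 - p_after)  * V_SURVIVE   # ~0.020

loss_fraction = 1 - ev_after / ev_before
print(round(loss_fraction, 2))   # ~0.98: "most expected value" lost by the update alone
```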
I don’t know what you think you’re saying; the definition no longer says that if you take it to refer to E(V) as calculated by the agent at the first time (conditional on the “catastrophe”).
ETA: “An existential catastrophe is an event which causes the loss of most expected value.”
That value wasn’t lost; they would have updated to reassess their expected value.
Sounds like evidential decision theory again. According to that argument, you should maintain high EV by avoiding looking into existential risks.
Yes, that’s my issue with the paper; it doesn’t distinguish that from actual catastrophes.
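A sketch of why ‘don’t look’ only preserves the naive number, not the ex-ante expectation, using the same made-up numbers plus an assumed 90%-reliable deflection option: the prior-weighted average of the possible post-update EVs equals the pre-update EV, and if detection enables action, looking strictly helps.

```python
# Conservation of expected evidence with the same made-up numbers.
V_SURVIVE, V_STRIKE = 1.0, 0.01
P_COMET = 0.001                      # prior credence that a comet is on course

ev_no_look = P_COMET * V_STRIKE + (1 - P_COMET) * V_SURVIVE

# Suppose a (hypothetical, perfectly reliable) search reveals whether the comet exists.
ev_if_found   = V_STRIKE             # post-update EV after bad news, no option to act
ev_if_cleared = V_SURVIVE            # post-update EV after good news
ev_look = P_COMET * ev_if_found + (1 - P_COMET) * ev_if_cleared

print(abs(ev_look - ev_no_look) < 1e-12)   # True: not looking buys no ex-ante EV

# If detection additionally enables a deflection attempt (made-up 90% success rate),
# looking is strictly better than not looking:
ev_if_found_with_deflection = 0.9 * V_SURVIVE + 0.1 * V_STRIKE
ev_look_and_act = P_COMET * ev_if_found_with_deflection + (1 - P_COMET) * ev_if_cleared
print(ev_look_and_act > ev_no_look)        # True
```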