When someone is ignorant of the actual probability of a catastrophic event, even if they consider it possible, their calculated EV will be fairly high. When they update sharply toward expecting that event, their calculated EV drops by most of its value. That update, by itself, meets the definition of 'existential catastrophe'.
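To make the arithmetic concrete, here is a minimal sketch; the probabilities and values are invented for illustration, not taken from the paper:

```python
# Hypothetical two-outcome world: all numbers are made up for illustration.
V_FLOURISH = 1_000_000  # value of a flourishing future (arbitrary units)
V_CATASTROPHE = 0       # value if the catastrophe occurs

def calculated_ev(credence: float) -> float:
    """The agent's own E(V), given its credence that the catastrophe occurs."""
    return (1 - credence) * V_FLOURISH + credence * V_CATASTROPHE

ev_ignorant = calculated_ev(0.01)  # ignorant agent: 990000.0
ev_updated = calculated_ev(0.90)   # after a large update: 100000.0

# The update alone wiped out about 90% of the agent's calculated E(V),
# so read literally, it "causes the loss of most expected value".
```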
Sounds like evidential decision theory again. According to that argument, you should maintain high EV by avoiding looking into existential risks.
Yes, that’s my issue with the paper; the definition doesn’t distinguish that kind of belief update from an actual catastrophe.
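Concretely (same invented numbers as above): whether the agent looks changes only its bookkeeping, not the world, yet only the agent who looks suffers a definitional 'catastrophe':

```python
# Same hypothetical numbers; the world's true state is fixed throughout.
V_FLOURISH, V_CATASTROPHE = 1_000_000, 0

def calculated_ev(credence: float) -> float:
    return (1 - credence) * V_FLOURISH + credence * V_CATASTROPHE

true_value = V_CATASTROPHE  # suppose the world is in fact headed for catastrophe

ev_never_looks = calculated_ev(0.01)    # 990000.0: stays high indefinitely
ev_after_looking = calculated_ev(0.95)  # 50000.0: craters on updating

# Under the quoted definition, only the agent who looked suffered an
# "existential catastrophe", even though true_value is identical in both cases.
```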
I don’t know what you think you’re saying; the definition no longer says that if you take it to refer to E(V) as calculated by the agent at the earlier time, conditional on the “catastrophe”.
ETA: “An existential catastrophe is an event which causes the loss of most expected value.”
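One way to cash out that earlier-time reading (my gloss, with invented likelihoods): by conservation of expected evidence, deciding to look costs the time-1 agent nothing in expectation, so the "avoid looking into existential risks" argument no longer goes through:

```python
# My gloss, with invented likelihoods: a noisy "test" for impending doom.
V = 1_000_000
prior = 0.01  # time-1 credence in catastrophe

p_pos_given_doom = 0.99  # P(test positive | catastrophe coming)
p_pos_given_safe = 0.02  # P(test positive | no catastrophe)

p_pos = prior * p_pos_given_doom + (1 - prior) * p_pos_given_safe
posterior_if_pos = prior * p_pos_given_doom / p_pos
posterior_if_neg = prior * (1 - p_pos_given_doom) / (1 - p_pos)

def ev(p: float) -> float:
    return (1 - p) * V

# Expected post-test E(V), as calculated by the agent at the earlier time:
expected_ev_after = p_pos * ev(posterior_if_pos) + (1 - p_pos) * ev(posterior_if_neg)

print(ev(prior))          # 990000.0
print(expected_ev_after)  # ~990000.0 (equal up to rounding): looking loses
                          # nothing in expectation, so the definition, read
                          # this way, does not reward ignorance.
```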