Not sure I like the (iii) definition (“the loss of most expected value”). It just transfers all the burden onto the word “value”, which is opaque, slippery, and subject to wildly different interpretation.
Consider that e.g. for all the Christians an irrefutable discovery that the whole Jesus thing was a fake and a hoax would count as an existential catastrophe.
People can certainly value different things, and value the same things differently. But as long as everyone correctly communicates what they value to everyone else, we can talk about expected value unambiguously and usefully.
If true, and if the value lost is much greater than whatever would be gained elsewhere (by me or them or someone else) from their learning the truth, then I as a non-Christian would try to prevent Christians from learning it. What is ambiguous about this?
Would you call this “an existential catastrophe”?
It’s not one for me, but it might be for somebody else. You presented the counterfactual that it is one to Christians, and I didn’t want to deny it.
I’m not sure what your point is. Is it that saying anything might be an existential catastrophe to someone with the right values dismisses the literal meaning of “existential”?
That’s a pretty important point. Are we willing to define an existential catastrophe subjectively?
If you define existential risk as e.g. a threat of extinction, that definition has some problems but it does not depend on someone’s state of mind—it is within the realm of reality (defined as what doesn’t go away when you stop believing in it). Once you start talking about expected value, it’s all in the eye of the beholder.
This is true—these are two completely different things. And I assume from the comments on this post that the OP does indeed define it subjectively, i.e. via loss of (expected) value. Each is worthy of discussion, and I think the two discussions do mostly overlap, but we should be clear as to what we’re discussing.
Cases of extinction that aren’t existential risk for some people: rapture / afterlife / end-of-the-world religious scenarios; uploading and the consequent extinction of biological humanity (most people today would not accept uploading as a substitute for their ‘real’ life); being replaced by our non-human descendants.
Cases of existential risk (for some people’s values) that don’t involve extinction: scenarios where all remaining humans hold values dramatically different from your own; revelation that one’s religion or deeply held morality is objectively wrong; humanity failing to populate/influence the universe; and many others.
These are not cases of extinction. Christians wouldn’t call the Second Coming “extinction”—after all, you are getting eternal life :-/ I wouldn’t call total uploading “extinction” either.
I would call Armageddon (as part of the Second Coming) extinction. And Christians would call forced total uploading extinction (as a form of death).
That value wasn’t lost; they would have updated to reassess their expected value.
That requires a precise meaning of expected value in this context that includes only certain varieties of uncertainty. It would take into account the actual probability that, for example, a comet exists which is on a collision course with the Earth, but could not include the state of our knowledge about whether that is the case.
If it did include states of knowledge, then going from ‘low probability that a comet strikes the Earth and wipes out all or most human life’ to ‘Barring our action to avoid it, near-certainty that a comet will strike the Earth and wipe out all or most human life’ is itself a catastrophic event and should be avoided.
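To make the distinction concrete, here is a minimal numeric sketch; the comet chances, the value numbers, and the all-or-nothing loss model are made up purely for illustration:

```python
# Hypothetical numbers only: value of a comet-free future, and the actual
# ("objective") chance that a collision-course comet exists.
V_FUTURE = 1.0
P_COMET_ACTUAL = 0.9

def expected_value(p_comet):
    """E(V) if a strike destroys all value with probability p_comet."""
    return (1 - p_comet) * V_FUTURE

# Objective reading: E(V) reflects the comet's actual chance all along,
# so spotting it through a telescope changes nothing.
ev_objective = expected_value(P_COMET_ACTUAL)   # 0.10 before and after

# State-of-knowledge reading: E(V) uses our credence, which jumps on discovery.
ev_before = expected_value(0.01)                # credence before looking
ev_after = expected_value(P_COMET_ACTUAL)       # credence after looking

loss_fraction = 1 - ev_after / ev_before        # ~0.90
print(f"objective E(V): {ev_objective:.2f}")
print(f"knowledge-based E(V): {ev_before:.2f} -> {ev_after:.2f} "
      f"({loss_fraction:.0%} of expected value lost by the discovery itself)")
```

On the state-of-knowledge reading the telescope observation wipes out most expected value, so the discovery itself would qualify as the catastrophe.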
Kind-of? You assess past expected values in light of information you have now, not just the information you had then. That way, finding out bad news isn’t the catastrophe.
The line seems ambiguous, and I don’t like this talk of “objective probabilities” used to explain it. But you seem to be talking about E(V) as calculated by a hypothetical future agent after updating. Presumably the present agent looking at this future possibility only cares about its present calculated E(V) given that hypothetical, which need not be the same (if it deals with counterfactuals in a sensible way). To the extent that they are equal, it means the future agent is correct—in other words, the “catastrophic event” has already occurred—and finding this out would actually raise E(V) given that assumption.
When someone is ignorant of the actual chance of a catastrophic event happening, even if they consider it possible, they will have fairly high EV. When they update significantly toward the chance of that event happening, their EV will drop very significantly. This change itself meets the definition of ‘existential catastrophe’.
Sounds like evidential decision theory again. According to that argument, you should maintain high EV by avoiding looking into existential risks.
Yes, that’s my issue with the paper; it doesn’t distinguish that from actual catastrophes.
I don’t know what you think you’re saying—the definition no longer says that if you consider it to refer to E(V) as calculated by the agent at the first time (conditional on the “catastrophe”).
ETA: “An existential catastrophe is an event which causes the loss of most expected value.”
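For reference, one minimal way to formalize that sentence (the threshold θ cashing out “most” is a hypothetical addition, and which probabilities the expectations use is exactly what is at issue below):

$$
C \text{ is an existential catastrophe} \;\iff\; \mathbb{E}[V \mid C] < (1 - \theta)\,\mathbb{E}[V], \qquad \theta \in (0.5, 1].
$$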
We specified objective probabilities to avoid such discoveries being the catastrophes (but value is deliberately subjective). There may be interesting versions of the idea which use subjective probabilities.
I don’t understand that sentence. Where do your “objective probabilities” come from?
Exactly how to cash out objective probabilities is a tricky problem which is the subject of a substantial literature. We didn’t want to tie our definition to any particular version, believing that it’s better to parcel off that problem. But my personal view is that roughly speaking you can get an objective probability by taking something like an average of subjective probabilities of many hypothetical observers.
Sorry, still not making any sense to me. “Taking something like an average of subjective probabilities of many hypothetical observers” looks precisely like GIGO, and I don’t understand how you get something objective out of subjective perceptions of hypotheticals(!).
If you don’t think the concept of “objective probability” is salvageable I agree that you wouldn’t want to use it for defining other things.
I don’t want to go into detail of my personal account of objective probability here, not least because I haven’t spent enough time working it out to be happy it works! The short answer to your question is you need to define an objective measure over possible observers. For the purposes of defining existential risk, you might be better to stop worrying about the word “objective” and just imagine I’m talking about the subjective probabilities assigned by an external observer who is well-informed but not perfectly informed.
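A toy sketch of the “average over hypothetical observers” idea; the observers, their measure weights, and their credences are all invented for illustration and are not the construction from any particular account:

```python
# Toy "objective probability": a measure-weighted average of the subjective
# probabilities that hypothetical observers assign to an event. All numbers
# are stipulated for illustration.
observers = [
    # (weight under some measure over possible observers, observer's credence)
    (0.5, 0.02),  # poorly informed observer
    (0.3, 0.10),  # moderately informed observer
    (0.2, 0.60),  # well (but not perfectly) informed observer
]

total_weight = sum(w for w, _ in observers)
objective_probability = sum(w * p for w, p in observers) / total_weight
print(f"'objective' probability: {objective_probability:.2f}")  # 0.16
```

As the comment notes, the hard part is defining the measure over possible observers; the weights above are pure stipulation.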
This seems to conflate people’s values with their asserted values. Because of belief-in-belief and similar effects, we can’t assume those to be the same when modeling other people. We should also expect that people’s values are more complex than the values that they will assert (or even admit).
So replace “Christians” with “people who truly believe in the coming Day of Judgement and hope for eternal life”.
I had a college roommate who went through a phase where he wanted to die and go to heaven as soon as possible, but believed that committing suicide was a mortal sin.
So he would do dangerous things — like take walks in the middle of the (ill-lit, semi-rural) road from campus to town, wearing dark clothing, at night — to increase (or so he said) his chances of being accidentally killed.
Most Christians don’t do that sort of thing. Most Christians behave approximately as sensibly as *humanists do with regard to obvious risks to life. This suggests that they actually do possess values very similar to *humanist values, and that their assertions otherwise are tribal cheering.
(It may be that my roommate was just signaling extreme devotion in a misguided attempt to impress his crush, who was the leader of the college Christian club.)
Note that one can be a religious Christian and still act that way. Catholics, for example, consider deliberately taking risks like that to itself be sinful.