60%(extinction by nuclear war) + 30%(extinction by asteroids) + 10%(nothing)?
If you prefer the latter enough to make some comparable sacrifice in the «nothing» term, and you’re rational, I’d say you assign a higher UExistential to «extinction by asteroids» than to «extinction by nuclear war» (hopefully both negative numbers).
Under the system you describe it is possible to weight each of the below scenarios equally:
0.6(extinction by nuclear war) + (0.3 + 2e)(extinction by asteroids) + (0.1 − 2e)(nothing)
0.01(extinction by nuclear war) + (0.89 + e)(extinction by asteroids) + (0.1 − e)(nothing)
0.9(extinction by nuclear war) + 0.1(nothing)
You have allowed for agents to assign different weights to different probabilities of certain outcomes. This does more than allow for risk aversion. It allows risk-seeking and arbitrary risk appreciation, such as preferring any split between two kinds of existential catastrophe to a 90% chance of either. You can conclude nothing about whether you prefer asteroids to nuclear war from a preference among the scenarios that either of us just listed. You cannot shut up and multiply (except when the agent’s whims make it so).
In the VNM model, the probabilities are assumed to be from a source which is indifferent to the agent, and numerically agreed upon. That’s how the proof of the theorem works. We already discussed this above, but thanks for writing out an example to demonstrate it explicitly.
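For concreteness, here is a rough check of what a fixed-probability expected-utility calculation says about the three scenarios above, assuming U(nothing) = 0 and writing N and A (labels chosen here only for illustration) for the utilities of extinction by nuclear war and by asteroids:

    % Sketch only: U(nothing) = 0, N = U(extinction by nuclear war),
    % A = U(extinction by asteroids); S_1, S_2, S_3 are the scenarios above.
    \begin{align*}
    \mathrm{EU}(S_1) &= 0.6\,N + (0.3 + 2e)\,A \\
    \mathrm{EU}(S_2) &= 0.01\,N + (0.89 + e)\,A \\
    \mathrm{EU}(S_3) &= 0.9\,N \\
    \mathrm{EU}(S_1) = \mathrm{EU}(S_2) &\;\Rightarrow\; 0.59\,N = (0.59 - e)\,A \\
    \mathrm{EU}(S_1) = \mathrm{EU}(S_3) &\;\Rightarrow\; (0.3 + 2e)\,A = 0.3\,N \\
    \text{both together} &\;\Rightarrow\; 0.59\,(0.3 + 2e) = 0.3\,(0.59 - e)
      \;\Rightarrow\; 1.48\,e = 0 .
    \end{align*}

So, unless N = A = 0, a fixed-probability expected-utility maximizer can be indifferent among all three scenarios only when e = 0; weighting them equally for e > 0 requires re-weighting the probabilities themselves, which is exactly what the fixed-probability assumption disallows.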
The discussion you linked to does not appear to be related to my comment. Unless for some reason you have rejected parts of your original post, this claim is mistaken:
If you prefer the latter enough to make some comparable sacrifice in the «nothing» term, and you’re rational, I’d say you assign a higher UExistential to «extinction by asteroids» than to «extinction by nuclear war» (hopefully both negative numbers).
When I say the probabilities are assumed to be from a source which is indifferent to the agent, I mean that the VNM model would treat modifying the probabilities as simply presenting a different problem. Maybe you’re asserting that, in principle, the assumption that we can be sure of the agent’s estimation of the probabilities can’t be met. I would agree.
I didn’t modify the probabilities. Unless you consider making the ‘e’ that you expressed in words explicit, or converting percentages to decimals, to be ‘modification’. I also added a third scenario to demonstrate a compatible alternative.
I have concluded that the opening post contains a trivial error that is unlikely to be corrected. The worst place for errors is in an official-looking context with otherwise high-quality content, so I have reversed my upvote.
Unless you consider making the ‘e’ that you expressed in words explicit …
Ah! I see what you mean, and now agree that you didn’t modify the scenario. So I can better understand, when you say
You cannot shut up and multiply (except when the agent’s whims make it so).
does the “except” clause mean “except when the agent is VNM-rational”, or some similar assumption?
If your point is that I can’t deduce you are VNM-rational from your response to the extinction question alone, then you’re definitely right. If I came across to the contrary, I should reword the post.
What I meant to point out in that paragraph of the post is just that
1) there is no need to be “freaked out” by assigning finite utilities to huge events, because
2) even if you insist on treating them as “infinitely more important” than everyday events, you can just use Hausner utility to break ties in more trivial decisions (where the EUexistential values are equal, or “noisily indistinguishable” due, for example, to a precise time limit on a decision, after which you’ve failed to distinguish existential risks from noise). I consider this an argument in favor of using in-context VNM utility normatively, since it is simpler than Hausner and differs from it only rarely, over relatively unimportant matters in the context.
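As a rough illustration of that tie-breaking rule, here is a minimal sketch of a two-level, Hausner-style lexicographic comparison; the function names and numbers are made up for this example:

    # Sketch of two-level (Hausner-style) lexicographic utility: the existential
    # level decides first; the everyday level only breaks (noisy) ties.
    # Each lottery is a list of (probability, u_existential, u_everyday) triples.

    def expected(lottery, level):
        """Expected utility at one level (1 = existential, 2 = everyday)."""
        return sum(outcome[0] * outcome[level] for outcome in lottery)

    def prefers(a, b, noise=0.0):
        """True iff lottery a beats lottery b under the lexicographic rule."""
        top_a, top_b = expected(a, 1), expected(b, 1)
        if abs(top_a - top_b) > noise:
            return top_a > top_b                 # existential stakes settle it
        return expected(a, 2) > expected(b, 2)   # otherwise everyday stakes do

    # Two options with equal existential stakes get ranked by everyday consequences.
    safe_errand  = [(1.0, 0.0, 5.0)]
    risky_errand = [(0.999, 0.0, 6.0), (0.001, 0.0, -20.0)]
    print(prefers(risky_errand, safe_errand))  # True, decided at the everyday level

When the existential-level expectations can be told apart, the everyday level never matters, which is why this rule differs from plain in-context VNM only on the rare ties described above.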
I want to convince as many people as possible to use VNM utility confidently. I’d value any suggestions you have to rephrase the post to that effect, since (I think) you noticed the above unclarity in the first place.
If your point is that I can’t deduce you are VNM-rational from your response to the extinction question (and further questions) alone? Then you’re definitely right. If I came across to the contrary, I should reword the post.
Exactly! Perhaps just change “I’d say you assign a higher” to “then maybe you assign a higher”. That would be enough to stop people (me) twisting their minds in a knot trying to work out whether there is an oversight or whether they just don’t understand the whole VNM thing. I would also add something like “(or perhaps you just value unpredictability in your impending doom!)”, which would help the curious mind (or, me again!) to confirm their understanding of how the VNM structure allows for valuing uncertain events.
I think I agree with your point. I am dubious about the whole ‘infinitely more important’ thing but if VNM handles that in a sane way then the problem of beliefs about infinity rests on the agent and his map and VNM can just handle whatever values and expectations the agent happens to have.
(Thanks for taking the time to take another look at what I was trying to say. It isn’t often that I see conversations that go “No → huh? Actually. → wtf? No really” resolve themselves to agreement. So my prediction is one I am glad to be mistaken on!)
Okay, I rewrote the ending, and added a footnote about “valuing uncertainty”. Thanks for all the input!
Maybe it helps that we’re both more concerned with figuring stuff out than “winning an argument” :)