I mean that it would treat your modifying the probabilities as simply presenting a different problem.
I didn’t modify the probabilities, unless you consider making the ‘e’ that you expressed in words explicit, or converting a percentage to a decimal, to be ‘modification’. I also added a third scenario to demonstrate a compatible alternative.
I have concluded that the opening post contains a trivial error that is unlikely to be corrected. The worst place for errors is in an official-looking context with otherwise high-quality content, so I have reversed my upvote.
Unless you consider making the ‘e’ that you expressed in words explicit …
Ah! I see what you mean, and now agree that you didn’t modify the scenario. So I can better understand, when you say
You can not shut up and multiply (except when the agents whims it as such).
does the “except” clause mean “except when the agent is VNM rational”, or some similar assumption?
If your point is that I can’t deduce you are VNM-rational from your response to the extinction question alone, then you’re definitely right. If I came across to the contrary, I should reword the post.
What I meant to point out in that post paragraph is just that
1) there is no need to be “freaked out” by assigning finite utilities to huge events, because
2) even if you insist on treating them as “infinitely more important” than everyday events, you can just use Hausner utility to break ties in more trivial decisions (where the EU_existential values are equal, or “noisily indistinguishable” due, for example, to a precise time limit on a decision, after which time you’ve failed to distinguish existential risks from noise); a rough sketch of this tie-breaking appears below. I consider this an argument in favor of using in-context VNM utility normatively, since it is simpler than Hausner, and differs from it rarely, over relatively unimportant matters in the context.
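To make the tie-breaking idea concrete, here is a minimal sketch of a two-level lexicographic (Hausner-style) comparison: expected “existential” utility dominates, and expected everyday utility only decides when the existential expectations are indistinguishable within some noise tolerance. The lottery representation, the `noise` parameter, and the function names are all illustrative assumptions, not anyone’s canonical implementation.

```python
# Sketch only: two-level lexicographic comparison of lotteries.
# Lottery format and noise tolerance are assumed for illustration.

def expected(values, probs):
    """Expected value of a discrete lottery."""
    return sum(v * p for v, p in zip(values, probs))

def hausner_prefer(lottery_a, lottery_b, noise=1e-9):
    """Return 'a', 'b', or 'indifferent'.

    Existential expected utility is compared first; mundane expected
    utility only breaks (approximate) ties.
    """
    ea = expected(lottery_a['existential'], lottery_a['probs'])
    eb = expected(lottery_b['existential'], lottery_b['probs'])
    if abs(ea - eb) > noise:          # existential stakes are distinguishable
        return 'a' if ea > eb else 'b'
    ma = expected(lottery_a['mundane'], lottery_a['probs'])
    mb = expected(lottery_b['mundane'], lottery_b['probs'])
    if abs(ma - mb) > noise:          # break the tie with everyday value
        return 'a' if ma > mb else 'b'
    return 'indifferent'

# Example: identical existential impact, so the everyday payoff decides.
option_a = {'probs': [0.5, 0.5], 'existential': [0.0, 0.0], 'mundane': [1.0, 3.0]}
option_b = {'probs': [1.0],      'existential': [0.0],      'mundane': [1.5]}
print(hausner_prefer(option_a, option_b))  # -> 'a' (expected mundane 2.0 vs 1.5)
```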
I want to convince as many people as possible to use VNM utility confidently. I’d value any suggestions you have to rephrase the post to that effect, since (I think) you noticed the above unclarity in the first place.
If your point is that I can’t deduce you are VNM-rational from your response to the extinction question (and further questions) alone? Then you’re definitely right. If I came across to the contrary, I should reword the post.
Exactly! Perhaps just change “I’d say you assign a higher” to “then maybe you assign a higher”. That would be enough to stop people (me) twisting their minds in a knot trying to work out whether there is an oversight or whether they just don’t understand the whole VNM thing. I would also add something like “(or perhaps you just value unpredictability in your impending doom!)”, which would help the curious mind (or, me again!) to confirm their understanding of how the VNM structure allows for valuing uncertain events.
I think I agree with your point. I am dubious about the whole ‘infinitely more important’ thing, but if VNM handles that in a sane way then the problem of beliefs about infinity rests on the agent and his map, and VNM can just handle whatever values and expectations the agent happens to have.
(Thanks for taking the time to take another look at what I was trying to say. It isn’t often that I see conversations that go “No → huh? Actually. → wtf? No really” resolve themselves to agreement. So my prediction is one I am glad to be mistaken on!)
Okay, I rewrote the ending, and added a footnote about “valuing uncertainty”. Thanks for all the input!
Maybe it helps that we’re both more concerned with figuring stuff out than “winning an argument” :)