If your point is that I can’t deduce you are VNM-rational from your response to the extinction question (and further questions) alone? Then you’re definitely right. If I came across to the contrary, I should reword the post.
Exactly! Perhaps just change “I’d say you assign a higher” to “then maybe you assign a higher”. That would be enough to stop people (me) twisting their minds into knots trying to work out whether there is an oversight or whether they just don’t understand the whole VNM thing. I would also add something like “(or perhaps you just value unpredictability in your impending doom!)”, which would help the curious mind (or, me again!) confirm their understanding of how the VNM structure allows for valuing uncertain events.
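(To spell out what I mean by that last point, here is a toy sketch, with outcome labels I'm making up for illustration, of how "valuing unpredictability" can be folded into the outcomes the VNM utility function ranges over rather than conflicting with it:)

```latex
% Toy illustration (hypothetical labels, not from the post): let the outcomes be
%   d_f = "doom, foreseen",   d_u = "doom, unforeseen",   s = "survival",
% and compare the lotteries
%   L_1 = 1/2 d_f + 1/2 s   (doom, if it comes, is predicted)
%   L_2 = 1/2 d_u + 1/2 s   (doom, if it comes, arrives unpredicted).
% An agent who "values unpredictability in their impending doom" simply has
% u(d_u) > u(d_f), and plain expected utility then prefers L_2 at equal probability:
U(L_1) = \tfrac{1}{2}\,u(d_f) + \tfrac{1}{2}\,u(s), \qquad
U(L_2) = \tfrac{1}{2}\,u(d_u) + \tfrac{1}{2}\,u(s), \qquad
U(L_2) > U(L_1) \iff u(d_u) > u(d_f).
```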
I think I agree with your point. I am dubious about the whole ‘infinitely more important’ thing, but if VNM handles that in a sane way, then the problem of beliefs about infinity rests on the agent and his map, and VNM can just handle whatever values and expectations the agent happens to have.
(Thanks for taking the time to take another look at what I was trying to say. It isn’t often that I see conversations that go “No → huh? Actually. → wtf? No really” resolve themselves into agreement. So my prediction is one I am glad to be mistaken on!)
Okay, I rewrote the ending, and added a footnote about “valuing uncertainty”. Thanks for all the input!
Maybe it helps that we’re both more concerned with figuring stuff out than “winning an argument” :)