Maybe the problem comes from my understanding of what the “alternative”, “choice” or “act” in the VNM axioms is.
To me it’s a single, atomic real-world choice you have to make: you’re offered a clear choice between options, and you have to select one. For example, you’re offered a lottery ticket and you can decide whether or not to buy it. Or, to use my original example, A = “in two months you’ll be given a voucher to go to Ecuador”, B = “in two months you’ll be given a laptop”, and C = “in two months you’ll be given a voucher to go to Iceland”. The independence axiom then says that, over those choices, if I choose B over C, I must also choose (0.5A, 0.5B) over (0.5A, 0.5C). On my original understanding, things like “preparation” or “what I would do with the money if I win the lottery” are considerations I’m free to weigh when choosing A, B, or C, but they aren’t part of A, B, or C themselves.
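To pin down the reading I have in mind, here is a minimal sketch in Python. The utility numbers are made up purely for illustration; the only point is that under expected-utility maximization the common 0.5A branch cancels, so any ranking of B over C forces the same ranking of the mixtures.

```python
# A minimal sketch of the independence axiom over simple lotteries,
# assuming preferences are represented by some utility function u.
# The prizes and utility numbers are hypothetical, chosen for illustration.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# Hypothetical utilities over the three prizes.
u = {"Ecuador": 7.0, "laptop": 5.0, "Iceland": 3.0}

B = {"laptop": 1.0}
C = {"Iceland": 1.0}
D = {"Ecuador": 0.5, "laptop": 0.5}   # (0.5A, 0.5B)
E = {"Ecuador": 0.5, "Iceland": 0.5}  # (0.5A, 0.5C)

# EU(D) - EU(E) = 0.5 * (u["laptop"] - u["Iceland"]): the 0.5A term cancels,
# so the B-vs-C ranking and the D-vs-E ranking must agree, whatever u is.
assert (expected_utility(B, u) > expected_utility(C, u)) == \
       (expected_utility(D, u) > expected_utility(E, u))
```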
The “world histories” view of benelliott seems to fix the problem at first glance, but to me it makes things even worse. If what you’re choosing is not individual actions but whole “world histories”, then the independence axiom isn’t false—it doesn’t even make sense to me, because the whole world history is necessarily different in each case. When you’re offered the choice between B and C, the histories are really B′ = “B, and knowing you had to choose between B and C” vs C′ = “C, and knowing you had to choose between B and C”; when you’re offered the choice between D = (0.5A, 0.5B) and E = (0.5A, 0.5C), they are really (0.5A″, 0.5B″) vs (0.5A″, 0.5C″), where A″ = “A, and knowing you had to choose between D and E”, and likewise for B″ and C″.
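Here is a small sketch (again Python, with purely illustrative labels) of why this bothers me: once the choice context is folded into each outcome, the two comparisons don’t even share outcomes, so there is nothing common to cancel and the axiom’s antecedent says nothing about its consequent.

```python
# A sketch of the world-history framing: each outcome carries the choice
# you remember facing. The label format is an assumption made for illustration.

def with_context(prize, context):
    """A world-history outcome: the prize plus the choice you remember facing."""
    return f"{prize} | having faced {context}"

# Plain comparison: B' vs C'
B_prime = {with_context("laptop", "B vs C"): 1.0}
C_prime = {with_context("Iceland", "B vs C"): 1.0}

# Mixture comparison: D vs E, where every branch carries the D-vs-E context
D = {with_context("Ecuador", "D vs E"): 0.5, with_context("laptop", "D vs E"): 0.5}
E = {with_context("Ecuador", "D vs E"): 0.5, with_context("Iceland", "D vs E"): 0.5}

# The outcome sets are disjoint: the "laptop" history in D is not the same
# outcome as the "laptop" history in B', so preferring B' to C' does not,
# by itself, constrain the choice between D and E.
assert (set(B_prime) | set(C_prime)).isdisjoint(set(D) | set(E))
```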
So, how do you define those (A, B, C) in the independence axiom (and the other axioms) so that they don’t fall prey to the first problem, without making them factor in the whole state of the world, in which case you can’t even formulate the axiom?
To me it’s a single, atomic real-world choice you have to make:
To you it may be this, but the fact that this leads to an obvious absurdity suggests that this is not how most proponents think of it, or how its inventors thought of it.
I agree that things get complicated. In the worst case, you really do have to take the entire state of the world into consideration, including your own memory. For the sake of simple toy models, you can pretend that your memory is wiped after you make the choice so you don’t remember making it.