General rationality comment: from where I’m standing, Nisan, gwern, and benelliott have all correctly pointed out the mistake you’re making, and you don’t seem to be updating at all. Why is this? (For the record, I agree with them.)
Because in my view they did not correct any mistake I made; they're avoiding the core problem, using rhetorical tricks such as wordplay, irony, strawmen, or ad hominem instead. And I'm very disappointed to see the conversation go this way, I wasn't expecting that from LW. I was expecting people to disagree with me (most people here think VNM is justified), but I was expecting a constructive discussion, not such a bashing.
I don’t see Nisan or benelliott’s first comment doing any of that. (gwern could stand to be more civil.) What do you think the core problem is, and in what sense are the other comments avoiding it?
Perhaps I should elaborate on what I think the mistake is. First, let me tell you about when I made the same mistake. I once tried to point out that a rational agent may not want to do what it believes is rational if there are other agents around because it may not want to reveal to other agents information about itself. For example, if I were trying to decide between A = a trip to Ecuador and B = a trip to Iceland, I might prefer A to B but decide on B if I thought I was being watched by spies who were trying to ascertain my travel patterns and use this information against me in some way.
Someone else correctly pointed out that in this scenario I was not choosing between A and B, but between A’ = “a trip to Ecuador that spies know about” and B’ = “a trip to Iceland that spies know about,” which is a different choice. I tried to introduce a new element into the scenario without thinking about whether that element affected the outcomes I was choosing between.
I think you’re making the same kind of mistake. You start with a statement about your preferences regarding trips to Ecuador and Iceland and a new laptop, but then you introduce a new element, namely preparation time, without considering how it affects the outcomes you’re choosing between. As Nisan points out, as an agent in a timeline, you’re choosing between different world-histories, and those world-histories are more than just the instants where you get the trip or laptop. As benelliott points out, once you’ve introduced preparation time, you need to re-specify your outcomes in more detail to accommodate that, e.g. you might re-specify option E as E’ = “preparation time for a trip + 50% chance the trip is to Iceland and 50% chance the trip is to Ecuador.” And as gwern points out, any preparation you make for a trip when choosing option D will still pay off with probability 50%; you just need to consider whether that’s worth what will happen if you prepare for a trip that doesn’t happen, which is a perfectly ordinary expected value calculation.
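To make that last point concrete, here is a minimal sketch of the expected value calculation with made-up utility numbers (the figures below are assumptions for illustration, not anything from the thread):

\[
\begin{aligned}
&u(\text{trip, prepared}) = 10,\quad u(\text{trip, unprepared}) = 7,\quad u(\text{laptop}) = 8,\quad \text{preparation cost} = 1\\
&E[u \mid D,\ \text{prepare}] = 0.5 \cdot 10 + 0.5 \cdot 8 - 1 = 8\\
&E[u \mid D,\ \text{don't prepare}] = 0.5 \cdot 7 + 0.5 \cdot 8 = 7.5
\end{aligned}
\]

With these numbers, preparing is worth it even though the trip only happens with probability 0.5; with other numbers it might not be. Either way it is an ordinary expected value comparison, not a strain on the axioms.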
Maybe the problem comes from my understanding of what the “alternative”, “choice” or “act” in the VNM axioms is.
To me it’s a single, atomic real-world choice you have to make: you’re offered a clear choice between options, and you have to select one. Like you’re offered a lottery ticket, and you can decide to buy it or not. Or, to use my original example, A = “in two months you’ll be given a voucher to go to Ecuador”, B = “in two months you’ll be given a laptop” and C = “in two months you’ll be given a voucher to go to Iceland”. And the independence axiom says that, over those choices, if I choose B over C, then I must choose (0.5A, 0.5B) over (0.5A, 0.5C). In my original understanding, things like “preparation” or “what I would do with the money if I win the lottery” are things I’m free to evaluate in order to choose A, B or C, but they aren’t part of A, B or C.
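For reference, the standard textbook statement of the independence axiom over lotteries (not anything specific to this thread) is:

\[
L \succ M \iff pL + (1-p)N \succ pM + (1-p)N \quad \text{for all lotteries } N \text{ and all } p \in (0,1].
\]

With L = B, M = C, N = A and p = 0.5, this is exactly the instance above: if B is chosen over C, then (0.5A, 0.5B) must be chosen over (0.5A, 0.5C). The axiom itself says nothing about what the objects A, B, C are; the dispute here is about how real-world situations get mapped onto them.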
The “world histories” view of benelliott seems to fix the problem at first glance, but to me it makes things even worse. If what you’re choosing is not individual actions but whole “world histories”, then the independence axiom isn’t false, it doesn’t even make sense to me, because the whole “world history” is necessarily different: the world history when offered the choice between B and C is in fact B’ = “B and knowing you had to choose between B and C” vs C’ = “C and knowing you had to choose between B and C”, while the choice between D = (0.5A, 0.5B) and E = (0.5A, 0.5C) is in fact (0.5A² = “A and knowing you had to choose between D and E”, 0.5B² = “B and knowing you had to choose between D and E”) vs (0.5A², 0.5C² = “C and knowing you had to choose between D and E”).
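Spelling that relabeling out in one place (this just restates the objection; here K_BC abbreviates “knowing you had to choose between B and C” and K_DE likewise for D and E):

\[
\begin{aligned}
\text{simple choice:}\quad & B' = (B, K_{BC}) \ \text{vs.}\ C' = (C, K_{BC})\\
\text{mixed choice:}\quad & 0.5\,(A, K_{DE}) + 0.5\,(B, K_{DE}) \ \text{vs.}\ 0.5\,(A, K_{DE}) + 0.5\,(C, K_{DE})
\end{aligned}
\]

Since (B, K_BC) and (B, K_DE) are different world-histories, the premise “B is preferred to C” and the conclusion about the mixtures are never about the same outcomes, which is the sense in which the axiom “doesn’t even make sense” on this reading.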
So, how do you define those (A, B, C) in the independence axiom (and the other axioms) so that they don’t fall prey to the first problem, without making them factor in the whole state of the world, in which case you can’t even formulate the axiom?
“To me it’s a single, atomic real-world choice you have to make:”
To you it may be this, but the fact that this leads to an obvious absurdity suggests that this is not how most proponents think of it, or how its inventors thought of it.
I agree that things get complicated. In the worst case, you really do have to take the entire state of the world into consideration, including your own memory. For the sake of simple toy models, you can pretend that your memory is wiped after you make the choice so you don’t remember making it.
The collision I’m seeing is between formal, mathematical axioms and English-language usage. It’s clear that benelliott is thinking of the axiom in mathematical terms: dry, inarguable, much like the independence axioms of probability theory, just statements about abstract sets. This is correct: the proper formulation of VNM is abstract and mathematical.
Kilobug is right to note that information has value and ignorance has a cost. But that doesn’t subvert the axiom: the axioms, as mathematical statements, are correct by definition. What was incorrect was the way they were mapped onto the example: the choices aren’t truly independent.
It’s also become clear that risk-aversion is essentially the same idea as “information has value”: people who are risk-averse are people who value certainty. This observation alone may well be enough to ‘explain’ the Allais paradox: the certainty of the ‘sure thing’ is worth something. All that the Allais experiment does is measure the value of certainty.
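For readers who don’t have the Allais setup in mind, the standard numbers (textbook values, not from this thread) are:

\[
\begin{aligned}
\text{1A: } & \$1\text{M with certainty} & \qquad \text{1B: } & 0.89\ \$1\text{M},\ 0.10\ \$5\text{M},\ 0.01\ \text{nothing}\\
\text{2A: } & 0.11\ \$1\text{M},\ 0.89\ \text{nothing} & \qquad \text{2B: } & 0.10\ \$5\text{M},\ 0.90\ \text{nothing}
\end{aligned}
\]

Each pair differs only in a common 89% component ($1M in the first pair, nothing in the second), so by independence anyone who prefers 1A must also prefer 2A. The commonly observed pattern of choosing 1A and 2B is what the comment above reads as a measured price for certainty.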