What does it mean for a decision algorithm to fail?
When it makes decisions that are undesirable. There is no point in running a decision algorithm that is perfectly consistent but produces outcomes you don’t want.
In the case of the Omega’s-life-tickets scenario, one could argue it fails in an objective sense since it will never stop buying tickets until it dies. But that wasn’t even the point I was trying to make.
And an unbelievable premise leads to an unbelievable conclusion.
I don’t know if there is a name for this fallacy, but there should be: objecting to the premises of a hypothetical situation that is intended only to demonstrate a point. E.g. people who refuse to answer the trolley dilemma and instead say “but that will probably never happen!” It’s very frustrating.
EU is not “required” to trade away huge amounts of outcome-space for really good but improbable outcomes. EU applies preference models to novel situations, not to produce preferences but to preserve them. If you give EU a preference model that matches your preferences, it will preserve the match and give you actions that best satisfy your preferences under the uncertainty model of the universe you gave it.
This is very subtle circular reasoning. If you assume your goal is to maximize the expected value of some utility function, then maximizing expected utility can do that if you specify the right utility function.
What I’ve been saying from the very beginning is that there isn’t any reason to believe there is any utility function that will produce desirable outcomes if fed to an expected utility maximizer.
I think a decision algorithm fails if it makes you predictably worse off than an alternative algorithm
Even if you are an EU maximizer, EU will make you “predictably” worse off, in the sense that in the majority of cases you will be worse off. A true EU maximizer doesn’t care, so long as the utility of the very low-probability outcomes is high enough.
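For concreteness, a toy calculation (the numbers are made up, not taken from any of the scenarios above): an expected utility maximizer offered a sure utility of 5 or a 0.1% shot at a utility of 10,000 takes the long shot, even though it walks away with nothing 99.9% of the time.

```python
# Toy illustration: an EU maximizer prefers a long-shot gamble whenever
# its probability-weighted utility exceeds that of the sure thing.
# All numbers are made up for illustration.

gambles = {
    "sure_thing": [(1.0, 5.0)],            # (probability, utility) pairs
    "long_shot":  [(0.001, 10_000.0),      # pays off 0.1% of the time
                   (0.999, 0.0)],          # pays nothing 99.9% of the time
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

print({name: expected_utility(o) for name, o in gambles.items()})
# {'sure_thing': 5.0, 'long_shot': 10.0}

best = max(gambles, key=lambda name: expected_utility(gambles[name]))
print(best)  # 'long_shot' -- chosen despite being worse in the vast majority of cases
```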
One name is fighting the hypothetical, and it’s worth taking a look at the least convenient possible world and the true rejection as well.
There are good and bad reasons to fight the hypothetical. When it comes to these particular problems, though, the objections I’ve given are my true objections. The reason I’d only pay a tiny amount of money for the gamble in the St. Petersburg Paradox is that there is only so much financial value that the house can give up. One of the reasons I’m sure this is my true objection is that the richer the house, the more I would pay for such a gamble. (Because there are no infinitely rich houses, there is no one I would pay an infinite amount to for such a gamble.)
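To sketch why the house’s wealth matters (a rough calculation, assuming the payout is simply capped at the house’s bankroll, which is an assumption for illustration rather than a claim about how any real house settles the bet): if the house can pay out at most about 2^k dollars, the expected value of the gamble is roughly k + 1 dollars, so the fair price grows only logarithmically with the house’s wealth and is never infinite for any real house.

```python
# Rough sketch: expected payout of the St. Petersburg gamble (payout 2**n
# with probability 2**-n) when the house can pay at most `bankroll` dollars.
# Payouts above the bankroll are simply capped.

def capped_st_petersburg_ev(bankroll, max_flips=200):
    ev = 0.0
    for n in range(1, max_flips + 1):
        payout = min(2 ** n, bankroll)
        ev += payout / 2 ** n
        if 2 ** n >= bankroll:
            # Every later round also pays the capped amount; the tail's
            # total contribution is bankroll * 2**-n.
            ev += bankroll / 2 ** n
            break
    return ev

for bankroll in (100, 10_000, 1_000_000, 10 ** 12):
    print(bankroll, round(capped_st_petersburg_ev(bankroll), 2))
# The expected value grows roughly like log2(bankroll) + 1: a richer house
# justifies a higher price, but never an infinite one.
```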
This is very subtle circular reasoning. If you assume your goal is to maximize the expected value of some utility function, then maximizing expected utility can do that if you specify the right utility function.
I’m not sure why you think it’s subtle—I started off this conversation with:
This might sound silly, but it’s deeper than it looks: the reason we use the expected value of utility (i.e. the mean) to determine the best of a set of gambles is that utility is defined as the thing that you maximize the expected value of.
But I don’t think it’s quite right to call it “circular,” for roughly the same reasons I don’t think it’s right to call logic “circular.”
What I’ve been saying from the very beginning is that there isn’t any reason to believe there is any utility function that will produce desirable outcomes if fed to an expected utility maximizer.
To make sure we’re talking about the same thing, I think an expected utility maximizer (EUM) is something that takes a function u(O) that maps outcomes to utilities, a function p(A->O) that maps actions to probabilities of outcomes, and a set of possible actions, and then finds the action, out of all possible A, that has the maximum weighted sum of u(O)p(A->O) over all possible O.
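A minimal sketch of that definition, with purely illustrative names and inputs (nothing here is meant to pin down anyone’s actual u(O) or p(A->O)):

```python
# Generic expected utility maximizer, following the definition above:
# for each action A, compute the weighted sum of u(O) * p(A->O) over all
# outcomes O, and return the action with the largest sum.

def expected_utility_maximizer(actions, outcomes, u, p):
    """
    actions:  iterable of possible actions A
    outcomes: iterable of possible outcomes O
    u:        maps an outcome to its utility, u(O)
    p:        maps (action, outcome) to a probability, p(A->O)
    """
    def expected_utility(action):
        return sum(u(o) * p(action, o) for o in outcomes)
    return max(actions, key=expected_utility)

# Toy inputs, purely for illustration:
outcomes = ["win_prize", "lose_ticket", "keep_money"]
utilities = {"win_prize": 99.0, "lose_ticket": -1.0, "keep_money": 0.0}
chances = {("buy_ticket", "win_prize"): 0.05,
           ("buy_ticket", "lose_ticket"): 0.95,
           ("walk_away", "keep_money"): 1.0}

action = expected_utility_maximizer(
    actions=["buy_ticket", "walk_away"],
    outcomes=outcomes,
    u=utilities.get,
    p=lambda a, o: chances.get((a, o), 0.0),
)
print(action)  # 'buy_ticket': EU = 0.05*99 + 0.95*(-1) = 4.0 vs. 0.0 for walking away
```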
So far, you have not been arguing that every possible EUM leads to pathological outcomes; you have been exhibiting particular combinations of u(O) and p(A->O) that lead to pathological outcomes, and I have been responding with “have you tried not using those u(O)s and p(A->O)s?”.
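As one standard illustration of “try a different u(O)” (this is the textbook log-utility response to St. Petersburg, not necessarily the u(O) either of you has in mind): with logarithmic utility over the payout, the gamble’s expected utility converges even if the house were infinitely rich.

```python
# Toy comparison of two u(O)s on the St. Petersburg gamble
# (payout 2**n with probability 2**-n). With linear utility the partial
# sums grow without bound; with log2 utility over the raw payout they
# converge to 2. Initial wealth and the ticket price are ignored here.

import math

def st_petersburg_eu(u, rounds):
    return sum(u(2 ** n) / 2 ** n for n in range(1, rounds + 1))

for rounds in (10, 30, 60):
    print(rounds,
          round(st_petersburg_eu(lambda x: x, rounds), 2),   # linear: equals `rounds`
          round(st_petersburg_eu(math.log2, rounds), 6))     # log2: approaches 2
```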
It doesn’t seem to me that this conversation is producing value for either of us, which suggests that we should either restart the conversation, take it to PMs, or drop it.