I guess the key question here is whether the weights ought to logically depend on the actual shape of the Pareto frontier.
Yes, whether a set of weights leads to Pareto-dominance depends logically on the shape of the Pareto frontier. So the theorem does not help with the computational part of figuring out what one’s values are.
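To make the kind of dependence I have in mind concrete, here’s a minimal sketch; the two frontiers, the weights, and the `best_point` helper are all invented for illustration:

```python
# Minimal sketch (invented numbers): which point a given set of weights picks
# out -- and hence whether a particular choice can be rationalized by those
# weights -- depends on the shape of the Pareto frontier.

def best_point(frontier, weights):
    """Return the frontier point that maximizes the weighted sum of its coordinates."""
    return max(frontier, key=lambda p: sum(w * x for w, x in zip(weights, p)))

# Two hypothetical two-dimensional Pareto frontiers; every point in each list
# is Pareto-optimal within that list.
frontier_1 = [(1.0, 0.0), (0.7, 0.7), (0.0, 1.0)]
frontier_2 = [(1.0, 0.0), (0.3, 0.3), (0.0, 1.0)]

weights = (0.5, 0.5)
print(best_point(frontier_1, weights))  # (0.7, 0.7): the "compromise" point wins
print(best_point(frontier_2, weights))  # (1.0, 0.0): no normalized nonnegative weights make (0.3, 0.3) the maximizer
```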
I have a choice between (A) a solution known to be optimal along some dimensions not including considerations of logical uncertainty and dynamical consistency, and (B) a very imperfectly optimized solution that nevertheless probably does take them into account to some degree (i.e., the native decision-making machinery that evolution gave me). Sticking with B for now doesn’t seem unreasonable to me.
Sticking with B by default sounds reasonable except when we know something about the ways in which B falls short of optimality and the ways in which B takes dynamical consistency issues into account. E.g., I can pretty confidently recommend that minor philanthropists donate all their charity to the single best cause, modulo a number of important caveats and exceptions. It’s natural to feel that one should diversify their (altruistic, outcome-oriented) giving; but once one sees the theoretical justification for single-cause giving under ideal conditions, and explains away one’s intuitions as coming from motives one doesn’t endorse and from heuristics that work okay in the EEA (the environment of evolutionary adaptedness) but not on this particular problem, I think one has a good reason to go with choice A.
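To spell out the ideal-conditions argument a little: if a small donor’s marginal impact on each cause is roughly linear in dollars, total impact is a linear function of the allocation, and a linear function of a fixed budget is maximized by giving everything to the cause with the highest marginal impact. Here’s a toy sketch; the causes and impact-per-dollar figures are invented:

```python
# Toy sketch of the single-cause argument under idealized (approximately
# linear) conditions; the causes and impact-per-dollar figures are invented.

marginal_impact = {"cause_a": 3.0, "cause_b": 5.0, "cause_c": 4.0}  # impact per dollar
budget = 1000.0

def total_impact(allocation):
    """Total impact of an allocation {cause: dollars}, assuming linear returns."""
    return sum(marginal_impact[c] * x for c, x in allocation.items())

best_cause = max(marginal_impact, key=marginal_impact.get)
single_cause = {c: (budget if c == best_cause else 0.0) for c in marginal_impact}
diversified = {c: budget / len(marginal_impact) for c in marginal_impact}

print(total_impact(single_cause))  # 5000.0: everything to the best cause
print(total_impact(diversified))   # 4000.0: splitting evenly loses impact under these assumptions
```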
Even then, the philanthropist still has to decide which cause to donate to. It’s possible that once they believe they should construct a utility function for a particular domain, they’ll be able to use other tools to come up with a utility function they’re happy with. But this theorem doesn’t guarantee that.
I tried not to claim too much in the OP. I hope no one reads this post and makes a really bad decision because of an overly-naive expected-utility calculation.
Yes, whether a set of weights leads to Pareto-dominance depends logically on the shape of the Pareto frontier. So the theorem does not help with the computational part of figuring out what one’s values are.
Do you mean “figuring out what one’s weights are”? Assuming yes, I think my point was a bit stronger than that: there’s not necessarily a reason to figure out the weights at all if, in order to figure them out, you first have to come to a decision using some other procedure.
Sticking with B by default sounds reasonable except when we know something about the ways in which B falls short of optimality and the ways in which B takes dynamical consistency issues into account.
I think there are probably local Pareto improvements that we can make to B, but that’s very different from switching to A (which is what your OP was arguing for).
E.g., I can pretty confidently recommend that minor philanthropists donate all their charity to the single best cause, modulo a number of important caveats and exceptions. It’s natural to feel that one should diversify their (altruistic, outcome-oriented) giving;
I agree this seems like a reasonable improvement to B, but I’m not sure what relevance your theorem has for it. You may have to write that post you mentioned in the OP to explain.
I tried not to claim too much in the OP. I hope no one reads this post and makes a really bad decision because of an overly-naive expected-utility calculation.
Besides that, I’m concerned that many people seem convinced that VNM is rationality and are working hard to try to justify it, instead of working on a bunch of open problems that seem very important and interesting to me, one of which is what rationality actually is.
Yes, whether a set of weights leads to Pareto-dominance depends logically on the shape of the Pareto frontier. So the theorem does not help with the computational part of figuring out what one’s values are.
Do you mean “figuring out what one’s weights are”?
Yes
Assuming yes, I think my point was a bit stronger than that: there’s not necessarily a reason to figure out the weights at all if, in order to figure them out, you first have to come to a decision using some other procedure.
I think any disagreement we have here is subsumed by our discussion elsewhere in this thread.
I think there are probably local Pareto improvements that we can make to B, but that’s very different from switching to A (which is what your OP was arguing for).
Perhaps I will write that philanthropy post, and then we will have a concrete example to discuss.
Besides that, I’m concerned that many people seem convinced that VNM is rationality and are working hard to try to justify it, instead of working on a bunch of open problems that seem very important and interesting to me, one of which is what rationality actually is.
I appreciate your point.
ETA: Wei_Dai and I determined that part of our apparent disagreement came from the fact that an agent with a policy that happens to optimize a function does not need to use a decision algorithm that computes expected values.
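For concreteness, here’s a toy illustration of that point (the lotteries, utilities, and names are invented): an agent defined by a lookup table can produce exactly the same choices as one that explicitly computes expected utilities.

```python
# Toy illustration (invented numbers): two agents with identical behavior on
# these choices, only one of which computes expected values internally.

utility = {"nothing": 0.0, "small_prize": 1.0, "big_prize": 10.0}

lotteries = {
    "safe":   [(1.0, "small_prize")],                   # guaranteed small prize
    "gamble": [(0.05, "big_prize"), (0.95, "nothing")],  # 5% chance of a big prize
}

def table_agent(option_a, option_b):
    """A hard-coded policy: no expected values are computed anywhere."""
    table = {("safe", "gamble"): "safe", ("gamble", "safe"): "safe"}
    return table[(option_a, option_b)]

def eu_agent(option_a, option_b):
    """An agent that explicitly computes and compares expected utilities."""
    def eu(name):
        return sum(p * utility[outcome] for p, outcome in lotteries[name])
    return option_a if eu(option_a) >= eu(option_b) else option_b

print(table_agent("safe", "gamble"))  # safe
print(eu_agent("safe", "gamble"))     # safe: EU(safe) = 1.0 > EU(gamble) = 0.5
```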