> Yes, whether a set of weights leads to Pareto-dominance depends logically on the shape of the Pareto frontier. So the theorem does not help with the computational part of figuring out what one’s values are.

Do you mean “figuring out what one’s weights are”? Assuming yes, I think my point was a bit stronger than that: there’s not necessarily a reason to figure out the weights at all, if in order to figure them out you first have to come to a decision using some other procedure.
> Sticking with B by default sounds reasonable, except when we know something about the ways in which B falls short of optimality and the ways in which B takes dynamical consistency issues into account.

I think there are probably local Pareto improvements that we can make to B, but that’s very different from switching to A (which is what your OP was arguing for).
> E.g., I can pretty confidently recommend that minor philanthropists donate all their charity to the single best cause, modulo a number of important caveats and exceptions. It’s natural to feel that one should diversify one’s (altruistic, outcome-oriented) giving;

I agree this seems like a reasonable improvement to B, but I’m not sure what relevance your theorem has for it. You may have to write that post you mentioned in the OP to explain.
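The single-best-cause recommendation above can be sketched numerically. A minimal, hypothetical Python example (the charity names, cost-effectiveness figures, and budget are all made up, and it assumes a small donor’s marginal impact is linear in dollars, i.e., diminishing returns are ignored):

```python
# Hypothetical sketch: why an outcome-oriented minor donor shouldn't diversify,
# under the assumption that impact is linear in dollars at the margin.
# All names and numbers below are invented for illustration.

cost_effectiveness = {  # expected good done per dollar (made-up units)
    "charity_a": 3.0,
    "charity_b": 2.5,
    "charity_c": 1.0,
}

budget = 1000.0

def expected_impact(allocation):
    """Total expected impact of a dollar allocation across charities."""
    return sum(cost_effectiveness[c] * dollars for c, dollars in allocation.items())

# A diversified split vs. concentrating on the single best cause:
split = {"charity_a": 400.0, "charity_b": 300.0, "charity_c": 300.0}
concentrated = {"charity_a": budget, "charity_b": 0.0, "charity_c": 0.0}

print(expected_impact(split))         # 2250.0
print(expected_impact(concentrated))  # 3000.0
```

With a linear objective the maximum is always at a corner, so any diversified split is weakly dominated by giving everything to the top charity; the “important caveats” (room for more funding, risk aversion, moral uncertainty) are exactly where linearity breaks down.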
> I tried not to claim too much in the OP. I hope no one reads this post and makes a really bad decision because of an overly naive expected-utility calculation.

Besides that, I’m concerned that many people seem convinced that VNM is rationality and are working hard to justify it, instead of working on a bunch of open problems that seem very important and interesting to me, one of which is what rationality actually is.
> Yes, whether a set of weights leads to Pareto-dominance depends logically on the shape of the Pareto frontier. So the theorem does not help with the computational part of figuring out what one’s values are.

> Do you mean “figuring out what one’s weights are”?

Yes.

> Assuming yes, I think my point was a bit stronger than that, namely there’s not necessarily a reason to figure out the weights at all, if in order to figure out the weights, you actually have to first come to a decision using some other procedure.

I think any disagreement we have here is subsumed by our discussion elsewhere in this thread.

> I think there’s probably local Pareto improvements that we can make to B, but that’s very different from switching to A (which is what your OP was arguing for).

Perhaps I will write that philanthropy post, and then we will have a concrete example to discuss.

> Besides that, I’m concerned about many people seemingly convinced that VNM is rationality and working hard to try to justify it, instead of working on a bunch of open problems that seem very important and interesting to me, one of which is what rationality actually is.

I appreciate your point.

ETA: Wei_Dai and I determined that part of our apparent disagreement came from the fact that an agent with a policy that happens to optimize a function does not need to use a decision algorithm that computes expected values.
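The distinction in the ETA can be illustrated with a toy sketch (the world model, actions, probabilities, and utilities below are all made up): two agents can have the same policy while only one of them actually computes expected values.

```python
# Hypothetical sketch of the ETA's point: an agent whose *policy* maximizes
# expected utility need not *compute* expected utilities anywhere.
# All numbers and action names are invented for illustration.

outcomes = {  # action -> list of (probability, utility) pairs
    "safe":  [(1.0, 5.0)],
    "risky": [(0.5, 0.0), (0.5, 12.0)],
}

def eu_maximizer(world):
    """Decision algorithm that explicitly computes expected utilities."""
    return max(world, key=lambda a: sum(p * u for p, u in world[a]))

def table_agent(world):
    """A hard-coded lookup agent: no probabilities or utilities appear
    in its decision procedure, yet its policy coincides with the
    EU maximizer's in this world."""
    return "risky"

print(eu_maximizer(outcomes))  # risky (EU 6.0 beats safe's 5.0)
print(table_agent(outcomes))   # risky
```

Both agents’ behavior optimizes the same function here, but only the first runs anything resembling an expected-value calculation, which is the sense in which optimizing behavior underdetermines the decision algorithm.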