Yes, whether a given set of weights leads to Pareto dominance depends on the shape of the Pareto frontier. So the theorem does not help with the computational part of figuring out what one's values are.
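To make the frontier-shape dependence concrete, here is a minimal sketch in Python; the frontiers and weight vectors are invented for illustration. On a convex frontier, every Pareto-optimal outcome maximizes some weighted sum of the objectives, while on a non-convex frontier a Pareto-optimal outcome can fail to maximize any weighted sum, so what a given set of weights selects depends entirely on the frontier's shape.

```python
# A minimal sketch, assuming two objectives and a finite set of
# candidate outcomes standing in for the Pareto frontier. All points
# and weights here are made up for illustration.

def weighted_best(points, w):
    """Return the outcome maximizing the w-weighted sum of objectives."""
    return max(points, key=lambda p: w[0] * p[0] + w[1] * p[1])

# Convex frontier: every Pareto-optimal point wins under some weights.
convex = [(0.0, 1.0), (0.6, 0.8), (0.8, 0.6), (1.0, 0.0)]

# Non-convex frontier: (0.4, 0.4) is Pareto-optimal (neither extreme
# point dominates it), yet no weights ever select it, since
# max(a, 1 - a) >= 0.5 > 0.4 for every weight vector (a, 1 - a).
nonconvex = [(0.0, 1.0), (0.4, 0.4), (1.0, 0.0)]

weights = [(a, 1.0 - a) for a in (0.1, 0.3, 0.5, 0.7, 0.9)]
for name, pts in [("convex", convex), ("non-convex", nonconvex)]:
    selected = {weighted_best(pts, w) for w in weights}
    print(name, "frontier, outcomes selected by some weights:", sorted(selected))
```

Running this selects every point on the convex frontier as the weights sweep, but never the middle point of the non-convex one.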
Do you mean “figuring out what one’s weights are”?
Yes
Assuming yes, I think my point was a bit stronger than that: there's not necessarily a reason to figure out the weights at all, if in order to figure them out you first have to reach the decision using some other procedure.
I think any disagreement we have here is subsumed by our discussion elsewhere in this thread.
I think there are probably local Pareto improvements that we can make to B, but that's very different from switching to A (which is what your OP was arguing for).
Perhaps I will write that philanthropy post, and then we will have a concrete example to discuss.
Besides that, I'm concerned that many people seem convinced that VNM is rationality and are working hard to justify it, instead of working on a number of open problems that strike me as very important and interesting, one of which is what rationality actually is.
I appreciate your point.
ETA: Wei_Dai and I determined that part of our apparent disagreement came from the fact that an agent whose policy happens to optimize some function need not use a decision algorithm that explicitly computes expected values.
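A minimal sketch of that point, using a made-up toy environment (the states, actions, and payoff probabilities below are all hypothetical): one agent explicitly computes expected utilities and maximizes, the other is a bare lookup table, yet their input-output behavior is identical.

```python
# Hypothetical toy environment: in each state, an action pays
# utility 1 with the given probability, 0 otherwise.
PAYOFF_PROB = {
    ("sunny", "picnic"): 0.9, ("sunny", "museum"): 0.4,
    ("rainy", "picnic"): 0.1, ("rainy", "museum"): 0.7,
}

def eu_agent(state):
    """Explicitly computes each action's expected utility, then maximizes."""
    actions = {a for (s, a) in PAYOFF_PROB if s == state}
    return max(actions, key=lambda a: PAYOFF_PROB[(state, a)] * 1.0)

# A bare lookup table: no probabilities, no expectations, yet its
# policy optimizes exactly the same function.
POLICY_TABLE = {"sunny": "picnic", "rainy": "museum"}

def table_agent(state):
    return POLICY_TABLE[state]

for state in ("sunny", "rainy"):
    assert eu_agent(state) == table_agent(state)
print("Same optimal policy; only one agent ever computes an expectation.")
```

The table agent's policy optimizes expected utility, but nothing in its decision algorithm computes, or even represents, an expectation.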