I think your original post would have been better if it had included any of the arguments against utility functions, such as those you mention under “e.g.” here.
Besides making for a more meaningful post, that would also let us discuss your specific points. For example, without more detail, I can’t tell whether your last comment is addressed sufficiently by the standard equivalence of normal-form and extensive-form games.
Essentially every post would have been better if it had included some additional thing. Based on various recent comments, I was under the impression that people want more posts in Discussion, so I’ve been experimenting with that, and I’m keeping the bar for quality deliberately low so that I’ll post at all.
I appreciate you writing this way. Speaking for myself, I’m perfectly happy with a short opening claim whose subtleties and evidence emerge in the following comments. A dialogue can be a better way to illuminate a topic than a long, comprehensive essay.
Let me rephrase: would you like to describe your arguments against utility functions in more detail?
For example, as I mentioned, there’s an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive form to normal form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the vNM setup.
The standard response to the point about knowing probabilities exactly, and to concerns about computational complexity, is (in essence) that we may want to set aside those epistemic concerns and simply learn what we can from a theory that is not troubled by them (a la air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges? And if so, is the objection to the normal-form assumption essentially the same?
For example, as I mentioned, there’s an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive form to normal form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the vNM setup.
Can you give more details here? I’m not familiar with extensive-form vs. normal-form games.
The standard response to the point about knowing probabilities exactly, and to concerns about computational complexity, is (in essence) that we may want to set aside those epistemic concerns and simply learn what we can from a theory that is not troubled by them (a la air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges?
Something like that. It seems like the computational concerns are extremely important: after all, a theory of morality should ultimately output actions, and to output actions in the context of a utility function-based model you need to be able to actually calculate probabilities and utilities.
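To make the point concrete, here is a minimal sketch (toy names and types of my own, not anything standard) of what a utility-function-based chooser has to be handed before it can output even one action; the argmax at the end is the easy part.

```haskell
-- Toy sketch (my own names/types): choosing an action by maximizing
-- expected utility. Note everything it demands as input.
import Data.List (maximumBy)
import Data.Ord (comparing)

type Prob    = Double
type Utility = Double

-- Expected utility of one action, given its outcome distribution.
expectedUtility :: (outcome -> Utility) -> [(outcome, Prob)] -> Utility
expectedUtility u dist = sum [p * u o | (o, p) <- dist]

-- To "output an action" we must already have in hand: the full list of
-- actions, a probability for every outcome of each, and a utility for
-- every outcome. The maximization itself is trivial by comparison.
bestAction :: (outcome -> Utility) -> [(action, [(outcome, Prob)])] -> action
bestAction u options = fst (maximumBy (comparing (expectedUtility u . snd)) options)
```

All of the difficulty is in producing that outcome/probability table and the utility function in the first place, which is exactly where the computational worry bites.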
Sure. Say you have to make some decision now, and you will be asked to make a decision later about something else. Your decision later may depend on your decision now as well as on parts of the world that you don’t control, and you may learn new information from the world in the meantime. Then the usual way of rolling all of that up into a single decision now is to make your current decision together with a decision about how you would act in the future for every possible change in the world and every possible piece of information you might gain.
This is vaguely analogous to how you can curry a function of multiple arguments: taking one argument X and returning (a function of one argument Y that returns Z) is equivalent to taking two arguments X and Y and returning Z.
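If it helps, here’s a minimal sketch of that shape (toy types of my own, purely illustrative):

```haskell
-- Toy sketch (my own names/types) of the extensive-form / normal-form idea.

-- A "plan made at the beginning of time": your first action, plus a full
-- contingency table saying what you would do for every possible observation.
type Strategy act1 obs act2 = (act1, obs -> act2)

-- Given such a plan and the way the world responds to your first move,
-- the whole interaction unfolds with no further deciding "as you go".
playOut :: Strategy act1 obs act2 -> (act1 -> obs) -> (act1, obs, act2)
playOut (first, contingency) world =
  let seen = world first
  in  (first, seen, contingency seen)

-- The currying analogy, using the Prelude's own functions:
--   curry   :: ((a, b) -> c) -> a -> b -> c
--   uncurry :: (a -> b -> c) -> (a, b) -> c
-- Deciding later, once the observation arrives, carries the same
-- information as committing now to the function (obs -> act2).
-- (With n possible observations and k possible later actions there are
-- k^n distinct contingency tables, which is where the blowup discussed
-- below comes from.)
```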
There’s potentially a huge computational complexity blowup here, which is why I stressed mathematical equivalence in my posts.
Then the usual way of rolling all of that up into a single decision now is to make your current decision together with a decision about how you would act in the future for every possible change in the world and every possible piece of information you might gain.
Thanks for the explanation! It seems pretty clear to me that humans don’t even approximately do this, though.
Sounds not very feasible...