An attempt at rephrasing a shard theory critique of utility function reasoning, while restricting myself to things I basically agree with:
Yes, there are representation theorems saying that coherent behaviour is equivalent to optimizing some utility function. And yes, for the sake of discussion let’s say this extends to reward functions in the setting of sequential decision-making (even though I don’t remember seeing a theorem for that). But: just because there’s a mapping from agents to utility functions doesn’t mean we can pull back a uniform measure on utility/reward functions and get a reasonable measure on agents—those theorems don’t tell us to expect a uniform distribution over utility/reward functions, or even a nice one! They would if agents were born with utility functions in their heads represented as tables or something, where you could swap entries in different rows, but that’s not what the theorems say!
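Here is a toy sketch of the measure point (my own illustration, not part of the original argument: the one-parameter "agent space" and the map f are made up for concreteness). Even when the agent-to-utility mapping is a clean bijection, the distribution over utility functions induced by whatever distribution actually holds over agents can be wildly non-uniform, and assuming uniformity over utilities and pulling it back misdescribes which agents are likely:

```python
import numpy as np

# Hypothetical setup: "agents" are indexed by a parameter theta drawn uniformly
# from [0, 1] (standing in for whatever distribution training/initialization
# actually induces), and a representation theorem assigns each agent the utility
# parameter u = f(theta). Even with f a bijection, the induced distribution over
# u need not look anything like uniform.

rng = np.random.default_rng(0)

theta = rng.uniform(0.0, 1.0, size=1_000_000)   # measure over agents
f = lambda t: t ** 10                            # bijection: agent -> utility parameter
u = f(theta)                                     # induced (pushforward) measure over utilities

# Fraction of utility parameters landing in the top half of utility-space:
print((u > 0.5).mean())   # ~0.07, not 0.5 -- far from uniform

# Conversely, sampling u uniformly and pulling back through f^-1 concentrates
# agents near theta = 1, which need not resemble the real agent distribution:
u_uniform = rng.uniform(0.0, 1.0, size=1_000_000)
theta_pulled_back = u_uniform ** (1 / 10)
print((theta_pulled_back > 0.9).mean())   # ~0.65, vs. 0.1 under the actual agent measure
```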