I’d expect the preference at each point to mostly go in the direction of either axis.
However, this analysis should be interesting in non-cooperative games, where the vector might represent a mixed strategy, with the amplitude perhaps representing the expected payoff.
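For instance (just a rough sketch of the kind of thing I have in mind, with a made-up 2x2 payoff matrix, not anything from the post), each point could be a mixed strategy over two actions and the amplitude its expected payoff against a fixed opponent mix:

```python
import numpy as np

# Made-up payoff matrix for the row player in a 2x2 non-cooperative game
# (rows = our two actions, columns = the opponent's two actions).
# The numbers are purely illustrative.
payoffs = np.array([[3.0, 0.0],
                    [5.0, 1.0]])

def expected_payoff(p, q):
    """Expected payoff when we play action 0 with probability p
    and the opponent plays their action 0 with probability q."""
    our_mix = np.array([p, 1.0 - p])
    their_mix = np.array([q, 1.0 - q])
    return float(our_mix @ payoffs @ their_mix)

# Each "point" here is a mixed strategy (summarised by p), and the amplitude
# attached to it could be its expected payoff against a fixed opponent mix.
for p in (0.0, 0.5, 1.0):
    print(p, expected_payoff(p, q=0.4))
```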
And kudos for the neat explanation and an interesting theoretical framework :)
Thanks!
“I’d expect the preference at each point to mostly go in the direction of either axis.”

Do you mean that you expect that at each point, the preference vector will be almost entirely pointed in the x direction, or almost entirely pointed in the y direction, rather than being pointed in a “mixed direction”? If so, why would that be your expectation? To me, it seems very intuitive that people often care a lot about both more wealth and more security, or about both the size and the appearance of their car. And in many other cases, people care about many dimensions of a particular thing/situation at the same time.
Here are two things you might mean that I might agree with:
“If we consider literally any possible point in the vector field, and not just those that the agent is relatively likely to find itself in, then at most such points the vector will be almost entirely towards one of the axes. This is because there are diminishing returns to most things, and they come relatively early. So if we’re considering, for example, not just $0 to $100 million, but $0 to infinite $s, at most such points the agent will probably care more about whatever the other thing is, because there’s almost no value in additional money.”
“If we consider literally any possible dimensions over which an agent could theoretically have preferences, and not just those we’d usually bother to think about, then most such dimensions won’t matter to an agent. For example, this would include a separate dimension, for each planet, for the number of specks of dust on that planet. So the agent’s preferences will largely ignore most possible dimensions, and therefore if we choose a random pair of dimensions and one of them happens to be meaningful, the preference will point almost entirely towards that one.” (It seems less likely that that’s what you meant, though it does seem a somewhat interesting separate point.)
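To make that first reading a bit more concrete, here is a rough sketch in Python. The utility function (log of money plus a small linear term for security) and its 0.001 weight are entirely made up for illustration, not anything from the post; the point is just that the normalised gradient, treated as the preference vector at a point, is genuinely “mixed” at moderate wealth but points almost entirely along the security axis once wealth gets very large.

```python
import math

def utility(money, security):
    # Toy utility function: strongly diminishing returns to money (log),
    # roughly linear returns to security. Both the functional form and the
    # 0.001 weight are made up purely for illustration.
    return math.log1p(money) + 0.001 * security

def preference_direction(money, security):
    # Treat the preference vector at a point as the gradient of the toy
    # utility, normalised to unit length. Analytic gradient:
    #   d/d(money)    log1p(money)     = 1 / (1 + money)
    #   d/d(security) 0.001 * security = 0.001
    # (For this particular toy function the gradient doesn't depend on security.)
    du_dm = 1.0 / (1.0 + money)
    du_ds = 0.001
    norm = math.hypot(du_dm, du_ds)
    return (du_dm / norm, du_ds / norm)

# At moderate wealth the preferred direction is genuinely "mixed"...
print(preference_direction(1_000, 50))     # roughly (0.71, 0.71)
# ...but far out along the money axis it points almost entirely towards security.
print(preference_direction(10**9, 50))     # roughly (0.000001, 1.0)
```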
No, I was simply mistaken. Thanks for correcting my intuitions on the topic!