Barry Schwartz’s The Paradox of Choice—which I haven’t read, though I’ve read some of the research behind it—talks about how offering people more choices can make them less happy.
A simple intuition says this shouldn’t happen to rational agents: If your current choice is X, and you’re offered an alternative Y that’s worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn’t do worse by having more options. The more available actions you have, the more powerful you become; that’s how it ought to work.
This is false if the agent has to deal with other agents and the agents do not have complete knowledge of each other’s source code (and memory). Agents can learn about how other agents work by observing what choices they make; in particular, if you offer Agent A additional alternatives and Agent A rejects them anyway, then Agent B can learn something about how Agent A works from this. If Agent A values Agent B not knowing how it works, then Agent A should in general value not being given more options even if it would reject them anyway.
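Here is a minimal sketch of that information leak, with entirely hypothetical agent types and payoffs (none of this is from the thread): Agent B keeps a prior over which utility function Agent A has, and updates it by ruling out every type of A that would have taken the extra option.

```python
# Toy model of the leak (hypothetical types and payoffs). Agent B is
# uncertain which utility function Agent A has; when A is offered an
# extra option Y and still picks X, B can rule out every type of A
# that would have preferred Y.

# Hypothetical types for Agent A: each maps option name -> utility.
TYPES = {
    "likes_X_a_lot":    {"X": 3, "Y": 1},
    "secretly_wants_Y": {"X": 2, "Y": 5},
    "mildly_prefers_X": {"X": 4, "Y": 2},
}

def updated_beliefs(prior, observed_choice, offered):
    """Bayesian update: keep only the types consistent with A's choice."""
    consistent = {
        t: p for t, p in prior.items()
        if max(offered, key=lambda o: TYPES[t][o]) == observed_choice
    }
    total = sum(consistent.values())
    return {t: p / total for t, p in consistent.items()}

uniform = {t: 1.0 / len(TYPES) for t in TYPES}

# Only X on the menu: A's choice of X reveals nothing about A's type.
print(updated_beliefs(uniform, "X", ["X"]))
# -> still uniform over all three types

# Y added to the menu and A still picks X: B rules out "secretly_wants_Y".
print(updated_beliefs(uniform, "X", ["X", "Y"]))
# -> probability mass concentrates on the two X-preferring types
```

So merely by being handed an option it goes on to reject, A narrows B’s uncertainty about its inner workings, which is exactly the cost described above.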
That’s only because you’re not just adding an option, you’re changing the choice you already have. You’re not going from X to a choice between X and Y, but rather from X to a choice between X+(B knows A prefers this choice to the other) and Y+(B knows A prefers this choice to the other). This has nothing to do with there being another agent about; it’s just the fact that you’re changing the options available at the same time as you add extra ones.
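One way to make that relabeling explicit (my notation, not anything from the thread): write $K$ for the event “B observes which element of $\{X, Y\}$ A chose.” Then the offer changes the menu from

$$
\{\,X\,\} \quad\text{to}\quad \{\,X \wedge K,\; Y \wedge K\,\},
$$

and the dominance argument “just keep doing X” no longer goes through, because the old option $X$ is not on the new menu; only $X \wedge K$ is, and A may rank $X \wedge K$ strictly below the old $X$.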
EDIT: As a general principle, there shouldn’t be any results that hold except when there are other agents about, because other agents are just part of the universe.
Aha. Thanks for the correction.