When an agent does something, it does so because it has some goal, and has determined that the thing it does will achieve that goal. Therefore, if you want to change what an agent does, you either change the goal (motivation selection) or change its method of determining which actions achieve the goal (capability control)*. Alternatively, you could make something that isn’t like an agent but still has really good cognitive capabilities. Perhaps this would count as ‘capability control’ relative to what I see as the book’s implicit assumption that smart things are agents.
[*] Note that this argument allows that the desired form of capability control might be to increase capability, perhaps so that the agent realises that doing what you hope it will do is actually a great idea.
I suppose the other alternative is that you don’t change the goal in the agent, but rather change the world in a way that changes which actions achieve the goal, i.e. incentive methods.