There would seem to be an obvious parallel with deontological as opposed to consequentialist ethics. (Which suggests the question: is there any interesting analogue of virtue ethics, where the agent attempts to have a utility function its overseer would like?)
I don’t think in virtue ethics you are obligated to maximize virtues, only satisfice them.
I think maximizing versus satisficing is a question orthogonal to whether you pay attention to consequences, to the actions that produce them, or to the character from which the actions flow. One could make a satisficing consequentialist agent, for instance. (Bostrom, IIRC, remarks that this wouldn’t necessarily avoid the dangers of overzealous optimization: instead of making unboundedly many paperclips because it wants as many as possible, our agent might make unboundedly many paperclips in order to be as sure as possible that it really did make at least 10.)
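To make Bostrom's point concrete, here is a toy sketch (my own illustration; the success probability and plan sizes are made-up numbers, not anything from the discussion): if each attempted paperclip succeeds independently with probability p, the probability of ending up with at least 10 only ever grows as the plan gets bigger, so a satisficer that scores plans by how certain they make it of hitting the threshold still prefers arbitrarily large plans.

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """Probability of at least k successes in n independent Bernoulli(p) trials."""
    if n < k:
        return 0.0
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Score a plan of n attempts only by how sure it makes the agent of
# getting at least 10 paperclips (a "satisficing" goal).
TARGET, P_SUCCESS = 10, 0.5
for n in (10, 20, 30, 50, 100):
    print(n, round(p_at_least(TARGET, n, P_SUCCESS), 5))
# The score is monotonically non-decreasing in n: every extra attempt helps
# at least a little, so "make as many as possible" re-emerges even though
# the stated goal was bounded.
```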
Bostrom’s point is valid in the absence of other goals. A clippy that also values a slightly non-orthogonal goal would stop making paperclips once the excess of paperclips started to interfere with that other goal.
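Continuing the same toy sketch (again with invented numbers, just to illustrate the comment above): add a second, slightly non-orthogonal goal, say an "uncluttered workshop" term that every extra paperclip attempt erodes a little, and the combined objective now peaks at a finite number of paperclips instead of growing without bound.

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """Probability of at least k successes in n independent Bernoulli(p) trials."""
    if n < k:
        return 0.0
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def combined_utility(n: int, target: int = 10, p: float = 0.5,
                     clutter_cost: float = 0.01) -> float:
    # Hypothetical second goal: each extra paperclip attempt slightly
    # interferes with an "uncluttered workshop" term.
    return p_at_least(target, n, p) - clutter_cost * n

# Search over plan sizes: the optimum is now a bounded number of paperclips.
best_n = max(range(10, 201), key=combined_utility)
print(best_n, round(combined_utility(best_n), 4))
# Even a small competing term in the utility makes the optimal plan a
# bounded pile of paperclips rather than "as many as possible".
```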
In virtue ethics you don’t maximize anything; you are free to pick any action compatible with the virtues, so there is no utility function to speak of.
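One way to see the contrast (a purely illustrative sketch; the actions and "virtue" predicates are invented for the example): a consequentialist maximizer picks the argmax of a utility function over actions, whereas a virtue-style agent just rules out actions that violate its constraints and is free to take any of the remainder.

```python
import random

actions = ["tell_the_truth", "flatter", "stay_silent", "exaggerate"]

# Invented "virtues": predicates an action must satisfy, not quantities to maximize.
virtues = {
    "honesty": lambda a: a in ("tell_the_truth", "stay_silent"),
    "kindness": lambda a: a != "exaggerate",
}

def permitted(action: str) -> bool:
    return all(check(action) for check in virtues.values())

# No argmax, no ranking: any action compatible with the virtues is acceptable.
admissible = [a for a in actions if permitted(a)]
print(random.choice(admissible))  # e.g. "tell_the_truth" or "stay_silent"
```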
This reminds me of Daniel Dewey’s proposal for an agent that learns its utility function: http://lesswrong.com/lw/560/new_fai_paper_learning_what_to_value_by_daniel/.