“A prototypical example here would be an abstraction-based decision theory. There, the notion of “success” would not be “system achieves the maximum amount of utility”, but rather “system abstracts into a utility-maximizing agent”. The system’s “choices” will be used both to maximize utility and to make sure the abstraction holds. The “supporting infrastructure” part—i.e. making sure the abstraction holds—is what would handle things like e.g. acting as though the agent is deciding for simulations of itself (see the link for more explanation of that).”
Isn’t this kind of like virtue ethics as opposed to utilitarianism?
Interesting analogy, I hadn’t thought of that.