So let's say that for each person P, there is a specific question Q_P such that:
For a person P, ‘I should X’ means that X answers the question Q_P.
Now how is Q_P generated?
Generated? By that do you mean, causally generated? Q_P is causally generated by evolutionary psychology and memetic history.
Do you mean how would a correctly structured FAI obtain an internal copy of Q_P? By looking/guessing at person P’s empirical brain state.
Do you mean how is Q_P justified? Any particular guess by P at “What is good?” will be justified by appeals to Q_P; if they somehow obtained an exact representation of Q_P then its pieces might or might not all look individually attractive.
These are all distinct concepts!
Is it what P would want were she given access to all the best empirical and moral arguments (what I called being fully informed)? If so, do we have to time-index the judgment as well? That is, if P’s preferences change at some later time T1, did the person mean something different by ‘I should X’ before and after T1, or was the person simply incorrect at one of those times? What if the change comes only from acquiring better information (empirical or moral)?
(Items marked in bold have to be morally evaluated.)
I do believe in moral progress, both as a personal goal and as a concept worth saving; but if you want to talk about moral progress in an ideal sense rather than a historical sense, you have to construe a means of extrapolating it—since it is not guaranteed that our change under moral arguments resolves to a unique value system or even a unique transpersonal value system.
So I regard Q_P as an initial state that includes the specification of how it changes; if you construe a volition therefrom, I would call that EV_Q_P.
If you ask where EV_Q_P comes from causally, it is ev-psych plus memetic history plus your own construal of a specific extrapolation of reactivity to moral arguments.
If you ask how an FAI learns EV_Q_P, it is by looking at the person, from within a framework of extrapolation that you (or rather I) defined.
If you ask how one would justify EV_Q_P, it is, like all good things, justified by appeal to Q_P.
If P’s preferences change according to something that was in Q_P or EV_Q_P, then they have changed in a good way, committed an act of moral progress, and hence, more or less by definition, stayed within the same “frame of moral reference”, my term for what we and the ancient Greeks have in common but a paperclip maximizer does not.
Should P’s preferences change due to some force that was, or would be, unwanted, such as an Unfriendly AI reprogramming their brain, then as a moral judgment I should say that they have been harmed, that their moral frame of reference has changed, and that their actions are now being directed by something other than “should”.
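To make the structure above a little more concrete, here is a minimal toy sketch in Python. Every name in it is hypothetical, and the whole thing is only an analogy, not a proposal for how an FAI would actually represent any of this: it treats Q_P as an initial value state packaged with its own specification of how it changes under arguments, EV_Q_P as the volition construed by running that specification over a construer-chosen sequence of idealized arguments, and moral progress as change that lies on the endorsed trajectory rather than being imposed from outside.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List

# Toy stand-ins; none of these names are meant literally.
Values = FrozenSet[str]      # P's current answers to "what is good?"
Argument = str               # a moral or empirical argument P might encounter

@dataclass(frozen=True)
class QP:
    """Q_P: an initial value state plus the specification of how it changes."""
    initial: Values
    respond: Callable[[Values, Argument], Values]  # how the values react to an argument

def extrapolate(q: QP, idealized_arguments: List[Argument]) -> Values:
    """EV_Q_P: the volition construed by running Q_P's own change-specification
    over a (construer-chosen) sequence of idealized arguments."""
    values = q.initial
    for arg in idealized_arguments:
        values = q.respond(values, arg)
    return values

def is_endorsed_change(q: QP, new: Values, idealized_arguments: List[Argument]) -> bool:
    """A change counts as moral progress (staying within the same frame of moral
    reference) only if it is the kind of change Q_P itself specifies -- modeled
    here, very crudely, as the new values lying on the extrapolated trajectory.
    A change imposed from outside, like an Unfriendly AI rewriting the brain,
    will generally not lie on that trajectory, and counts as harm instead."""
    values = q.initial
    if values == new:
        return True
    for arg in idealized_arguments:
        values = q.respond(values, arg)
        if values == new:
            return True
    return False

# A toy person whose change-rule adopts whatever an argument names, provided
# "kindness" is already among their values.
toy_q = QP(
    initial=frozenset({"kindness"}),
    respond=lambda vs, arg: vs | {arg} if "kindness" in vs else vs,
)

ev = extrapolate(toy_q, ["honesty", "fairness"])   # the toy EV_Q_P
print(ev)                                          # kindness, honesty, fairness (in some order)
print(is_endorsed_change(toy_q, frozenset({"kindness", "honesty"}),
                         ["honesty", "fairness"])) # True: moral progress
print(is_endorsed_change(toy_q, frozenset({"paperclips"}),
                         ["honesty", "fairness"])) # False: outside the frame, i.e. harm
```

The one feature the sketch is meant to preserve is that EV_Q_P is not imported from outside: it is generated by Q_P’s own specification of how it changes, which is why, like everything else, it is justified by appeal to Q_P.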