I would be less concerned if this were used on someone like William MacAskill [...] but a lot of humans have seemingly terrible meta-preferences
In those cases, I’d give more weight to the preferences than the meta-preferences. There is the issue of avoiding ignorant-yet-confident meta-preferences, which I’m working on writing up right now (partially thanks to your comment here, thanks!)
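To make the weighting idea slightly more concrete, here is a minimal illustrative sketch in Python. Everything in it (the function name, the options, the numbers) is a hypothetical stand-in rather than part of any actual synthesis procedure; the only point is that a meta-preference's influence on the object-level preferences can be scaled down when it looks ignorant-yet-confident.

```python
# Purely illustrative sketch: scale down meta-preference influence when it
# seems unreliable. All names and numbers below are hypothetical.

def synthesise(object_prefs, meta_adjustments, meta_weight=0.8):
    """Combine object-level preferences with meta-preference adjustments.

    object_prefs:     dict of option -> endorsement strength
    meta_adjustments: dict of option -> shift the meta-preferences would apply
                      (e.g. "I wish I cared less about X" as a negative shift)
    meta_weight:      how much the meta-preferences may move the object-level
                      values; lowered when they look ignorant-yet-confident.
    """
    return {
        option: strength + meta_weight * meta_adjustments.get(option, 0.0)
        for option, strength in object_prefs.items()
    }

# With unreliable meta-preferences, shrink their weight:
prefs = {"help_friends": 1.0, "status_seeking": 0.6}
meta = {"status_seeking": -0.5}   # "I wish I didn't care about status"
print(synthesise(prefs, meta, meta_weight=0.2))
# {'help_friends': 1.0, 'status_seeking': 0.5}
```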
or at least different meta-preferences which likely lead to different object-level preferences (so they can’t all be right, assuming moral realism).
Moral realism is ill-defined, and some versions allow that humans and AIs would have different types of morally true facts. So it’s not too much of a stretch to assume that different humans might have different morally true facts from each other, so I don’t see this as necessarily being a problem.
Moral realism through acausal trade is the only version of moral realism that seems to be coherent, and to do that, you still have to synthesise individual preferences first. So “one single universal true morality” does not necessarily contradict “contingent choices in figuring out your own preferences”.
There is the issue of avoiding ignorant-yet-confident meta-preferences, which I’m working on writing up right now (partially thanks to your comment here, thanks!)
I look forward to reading that. In the meantime, can you address my parenthetical point in the grandparent comment: “correctly extracting William MacAskill’s meta-preferences seems equivalent to learning metaphilosophy from William”? If it’s not clear, what I mean is: suppose Will wants to figure out his values by doing philosophy (which I think he actually does), does that mean that under your scheme the AI needs to learn how to do philosophy? If so, how do you plan to get around the problems with applying ML to metaphilosophy that I described in Some Thoughts on Metaphilosophy?
There is one way of doing metaphilosophy within this scheme, which is “run (simulated) William MacAskill until he thinks he’s found a good metaphilosophy” or “find a description of metaphilosophy to which WA would say ‘yes’.”
But what the system I’ve sketched would most likely do is come up with something to which WA would say “yes, I can kinda see why that was built, but it doesn’t really fit together as I’d like and has some ad hoc and object-level features”. That’s the “adequate” part of the process.
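For concreteness, here is a very rough sketch of the two processes from the previous paragraph. Every name in it (the class, the methods, the search helpers) is an invented placeholder rather than a component of any actual system; it only illustrates the shape of “run until endorsed” versus “search for an endorsed description”.

```python
# Hypothetical sketch of the two processes above: (a) run a simulated
# reflector until they endorse their own output, and (b) search candidate
# descriptions for one the reflector would say "yes" to.

from typing import Iterable, Optional

class SimulatedReflector:
    def reflect(self) -> str:
        """One step of simulated reflection; returns the current best guess."""
        raise NotImplementedError

    def endorses(self, description: str) -> bool:
        """Would the simulated person say 'yes' to this description?"""
        raise NotImplementedError

def run_until_endorsed(person: SimulatedReflector,
                       max_steps: int = 10_000) -> Optional[str]:
    """Process (a): keep reflecting until the person endorses their own guess."""
    for _ in range(max_steps):
        guess = person.reflect()
        if person.endorses(guess):
            return guess
    return None  # no endorsed metaphilosophy found within the budget

def search_descriptions(person: SimulatedReflector,
                        candidates: Iterable[str]) -> Optional[str]:
    """Process (b): find a pre-written description the person would accept."""
    return next((d for d in candidates if person.endorses(d)), None)
```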