I think I basically agree with the “embrace existing moral intuitions” bit.
Unpacking my first paragraph in the other post, you might get: I prefer people to have moral intuitions that value their kids equally with others, but if they value their own kids a bit more, that’s not terrible; our values are mostly aligned, and I expect optimisation power applied to those values will typically also satisfy my own values. If they value their kids more than literally everyone else, that is terrible; our values diverge too much, and I expect optimisation power applied to their values has a good chance of harming my own.