I seem to recall that there is a strand of philosophy that tries to figure out what unproven axioms would be the minimum necessary foundation on which to build up something like “conventional” morality. Its proponents felt the need to do this precisely because of the multi-century failure of philosophers to come up with a basis for morality that was unarguable “all the way down” to absolute first principles. I don’t know anything about AI, but it sounds like what Eliezer is talking about here has something of the same flavor.