The revealed human preferences speak otherwise. Subsets of humans have decided that you can’t do that, but I’m not at all certain those preferences are something humans would converge to if they were wiser, smarter and less crazy.
But I think I agree with the basic premise: we don’t know, so let’s not do something that might leave a bad taste in our mouths for eternity. To rephrase that:
I understood this blog post as: trillions of cheesecake lovers whom we care about change the utility pay-offs we can get in our universe. Denying them their desire for cheesecake probably brings us considerable disutility. Which means the most rational thing to do at that point is probably to tile a fraction (perhaps the vast majority) of the universe with cheesecake, since that is the highest pay-off available. If we had never created cheesecake lovers we cared about, the pay-offs available to us would probably be larger.
Note: Can someone please tell me if I’m getting this right?
I think that’s mostly correct, but Eliezer means something stronger than “considerable disutility” when he says “right” (e.g. self-modifying to like killing people and then killing people is not right; see The Meaning of Right).
Isn’t it a question of physics? Unbirthing seems impossible. You can kill and/or destroy children if you want, but you can’t unbirth them.
I don’t see this as a question of physics, though we may be arguing about words here.
A > B > C > Child
“You can’t unbirth a child” is just how we say it’s ok to undo A, B or C but not the child. It is physically impossible to “unbirth” or “undo” A, B or C in exactly the same material sense as the child. We don’t see that as carrying the moral weight of killing the child, so we don’t say you can’t unbirth A or B. In any case, “child” is just a place-holder for “sentient”, which seems to be a place-holder for “something we care about”.
A > B > C > Child
A > B > C > D > Something we care about
A > B > Person
can all describe the exact same physical process. By speaking of revealed human preferences, I wanted it to be taken into consideration that humans have historically used the first, the second and the third description for the same thing. We may in the future use heuristics that are ok with us painlessly erasing the cheesecake lovers, just as at one point we decided that abortion is ok, or as we at one point decided that infanticide is not.
But the risk that, while we think we wouldn’t care, we would actually end up caring may be enough to swamp the gain. Reliably “non-sentient” AI is probably the better option.