roger. i think (and my model of you agrees) that this discussion bottoms out in speculating what CEV (or equivalent) would prescribe.
my own intuition (somewhat supported by the moral progress/moral-circle expansion in our culture) is that it will have a nonzero component of “try to help out fellow humans/biologicals/evolved minds/conscious minds/agents with diminishing utility functions if it's not too expensive, and especially if they would do the same in your position”.
tbc, i also suspect & hope that our moral circle will expand to include all fellow sentients. (but it doesn’t follow that paying paperclippers to unkill their creators is a good use of limited resources. for instance, those are resources that could perhaps be more efficiently spent purchasing and instantiating the stored mindstates of killed aliens that the surviving-branch humans meet at the edge of their own expansion.)
but also, yeah, i agree it’s all guesswork. we have friends out there in the multiverse who will be willing to give us some nice things, and it’s hard to guess how much. that said, i stand by the point that that’s not us trading with the AI; that’s us destroying all of the value in our universe-shard and getting ourselves killed in the process, and then banking on the competence and compassion of aliens.
(in other words: i’m not saying that we won’t get any nice things. i’m saying that the human-reachable fragment of the universe will be ~totally destroyed if we screw up, with ~none of it going to nice things, not even if the UFAI uses LDT.)
yeah, this seems to be the crux: what CEV will prescribe spending the altruistic (reciprocal-cooperation) budget on. my intuition continues to insist that purchasing the original star systems from UFAIs is pretty high on the shopping list, but i can see arguments (including a few you gave above) against that.
oh, btw: one sad failure mode would be getting clipped by a proto-UFAI that’s too stupid to realise it’s in a multi-agent environment, or something along those lines.
ETA: and, tbc, just as interstice points out below, my “us/me” label casts a wider net than “us in this particular everett branch where things look particularly bleak”.
I don’t agree, and will write up a post detailing why.