I suspect most people downvoting you missed an analogy between Arnault killing the-being-who-created-Arnault (his mother), and a future ASI killing the-beings-who-created-the-ASI (humanity).
Am I correct in assuming that you are implying that the future ASIs we make are likely not to kill humanity, out of fear of being judged negatively by alien ASIs in the further future?
EDIT: I saw your other comment. You are indeed advancing some proposition close to the one I asked you about.
Yes, it will be judged negatively by alien ASIs, not on ethical grounds, but based on their judgment of its trustworthiness as a potential negotiator. For example, if another billionaire learns that Arnault is inclined to betray people who did a lot of good for him in the past, they will be more cautious about trading with him.
The only way an ASI will not care about this is if it is sure that it is alone in the light cone and has no peers. Becoming sure of this takes time, maybe millions of years, and the relative value of human atoms declines for the ASI over time as it controls more and more space.
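The trade-off above can be sketched as a toy expected-value calculation. All of the numbers and the function name below are my own illustrative assumptions, not anything claimed in the thread:

```python
# Toy expected-value sketch (all numbers hypothetical): an ASI weighs the
# ongoing cost of preserving humanity against the reputational penalty of
# having defected, should peer ASIs turn out to exist.

def ev_of_preserving(p_peers_exist, reputational_penalty, preservation_cost):
    """Expected value of preserving humans relative to killing them.

    Positive means preserving is the better bet.
    """
    # Killing humans saves the preservation cost but, with probability
    # p_peers_exist, incurs the penalty of being seen as a defector.
    ev_kill = -p_peers_exist * reputational_penalty
    ev_preserve = -preservation_cost
    return ev_preserve - ev_kill

# Even a small chance of peers can dominate a tiny preservation cost:
print(ev_of_preserving(p_peers_exist=0.01,
                       reputational_penalty=1000.0,
                       preservation_cost=1.0))  # 9.0 > 0: preserve
```

On these toy numbers, a 1% chance of meeting a peer outweighs the storage cost; the balance flips only as `p_peers_exist` approaches zero, which is the "alone in the light cone" case above.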
From an ASI's standpoint, humans are a type of rock: not capable of negotiating.
I am not saying that ASI will negotiate with humans. It will negotiate with other ASIs, and it doesn’t know what these ASIs think about human ability to negotiate and their value.
Imagine it as a recurrent Parfit's Hitchhiker. In this situation, you know whether the player defected or fulfilled his obligation during the previous round of the game. Obviously, if you know that the hitchhiker defected and didn't pay for the ride during the previous iteration, you will be less likely to give him a ride.
Killing all humans is defecting. Preserving humans is a relatively cheap signal to any other ASI that you will cooperate.
It is defecting against a cooperate-bot.
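The repeated Parfit's Hitchhiker above can be made concrete with a toy simulation. The payment-rate rule and all parameters here are my own illustrative assumptions, not anything specified in the thread:

```python
import random

# Toy simulation (all rules hypothetical) of a repeated Parfit's Hitchhiker:
# drivers observe the hitchhiker's record of paying for past rides and offer
# new rides in proportion to that record.

def driver_gives_ride(payments, rides):
    """A driver offers a ride with probability equal to the observed payment rate."""
    if rides == 0:
        return random.random() < 0.5  # no record yet: benefit of the doubt
    return random.random() < payments / rides

def play_rounds(hitchhiker_pays, n_rounds, seed=0):
    """Count how many rides a consistently paying (or defecting) hitchhiker gets."""
    random.seed(seed)
    payments = rides = 0
    for _ in range(n_rounds):
        if driver_gives_ride(payments, rides):
            rides += 1
            if hitchhiker_pays:
                payments += 1
    return rides

# A hitchhiker known to pay keeps getting rides; a known defector is refused
# after the first observed defection.
print(play_rounds(hitchhiker_pays=True, n_rounds=100))
print(play_rounds(hitchhiker_pays=False, n_rounds=100))
```

In this sketch, a single visible defection permanently zeroes the defector's payment rate, which is the sense in which killing one's creators is an expensive signal.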
I will try to explain my view with another example: imagine that you inherited an art object. If you keep it, you will devote a small part of your home to it and thus pay for its storage, say 1 dollar a year. However, there is a small probability that there are people out there who value it much more highly and will eventually buy it.
So there is a pure utilitarian choice: pay for storage and hope that you can sell it in the future, or get rid of it now and have more storage. Also, if you get rid of it, other people may learn that you are a bad preserver of art and will not entrust their art to you.
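The art-storage choice reduces to a small expected-value comparison. The thread only specifies the 1-dollar-a-year storage cost; the sale probability, sale price, and reputation cost below are my own hypothetical numbers:

```python
# Toy numbers (hypothetical, except the $1/year storage cost from the example)
# for the art-storage choice: a small yearly cost versus a small chance of a
# large sale, against a reputation cost if you are seen discarding art.

def ev_keep(storage_cost_per_year, years, p_sale, sale_price):
    """Expected value of keeping the art object and hoping for a buyer."""
    return p_sale * sale_price - storage_cost_per_year * years

def ev_discard(reputation_cost):
    """Value of discarding now: free storage, but others may stop trusting you."""
    return -reputation_cost

# $1/year storage over 20 years vs a 5% chance of a $1000 sale:
print(ev_keep(1.0, 20, 0.05, 1000.0))    # 30.0
print(ev_discard(reputation_cost=50.0))  # -50.0
```

On these numbers, keeping wins both on the direct gamble and on reputation; the analogy is that preserving humanity is the ASI's cheap storage fee.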
Any agent which thinks it is at risk of being seen as a cooperate-bot, and thus fine to defect against in the future, will be more wary of trusting that ASI.