I am not saying that ASI will negotiate with humans. It will negotiate with other ASIs, and it doesn't know what those ASIs think about humans' ability to negotiate, or how much they value humans.
Imagine it as an iterated Parfit's Hitchhiker. In this situation you know whether, in the previous round of the game, the player defected or fulfilled his obligation. Obviously, if you know that in the previous iteration the hitchhiker defected and didn't pay for the ride, you are less likely to give him a ride.
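A toy sketch of this iterated dynamic (the payoffs and the driver's trust-update rule are my own illustrative assumptions, not anything from the argument above):

```python
# Toy iterated Parfit's Hitchhiker: the driver estimates the hitchhiker's
# reliability from past rounds and only offers a ride when the expected
# payment covers the cost of the ride.
# All payoffs and the update rule are illustrative assumptions.

RIDE_COST = 10      # what giving the ride costs the driver
PAYMENT = 25        # what a trustworthy hitchhiker pays afterwards

def drive_probability(paid_history):
    """Driver's estimate that the hitchhiker will pay, based on past rounds.
    Laplace smoothing so the very first round starts at 0.5."""
    return (sum(paid_history) + 1) / (len(paid_history) + 2)

def run_rounds(hitchhiker_pays, rounds=5):
    """hitchhiker_pays: list of bools, the hitchhiker's actual behaviour
    in each round. Returns how many rides the driver ends up giving."""
    history, rides_given = [], 0
    for pays in hitchhiker_pays[:rounds]:
        p = drive_probability(history)
        if p * PAYMENT > RIDE_COST:      # expected payment beats ride cost
            rides_given += 1
            history.append(pays)         # driver observes pay / defect
    return rides_given

# A hitchhiker who always pays keeps getting rides across all 5 rounds...
print(run_rounds([True] * 5))    # 5
# ...while one who defects stops being picked up after the first round.
print(run_rounds([False] * 5))   # 1
```

A single observed defection is enough here to push the driver's estimate below the break-even point, which is the sense in which "killing the previous passenger" poisons all future rounds.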
Killing all humans is defecting. Preserving humans is a relatively cheap signal to any other ASI that you will cooperate.
I will try to explain my view with another example: imagine that you inherited an art object. If you keep it, you will devote a small part of your home to it and thus pay for its storage, say one dollar a year. However, there is a small probability that there are people out there who value it much more highly and will eventually buy it.
So there is a pure utilitarian choice: pay for storage and hope that you can sell it in the future, or get rid of it now and have more storage. Also, if you get rid of it, other people may learn that you are a bad preserver of art and will not entrust their art to you.
Any agent which thinks it may itself be seen as a cooperate-bot, and thus as fair game to defect against in the future, will be more wary of trusting that ASI.
From an ASI's standpoint, humans are a type of rock: not capable of negotiating.
Killing them is still defecting against a cooperate-bot.