Note that this claim is distinct from the claim that (due to
general economic theory) it’s more beneficial for the AIs to
trade with us than to destroy us. We already have enough
citations for that argument; what we’re looking for are
arguments saying that destroying humans would mean losing
something essentially irreplaceable.
I don’t think there are particularly good arguments in this department (the two quoted ones are certainly not correct). Apart from the trade argument, it might turn out to be uneconomical for an AGI to harvest the atoms in our bodies.
As for “essentially irreplaceable”: in a very particular sense, the exact arrangement of particles in every human being at each second is “unique” and “essentially irreplaceable” (quantum mechanics aside). An extreme “archivist/rare art collector/Jain monk” AI might therefore want to keep these collections (or some snapshots of them), but I don’t find this very compelling. I am sure we could win a lot of sympathy if AGI could be shown to automatically entail some sort of ultimate compassion, but I think it is more likely that we have to make it so (hence the FAI effort).
If I want to be around to see the last moments of the Sun, I will feel a sting of guilt that the Universe is slightly less efficient because it is running me rather than using those resources for some better observer, one that experiences more deeply and sees more.
Me neither, but they get brought up every now and then, so we should mention them—if only to explain in a later section why they don’t work.
It’s hard to present arguments well when one views them as wrong and perhaps even silly, as most or all people here do. Perhaps you could get input from people who accept the relevant arguments?
This is a good idea. Do you have ideas of where such people could be found?
I don’t know myself, but since you’re gathering references anyway, maybe you could try contacting some of them.