This worry about the creation and destruction of simulations doesn’t make me rethink the huge ethical implications of superintelligence at all; it makes me rethink the ethics of death. Why exactly is creating and then (painlessly) destroying a sentient intelligence worse than never creating it in the first place? It’s just guilt by association: “ending a simulation is like death, death is bad, therefore simulations are bad”. Yes, death is bad, but only for reasons that don’t necessarily apply here.
To me, worrying about the simulations created inside a superintelligent being seems, if anything, like worrying about the fate of the cells in our own bodies. Should we really modify ourselves to take whichever actions destroy the fewest of our cells? I realise there’s an argument that some threshold of “sentience” is crossed in one case but not the other; the trouble is I don’t see sentience as a discrete thing either. At exactly what point in our evolution did we suddenly cross a line and become sentient? If animals are sentient, which ones? And why don’t we seem to care, ethically, about any of them? (OK, I know the answer to that one, and it’s similar to why we care, as I say in another admittedly unpopular comment, about human simulations but not about the AIs that create them...)