Right now, the problem is that UFAI seems easier to program than FAI, so people will probably stumble upon UFAI first.
Create a considerable number of uploads, and what changes? Not much. UFAI is still easier to program than FAI; you’ve just increased the speed at which this may happen. Yes, this might eliminate part of the subjective speed advantage of any AIs. But it would still leave open the possibility of e.g. algorithmic enhancements leading to an increased subjective speed. And you’ve given the AIs a rich virtual world in which they could exist even better than on the Internet, full of human brains that can be hacked into directly.
Now certainly, if we could pick the people who became uploads, and police them to make sure they were only developing FAI… but this would seem to require that decision-makers across the world be convinced that a UFAI-triggered Singularity is a Serious Bad Thing. They’d also need to be convinced of it pretty quickly, and to put in place all the appropriate safeguards to limit the potential damage their country’s uploads could do. They’d need to do this at a time when there might be an upload arms race going on between various countries. All of this seems rather unlikely to me, and it seems more likely that such a scenario would only bring UFAI closer.
Right now, the problem is that UFAI seems easier to program than FAI, so people will probably stumble upon UFAI first. Create a considerable number of uploads, and what changes? Not much.
Well, that’s part of the problem. Another part is that many people—including AI researchers—don’t take the threat of UFAI seriously.
After all, there are plenty of situations where the dangerous thing is easier than the safe thing, and where we still manage, to one degree or another, to enforce not doing the dangerous thing. It’s just that most of those cases involve dangers we are scared of. (Typically more scared than is justified, which has its own problems.)
And in that context, it’s perhaps worthwhile to think about how uploads might systematically differ from their pre-upload selves. It’s not clear to me that we’d evaluate risks the same way, so it’s not clear to me that “not much” would change.
For example, even assuming a perfect upload (1), I would expect the experience of being uploaded to radically alter the degree to which I expect everything to go on being like it has been for the last couple of thousand years.
Which might lead an uploaded human to draw on a different reference class than their pre-upload self when setting their prior for UFAI being an existential threat, since the “well, computer science has never been an existential threat before, so it probably isn’t one now either” reasoning won’t apply as strongly.
Then again, it might not.
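To make that “different reference class, different prior” point a bit more concrete, here is a toy Bayesian sketch. The numbers are purely illustrative assumptions of mine, not anything claimed above; the only point is that the same evidence moves observers with different reference-class priors to very different conclusions.

```python
# Toy sketch: the same evidence about UFAI risk updates two observers very
# differently if their reference classes give them different priors.
# All numbers below are illustrative assumptions, not claims from the discussion.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Pre-upload reference class: "no technology has ever been an existential threat."
pre_upload_prior = 0.001
# Post-upload reference class: "minds demonstrably run on, and can be altered by, software."
post_upload_prior = 0.05

# The same arguments/evidence, assumed to favor the threat hypothesis 20:1 for both observers.
likelihood_ratio = 20

print(posterior(pre_upload_prior, likelihood_ratio))   # ~0.02
print(posterior(post_upload_prior, likelihood_ratio))  # ~0.51
```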
===
(1) All this being said, I mostly think the whole idea of a “perfect upload” is an untenable oversimplification. In reality I expect there will be huge differences between pre-upload and post-upload personalities, and that this will cause a lot of metaphysical hand-wringing about whether we’ve “really preserved identity.” Ultimately, I expect we will just come to accept uploading as one of those events that changes the way people think (much like we do now about, say, suddenly becoming wealthy, or being abused by a trusted authority figure, or any number of other important life events), without being especially troubled by the implications of that for individual identity.