Eliezer,
Excuse my late entrance into this discussion (I have been away), but I am wondering whether you have answered the following questions in previous posts, and if so, which ones.
1) Why do you believe a superintelligence will be necessary for uploading?
2) Why do you believe there could ever possibly be a safe superintelligence of any sort? The more I read about the difficulties of Friendly AI, the more hopeless the problem seems, especially considering the large amount of human thought and collaboration that will be necessary. You yourself said there are no non-technical solutions, but I can’t imagine you could possibly believe in a magic bullet that some individual super-genius will have a eureka epiphany about by himself in his basement. And this won’t be like the cosmology conference to determine how the universe began, where everyone’s testosterone-riddled ego battled for a victory of no consequence. It won’t even be a Manhattan Project, with nuclear weapons tests in barren wastelands… Basically, if we’re not right the first time, we’re fucked. And how do you expect to get that many minds to be so certain that they’ll agree it’s worth making and starting the… the… whatever the fuck it ends up being? Or do you think it’ll just take one maverick with a cult of loving followers to get it right?
3) But really, why don’t you just focus all your efforts on preventing any superintelligence from being created? Do you really believe it’ll come down to us (the righteously unbiased) versus them (the thoughtlessly fame-hungry computer scientists)? If so, who are they? Who are we, for that matter?
4) If FAI will be that great, why should this problem be dealt with immediately by flesh-and-blood, flawed humans instead of by improved, uploaded copies in the future?