Yes, precisely. Therefore if I know you’re creating friendly AI and you’re going to use it to take over the world, I’m motivated to stop you, especially if I don’t actually think AGI takeover is a big deal, or if I think I’d rather die than submit to you. What would the USA do if they knew for sure that China was about to deploy a general FAI aligned to the CCP’s values?
The notion that it’s just fine to try creating world-conquering FAI for the sake of a pivotal act completely ignores these second-order effects. It’s not fine to create UFAI because it kills everyone, and it’s not fine to create world-conquering FAI because no one wants to be conquered; many would rather die than be conquered by you, and they will try to kill you before you can deploy it. Hence you don’t get to deploy FAI either (also, I’d argue, world conquest by force is just not a very ethical goal to pursue unless the only alternative truly is extinction by UFAI. But if you’ve just demonstrated that FAI is possible, then multiple FAIs are possible, each with different values, so extinction is no longer the only option on the table, and the rest is just the usual territorial ape business).
Essentially, at some point you gotta do the hard work and cooperate, or play evil overlord, I guess. But if your public stance is “I am going to play evil overlord because it’s the only way I know to actually save humanity,” then, well, I don’t think you can expect a lot of support. This sort of thing risks actually eroding the credibility of the UFAI threat in the first place, because people will point at you and say “see, he made up this unbelievable threat as an excuse to rule over us all”, and conclude that the threat must be fake and your motives impure.
Yes, so even if you’re creating friendly AI, others will try to airstrike you. I wrote about this here: Military AI as a Convergent Goal of Self-Improving AI.
Sure: so EY arguing for a multinational agreement to just stop AGI altogether and airstrike data centres isn’t that radical at all; it’s actually a formalization of that equilibrium, one that might both keep the peace and save the world. If there is no creating UFAI without destroying the world, and no creating FAI without starting WW3, options are limited.
(That said, I don’t think a “pivotal act” requires what we would call world conquest in every possible world. In a world where UFAI can indeed kill everyone via nanotech, an FAI could similarly disseminate guardian nanomachines ready to short-circuit any nascent UFAI if the need arises, and do absolutely nothing else. That would be a pretty peaceful and unobtrusive pivotal act.)
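To make the equilibrium framing above concrete, here’s a toy sketch (my own illustration, not anything from the thread; the players, strategies, and payoff numbers are all made-up assumptions): two rival states each choose to build AGI or refrain, a unilateral build provokes a preemptive strike, a mutual race risks UFAI and extinction, and an enforced mutual ban preserves the status quo. Under those assumptions the mutual ban is the only outcome neither side wants to deviate from unilaterally:

```python
# Toy payoff matrix for the "race vs. ban" argument above.
# Assumptions (mine, not the original commenters'): two rival states,
# each chooses to BUILD AGI or REFRAIN; a unilateral FAI build triggers
# a preemptive war, a mutual race risks UFAI and extinction, and a
# verified mutual ban preserves the status quo. Payoff numbers are
# made up purely for illustration.

from itertools import product

BUILD, REFRAIN = "build", "refrain"

def payoff(a: str, b: str) -> tuple[int, int]:
    """Return (payoff_A, payoff_B) for one round of the toy game."""
    if a == BUILD and b == BUILD:
        return (-100, -100)  # racing: high chance someone ships UFAI -> extinction
    if a == BUILD and b == REFRAIN:
        return (-50, -60)    # B strikes A preemptively rather than be conquered
    if a == REFRAIN and b == BUILD:
        return (-60, -50)    # symmetric case
    return (0, 0)            # enforced mutual ban: status quo preserved

# Find pure-strategy Nash equilibria: outcomes where neither player
# gains by deviating alone.
for a, b in product((BUILD, REFRAIN), repeat=2):
    pa, pb = payoff(a, b)
    stable_a = all(payoff(alt, b)[0] <= pa for alt in (BUILD, REFRAIN))
    stable_b = all(payoff(a, alt)[1] <= pb for alt in (BUILD, REFRAIN))
    if stable_a and stable_b:
        print(f"Equilibrium: A={a}, B={b}, payoffs=({pa}, {pb})")

# With these numbers, only (refrain, refrain) prints: the mutual ban is
# the unique pure-strategy equilibrium -- the "formalization" gestured
# at above. Change the payoffs and the conclusion can change with them.
```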