Thanks, Nicholas, for raising this issue. I think your framing overcomplicates the crux: the foundation of an inspiring future with AI won't be international coordination, but national self-interest.
It's not in the US's self-interest to disempower itself and all of its current power centers by allowing a US company to build uncontrollable AGI.
It’s not in the interest of the Chinese Communist Party to disempower itself by allowing a Chinese company to build uncontrollable AGI.
Once US and Chinese leadership serve their self-interest by preventing uncontrollable AGI at home, they have a shared incentive to coordinate to do the same globally. The reason this self-interest hasn't yet played out is that US and Chinese leaders still haven't fully understood the game-theoretic payoff matrix: the well-funded, wishful-thinking-fueled disinformation campaign arguing that Turing, Hinton, Bengio, Russell, Yudkowsky et al. are wrong (that we're likely to figure out how to control AGI in time if we "scale quickly") has been massively successful. That success is unsurprising, given how successful the disinformation campaigns were for, e.g., tobacco, asbestos, and leaded gasoline; the only difference is that the stakes are much higher now.
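To make the payoff-matrix claim concrete, here's a minimal sketch in Python with purely illustrative numbers of my own (they are not from any actual analysis). The key assumption is the one argued above: building uncontrollable AGI disempowers its builder regardless of what the other side does, which makes abstaining a dominant strategy for both governments.

```python
ACTIONS = ("race", "abstain")

payoffs = {
    # (us_action, china_action): (us_payoff, china_payoff)
    ("race",    "race"):    (-10, -10),  # both build uncontrollable AGI; both lose power
    ("race",    "abstain"): (-10,  -8),  # the racer disempowers itself anyway
    ("abstain", "race"):    ( -8, -10),
    ("abstain", "abstain"): (  1,   1),  # controllable AI only; shared benefit
}

def best_response(player, other_action):
    """Action maximizing `player`'s payoff, holding the other side's action fixed."""
    if player == "us":
        return max(ACTIONS, key=lambda a: payoffs[(a, other_action)][0])
    return max(ACTIONS, key=lambda a: payoffs[(other_action, a)][1])

# Under these payoffs, "abstain" dominates "race" for both players,
# so (abstain, abstain) is the unique Nash equilibrium.
for other in ACTIONS:
    assert best_response("us", other) == "abstain"
    assert best_response("china", other) == "abstain"
print("Best response for each side, whatever the other does: abstain")
```

The exact numbers don't matter; any payoffs preserving that ordering (losing control is worse than any outcome where you keep it) yield the same equilibrium, which is why no Sino-US agreement is needed to get each side to abstain.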
Thanks, Akash! As I mentioned in my reply to Nicholas, I view it as flawed to think that China or the US would abstain from AGI only because of a Sino-US agreement. Rather, each would abstain unilaterally out of national self-interest.