Thanks Akash! As I mentioned in my reply to Nicholas, I view it as flawed to think that China or the US would only abstain from AGI because of a Sino-US agreement. Rather, they’d each unilaterally do it out of national self-interest.
It’s not in the US self-interest to disempower itself and all its current power centers by allowing a US company to build uncontrollable AGI.
It’s not in the interest of the Chinese Communist Party to disempower itself by allowing a Chinese company to build uncontrollable AGI.
Once the US and Chinese leadership serve their self-interest by preventing uncontrollable AGI at home, they have a shared incentive to coordinate to do the same globally. The reason this self-interest hasn’t yet played out is that US and Chinese leaders still haven’t fully understood the game-theoretic payoff matrix: the well-funded, wishful-thinking-fueled disinformation campaign arguing that Turing, Hinton, Bengio, Russell, Yudkowsky et al. are wrong (i.e., that we’re likely to figure out how to control AGI in time if we “scale quickly”) has been massively successful. That success is unsurprising, given how successful the disinformation campaigns were for, e.g., tobacco, asbestos and leaded gasoline – the only difference is that the stakes are much higher now.
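To make the payoff-matrix point concrete, here is a minimal sketch with made-up illustrative numbers, under the assumption argued above: building uncontrollable AGI disempowers the builder too, so “racing” yields a large negative payoff regardless of what the other side does. All payoffs and names here are hypothetical, chosen only to illustrate the structure of the argument:

```python
# Stylized two-player "race vs. pause" game with invented payoffs.
# Assumption (from the argument above): uncontrollable AGI disempowers
# its builder, so racing is catastrophic even if the other side pauses.
payoffs = {
    # (US action, China action): (US payoff, China payoff)
    ("race",  "race"):  (-100, -100),  # both lose control
    ("race",  "pause"): (-100,  -10),  # racer still loses control of its own AGI
    ("pause", "race"):  ( -10, -100),
    ("pause", "pause"): (   0,    0),  # status quo preserved
}

def best_response(player, other_action):
    """Return the action maximizing `player`'s payoff against `other_action`."""
    idx = 0 if player == "US" else 1
    def payoff(action):
        key = (action, other_action) if player == "US" else (other_action, action)
        return payoffs[key][idx]
    return max(["race", "pause"], key=payoff)

# Under these payoffs, pausing is a dominant strategy for both players,
# i.e., the best move no matter what the other side does:
for other in ["race", "pause"]:
    assert best_response("US", other) == "pause"
    assert best_response("China", other) == "pause"
```

The point is that if leaders actually believed the top-left quadrant looks like this, pausing domestically would be unilaterally rational; the disinformation campaign works precisely by convincing them that the “race” payoffs are positive.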
I think that China and the US would definitely agree to pause if and only if they can confirm that the other is also committing to a pause. Unfortunately, this is a really hard thing to confirm, much harder than with nuclear.
Thus, I propose that we are trapped in this suicide race unless we can come up with better coordination mechanisms: lower counterfactual cost of participation, lower entry cost, higher reliability, less reliance on a centralized authority. One candidate: decentralized, AI-powered, privacy-preserving safety inspections and realtime monitoring.
Components include: if you opt in to monitoring a particular risk a, then you get to view the reports from my monitors of myself for risk a. Worried about a? Me too. Let’s both monitor ourselves and report to each other. Not worried about b? Fine, then I’ll only share the reports from my b monitors with my fellow b-monitoring participants.
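The opt-in, reciprocal-sharing rule above can be sketched in a few lines. This is a toy illustration, not a real protocol design; all class and participant names are hypothetical, and a real system would need the privacy-preserving and tamper-resistance machinery this sketch omits:

```python
# Toy sketch of reciprocal per-risk report sharing: a viewer sees a
# reporter's self-monitoring reports on risk R only if BOTH participants
# have opted in to monitoring risk R.
from collections import defaultdict

class MonitoringNetwork:
    def __init__(self):
        self.opted_in = defaultdict(set)   # risk -> set of participants
        self.reports = defaultdict(list)   # (participant, risk) -> reports

    def opt_in(self, participant, risk):
        self.opted_in[risk].add(participant)

    def file_report(self, participant, risk, report):
        # Participants report on *themselves* for each risk they monitor.
        self.reports[(participant, risk)].append(report)

    def view_reports(self, viewer, reporter, risk):
        # Reciprocity rule: shared only within the risk's monitoring group.
        members = self.opted_in[risk]
        if viewer in members and reporter in members:
            return list(self.reports[(reporter, risk)])
        return None  # withheld from non-participants in this risk

net = MonitoringNetwork()
net.opt_in("US", "risk_a")
net.opt_in("China", "risk_a")
net.opt_in("US", "risk_b")           # China does not opt in to risk_b
net.file_report("US", "risk_a", "no violations detected")
net.file_report("US", "risk_b", "no violations detected")

# Both monitor risk_a, so reports flow; China opted out of risk_b, so none do:
assert net.view_reports("China", "US", "risk_a") == ["no violations detected"]
assert net.view_reports("China", "US", "risk_b") is None
```

The design choice this illustrates is that participation is per-risk rather than all-or-nothing, which lowers the entry cost: a party can join monitoring only for the risks it actually worries about.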
I think that China and the US would definitely agree to pause if and only if they can confirm the other also committing to a pause. Unfortunately, this is a really hard thing to confirm, much harder than with nuclear.
This seems false to me. E.g., Trump for one seems likely to do what the person who pays him the most and is the most loyal to him tells him to do, and AI-risk worriers have neither the money nor the politics to meet either of those criteria compared to, for example, Elon Musk.
Ah, I meant that they would agree to pause once things came to a head. I’m pretty sure these political leaders are selfish enough that if they saw clear evidence of their imminent demise, and had a safer option, they’d take the out.
If that’s the situation, then why the “if and only if”? If we magically made them all believe they will die if they build ASI, then each would individually be incentivized to stop it from happening, independent of China’s actions.