My take: lots of good analysis, but a few crucial mistakes and weaknesses throw the conclusions into significant doubt:
The USG will be able and willing to either provide or mandate strong infosecurity for multiple projects.
I simply don’t buy that the infosec for multiple such projects will come anywhere near that of a single project, because the overall security ends up being that of the weakest link.
Additionally, the more projects there are with a particular capability, the more people there are who can leak information, whether by talking or by spying.
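To put a toy number on the weakest-link point (my own illustrative model, not from the post, assuming each project is compromised independently): if each of $n$ projects leaks with probability $p$, the chance that at least one leaks is

$$P(\text{at least one leak}) = 1 - (1 - p)^n,$$

which is strictly increasing in $n$. With $p = 0.1$, one project gives a 10% chance of compromise, while three give $1 - 0.9^3 \approx 27\%$. Shared USG-mandated standards would correlate the defences and soften this somewhat, but an attacker still only needs to find the weakest implementation.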
The probability-weighted impacts of AI takeover or the proliferation of world-ending technologies might be high enough to dominate the probability-weighted impacts of power concentration.
Comment: We currently doubt this, but we haven’t modelled it out, and we have lower p(doom) from misalignment than many (<10%).
Seems entirely plausible to me that either one could dominate. Would love to see more analysis around this.
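To make that request concrete, the comparison is roughly (my framing, not the post’s):

$$p_{\text{takeover}} \cdot H_{\text{takeover}} \quad \text{vs.} \quad p_{\text{concentration}} \cdot H_{\text{concentration}},$$

where each $H$ is the harm conditional on that outcome. Even with the authors’ $p(\text{doom}) < 10\%$, the left-hand side can dominate if $H_{\text{takeover}}$ is judged sufficiently larger, so the conclusion hinges on magnitudes that haven’t been modelled out.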
Reducing access to these services will significantly disempower the rest of the world: we’re not talking about whether people will have access to the best chatbots or not, but whether they’ll have access to extremely powerful future capabilities which enable them to shape and improve their lives on a scale that humans haven’t previously been able to.
If you’re worried about this, I don’t think you quite realise the stakes. Capabilities mostly proliferate anyway. People can wait a few more years.
Thanks for the pushback!

Our worry here isn’t that people won’t get to enjoy AI benefits for a few years. It’s that there will be a massive power imbalance between those with access to AI and those without. And that could have long-term effects.
I maintain my position that you’re missing the stakes if you think that’s what matters. Even limiting ourselves strictly to concentration-of-power worries, risks of totalitarianism dominate these concerns.
I think that massive power imbalance (even over short periods) significantly increases the risk of totalitarianism.