Reducing access to these services will significantly disempower the rest of the world: we’re not talking about whether people will have access to the best chatbots or not, but whether they’ll have access to extremely powerful future capabilities which enable them to shape and improve their lives on a scale that humans have never previously been able to reach.
If you’re worried about this, I don’t think you quite realise the stakes. Capabilities mostly proliferate anyway. People can wait a few more years.
Our worry here isn’t that people won’t get to enjoy AI benefits for a few years. It’s that there will be a massive power imbalance between those with access to AI and those without. And that could have long-term effects.
I maintain my position that you’re missing the stakes if you think that’s important. Even limiting ourselves strictly to concentration of power worries, risks of totalitarianism dominate these concerns.
I think massive power imbalance makes it less likely that the post-AGI world is one where many different actors with different beliefs and values can experiment, interact, and reflect. And so I’d expect its long-term future to be worse.
Thanks for the pushback!
I think that massive power imbalance (even over short periods) significantly increases the risk of totalitarianism.