In the past few weeks I’ve noticed a significant change in the Overton window of what seems possible to talk about. I think the broad strokes of this article seem basically right, and I agree with most of the details.
I don’t expect this to immediately cause AI labs or world governments to join hands and carry out a sensibly executed moratorium. But I’m hopeful about it paving the way for the next steps toward one. I like that this article, while making an extremely huge ask of the world, spells out exactly how huge an ask is actually needed.
Many people on Hacker News seemed suspicious of the FLI Open Letter because it looks superficially like the losers in a race trying to gain a local political advantage. I like that Eliezer’s piece makes it clearer that it’s not about that.
I do still plan to sign the FLI Open Letter. If a better open letter comes along, making an ask that is more complete and concrete, I’d sign that as well. I think it’s okay to sign open letters that aren’t exactly the thing you want, in order to help build momentum and common knowledge of what people think. (I think not-signing-the-letter while arguing for what a better letter should say, similar to what Eliezer did here, also seems like a fine strategy for building common knowledge.)
I’d be most interested in an open letter for something like a conditional commitment (i.e. a Kickstarter mechanic) to shut down AI programs IFF some critical mass of other countries and companies shut down their AI programs, which states something like:
It’d be good if all major governments and AI labs agreed to pause capabilities research indefinitely while we make progress on existential safety issues.
Doing this successfully is a complex operation, and requires solving novel technological and political challenges. We agree it’d be very hard, but it is nonetheless one of the most important things for humanity to collectively try to do. Business-as-usual politics will not be sufficient.
This is not claiming it’d necessarily be good for any one lab to pause unilaterally, but we all agree that if there were a major worldwide plan to pause AI development, we would support that plan.
If safe AGI could be developed, it’d be extremely valuable for humanity. We’re not trying to stop progress; we’re trying to make sure we actually achieve progress, rather than causing catastrophe.
I think that’s something several leading AI lab leaders should basically be able to support, given their other stated views.
A 30-year delay is actually needed? Since that’s impossible, doesn’t this collapse to the “we’re doomed regardless” case? Which devolves to “might as well play with AGI while we can...”