I said things like: if you can’t get the world to coordinate on banning gain-of-function research, in the wake of a trillions-of-dollars, tens-of-millions-of-lives pandemic “warning shot”, then you’re not going to get coordination in the much harder case of AI research.
To be clear, I largely agree with you, but I don’t think you’ve really steel-manned (or at least accurately modeled) the government’s decision-making process.
We do have an example of a past scenario where:
a new technology of enormous, potentially world-ending impact was first publicized/predicted in science fiction
a scientist actually realized the technology was near-future feasible and convinced others
Western governments actually listened to said scientists
instead of coordinating on a global ban of the technology, they fast-tracked its development
The tech, of course, was nuclear weapons; the sci-fi was “The World Set Free” by H.G. Wells; the first advocate scientist was Leo Szilard, though nobody listened until he recruited Einstein.
So if we apply that historical lesson to AI risk … the failure (so far) seems twofold:
failure on the “convince a majority of the super-high-status experts” front
and perhaps that’s good! Because the predictable reaction is tech acceleration, not coordination on deceleration
AGI is coup-complete.