Very nice!
I’ve been puzzled at the assumption that private corporations will be allowed to make decisions of such consequence that they amount to determining the future, or effectively taking over the world. I think the false dichotomy of nationalization vs autonomy has helped perpetuate this important error in reasoning. Soft nationalization is a great term to introduce.
I recently gave a very brief set of arguments similar to those you’ve fleshed out so well here:
Governments will take control of AGI before it’s ASI, right?
Asserting control over critical decisions seems so easy, so much the government's actual job, and such a no-brainer as capabilities visibly improve that it seems inevitable to me. I think the reason the alignment community hasn't adopted this view is a historical assumption of an inattentive humanity. That assumption made sense when we expected a fast takeoff, but with even a few years' worth of slow takeoff before AGI is capable of a relatively certain pivotal act, government control of some sort seems almost certain.
The consequences of government control seem likely to include allowing more AGI development by default, creating a scenario in which humans control multiple RSI-capable AGIs as they improve to ASI. This raises the question of "If we solve alignment, do we die anyway?"
I’d definitely agree with the perspective you’re sharing!
Even in a fast-takeoff / total-nationalization scenario, I don't think politicians will be so blindsided that the political discussion goes from "regulation as usual" to "total nationalization" in a couple of months. It's possible, but unlikely.
I think it's equally or more likely that the amount of government involvement will scale up over 2–5 years, and during that time a lot of these policy levers will be attempted. The success or failure of some of these policy levers at achieving US goals will probably determine whether involvement proceeds all the way to "total nationalization".