Note how your optimal response changes a lot based on threat model.
Foom: stop the reaction from being able to begin. Foom is too fast to control. Optimal response: AI pauses (each pause effectively adds its duration to the lives of some people alive today, who may still die after the foom). Political action: request AI pauses.
Thump: you need to keep up with the arms race. A dictator upgrading from technicals and AK-47s to hypersonic drone swarms? You need to be developing your own restricted models so you have your own equivalent weapons in greater numbers. The free world has vastly more resources, so it can afford less efficient (and more controllable) AI models to R&D and build equivalent weaponry. Political action: request a government-funded moonshot effort to develop AI.
Whimper: you need to be upgrading humans with neural implants or other methods over time so they remain above some intelligence floor needed to not be scammed out of power. Humans don’t need to stay the smartest creatures around, but they need AI assistants they can trust and enough intelligence to double-check their work. This lets humans retain property rights over the solar system, refusing to grant AI any rights at all in Sol, while they enjoy utopia. Political action: request an FDA overhaul.
Yeah, different regulatory strategies for different scenarios for sure. It’s tricky, though, that we don’t know which scenario will come to pass. I myself feel quite uncertain.
There is an important distinction around FOOM scenarios. They are too fast to legislate while they are in progress. The others give humanity a chance to see what is happening and change the rules ‘in flight’.
Preventative legislation for a scenario that has never yet happened and sounds like implausible science fiction is a particularly hard ask. I can see why, if someone thought FOOM was highly likely, they could be pessimistic about governance as a path to safety.
“The others give humanity a chance to see what is happening and change the rules ‘in flight’.”

This is possible in non-Foom scenarios, but not a given (e.g. super-human persuasion AIs).

Good point. Some specific narrow-domain superhuman skills, like persuasion, could also prevent in-flight regulation of slower scenarios. Another possible narrow domain would be one which enabled misuse on a scale that disrupted governments substantially, such as bioweapons.
“I can see why, if someone thought FOOM was highly likely, they could be pessimistic about governance as a path to safety.”
It’s worse than that, because foom is so powerful that the difference between “no government restricts AI meaningfully” and “9 out of 10 power blocs able to build AI restrict it” is small. A foom achieving a 90-day takeover implies a doubling time under a week; if all power blocs were equal in starting resources, the lone unregulated bloc starts with a tenth of the world’s resources, so the “90 percent regulation” case trails the “no regulations” case by log2(10) ≈ 3.3 doublings, call it 4, or about 4 weeks.
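For concreteness, here is a minimal back-of-envelope sketch of that arithmetic. The ~10,000x growth requirement is my own illustrative assumption; only the 90-day takeover and the ten equal blocs come from the comment above.

```python
import math

# Back-of-envelope check on the foom arithmetic above.
# Assumed for illustration: a takeover requires ~10,000x growth in
# resources, growth is cleanly exponential, and ten power blocs start
# with equal resources.

TAKEOVER_GROWTH = 10_000   # assumed total growth factor for a takeover
TAKEOVER_DAYS = 90         # the 90-day foom scenario

doublings_needed = math.log2(TAKEOVER_GROWTH)      # ~13.3 doublings
doubling_time = TAKEOVER_DAYS / doublings_needed   # ~6.8 days, i.e. under a week

# If 9 of 10 blocs regulate, the lone unregulated bloc starts with 1/10
# of the world's resources, so it needs log2(10) extra doublings to
# reach where the "no regulations" world would have been.
extra_doublings = math.log2(10)                    # ~3.3, round up to 4
delay_days = extra_doublings * doubling_time       # ~23 days, call it 4 weeks

print(f"doubling time: {doubling_time:.1f} days")
print(f"regulation buys: {extra_doublings:.1f} doublings, ~{delay_days:.0f} days")
```

Note the delay bought by near-universal regulation is always log2(10) doubling times, whatever growth factor you assume, so it shrinks as the doubling time does.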
One governance solution proposed to handle this is “nuke ’em”, but 7-day doubling times imply other things, like some method of building infrastructure that doesn’t need humanity’s current cities, factories, and specialists, because by definition humans are not that fast at building anything. Just shipping parts around takes days.
It would be like trying to stop machine cancer. Nukes just buy time.
I personally don’t think the above is possible starting from current technology; I am just trying to take the scenario seriously. (If it’s possible at all, I think you would need to bootstrap there through many intermediate stages of technology that each take unavoidable amounts of time.)