False. It requires only a few events, like smarter-than-human AI being invented, and the control problem not being solved. I don’t think any of these things is very unlikely.
Not solving the control problem isn’t a sufficient condition for AI danger: the AI also needs inimical motivations. So that is a third premise. Also fast takeoff of a singleton AI is being assumed.
ETA: The last two assumptions are so frequently made in AI risk circles that they lack salience—people seem to have ceased to regard them as assumptions at all.
Well the control problem is all about making AIs without “inimical motivations”, so that covers the same thing IMO. And fast takeoff is not at all necessary for AI risk. AI is just as dangerous if it takes its time to grow to superintelligence. I guess it gives us somewhat more time to react, at best.
Well the control problem is all about making AIs without “inimical motivations”,
Only if you use language very loosely. If you don’t, the Value Alignment problem is about making an AI without inimical motivations, and the Control Problem is about making an AI you can steer irrespective of its motivations.
And fast takeoff is not at all necessary for AI risk. AI
This is about Skynet scenarios specifically. If you have multipolar slow development of ASI, then you can fix the problems as you go along.
I guess it gives us somewhat more time to react, at best.
Which is to say that in order to definitely have a Skynet scenario, you definitely do need things to develop at more than a certain rate. So speed of takeoff is an assumption, however dismissively you phrase it.