Well the control problem is all about making AIs without “inimical motivations”, so that covers the same thing IMO. And fast takeoff is not at all necessary for AI risk. AI is just as dangerous if it takes its time to grow to superintelligence. I guess it gives us somewhat more time to react, at best.
Well the control problem is all about making AIs without “inimical motivations”,
Only if you use language very loosely. If you don’t, the Value Alignment Problem is about making an AI without inimical motivations, and the Control Problem is about making an AI you can steer irrespective of its motivations.
And fast takeoff is not at all necessary for AI risk. AI
This is about Skynet scenarios specifically. If you have multipolar, slow development of ASI, then you can fix the problems as you go along.
I guess it gives us somewhat more time to react, at best.
Which is to say that in order to definitely have a Skynet scenario, you definitely do need things to develop at more than a certain rate. So speed of takeoff is an assumption, however dismissively you phrase it.