An important distinction jumps out at me: if we slowed down all technological progress equally, that wouldn't actually "buy time" for anything in particular. I can't think of much we'd want to be doing with that time besides either 1. researching other technologies that might help with avoiding AI risk (the one that comes to mind is technology for uploading or simulating a human mind before we build an AI from scratch, which sounds at least somewhat less dangerous from a human perspective), or 2. thinking about AI value systems.
Presumably 2 is the reason anyone would suggest slowing down AI research, but a notable obstacle to 2 at present is that large numbers of people aren't concerned about AI risk because it seems so far away. If we get to the point where people actually expect an AI very soon, then slowing down while we discuss it might make sense.