I think differential technological development—prioritising some areas over others—is the current approach. It achieves the same end result but has a higher chance of working.
Thanks for your response and not to be argumentative, but honest question: doesn’t that mean that you want some forms of AI research to slow down, at least on a relative scale?
I personally don’t see anything wrong with this stance, but it seems to me like you’re trying to suggest that this trade-off doesn’t exist, and that’s not at all what I took from reading Bostrom’s Superintelligence.
An important distinction that jumps out to me: if we slowed down all technological progress equally, that wouldn’t actually “buy time” for anything in particular. I can’t think of much we’d want to be doing with that time besides either 1. researching other technologies that might help us avoid the dangers of AI (the main one that comes to mind is technology for downloading or simulating a human mind before we build AI from scratch, which sounds at least somewhat less dangerous from a human perspective than building AI from scratch), or 2. thinking about AI value systems.
Option 2 is presumably the reason anyone would suggest slowing down AI research, but I think a notable obstacle to it at present is that large numbers of people aren’t concerned about AI risk because it seems so far away. If we get to the point where people actually expect an AI very soon, then slowing down while we discuss it might make sense.
The trade-off exists. Some ways of resolving it are better than others, and some ways of phrasing it are better than others.