That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and capabilities, and it’s risk-aware AI researchers who are most likely to figure out how to make safe AI.
Is it really the case that nobody interested in AI risk/safety wants to stop or slow down progress in AI research? It seemed to me there was perhaps at least a substantial minority that wanted to do this, to buy time.
I remember that we were joking at the NYC Singularity Summit workshop a few years back that maybe we should provide AI researchers with heroin and philosophers to slow them down.
As far as I have noticed, there are few, if any, voices in the academic/nearby AI safety community that promote slowing AI research as the best (or even a good) option. People talking about relinquishment or slowing seem to be far outside the main discourse, typically people who have only a passing acquaintance with the topic or a broader scepticism of technology.
The best antidote is to start thinking about the details of how one would actually go about it: that generally shows why differential development is sensible.
I think differential technological development (prioritising some areas over others) is the current approach. It achieves the same result but has a higher chance of working.
Thanks for your response and not to be argumentative, but honest question: doesn’t that mean that you want some forms of AI research to slow down, at least on a relative scale?
I personally don’t see anything wrong with this stance, but it seems to me like you’re trying to suggest that this trade-off doesn’t exist, and that’s not at all what I took away from reading Bostrom’s Superintelligence.
An important distinction jumps out at me: if we slowed down all technological progress equally, that wouldn’t actually “buy time” for anything in particular. I can’t think of anything we’d want to be doing with that time besides either 1. researching other technologies that might help with avoiding AI risk (few come to mind at the moment, though one is technology for uploading or simulating a human mind before we build AI from scratch, which sounds at least somewhat less dangerous from a human perspective than building AI from scratch), or 2. thinking about AI value systems.
Point 2 is presumably the reason anyone would suggest slowing down AI research, but I think a notable obstacle to it at present is that large numbers of people are unconcerned about AI risk because it seems so far away. If we get to the point where people actually expect an AI very soon, then slowing down while we discuss it might make sense.
The trade-off exists. There are better ways of resolving it than others, and there are better ways of phrasing it than others.
I am one of those proponents of stopping all AI research and I will explain why.
(1) Don’t stand too close to the cliff. We don’t know how AGI will emerge, and by the time we are close enough to know, it will probably be too late: either human error or malfeasance will push us over the edge.
(2) Friendly AGI might be impossible. Computer scientists cannot reliably predict the behavior of even simple programs; the halting problem, one specific kind of prediction, is provably undecidable for non-trivial code (a sketch of the standard argument is below). I doubt we will even grasp why the first AGI we build works.
Neither of these statements seems controversial, so if we are determined not to produce unfriendly AGI, the only safe approach is to stop AI research well before it becomes dangerous. It’s playing with fire in a straw cabin that is our only shelter on a deserted island. Things would be different if we someday solve the friendliness problem, build a provably secure “box”, or become well distributed across the galaxy.
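To make the halting-problem claim in point (2) concrete, here is a minimal Python sketch of the standard diagonalization argument. The `halts` and `contrarian` names are purely illustrative, and `halts` is a hypothetical predictor that cannot actually be implemented; the point is only that if a general behaviour predictor existed, we could write a program that contradicts it.

```python
# Sketch of the standard argument that a general halting predictor cannot exist.
# `halts` is hypothetical: suppose someone claims it returns True exactly when
# calling f() would eventually terminate.

def halts(f) -> bool:
    raise NotImplementedError("no such general predictor can exist")

def contrarian():
    """Do the opposite of whatever the predictor says about this function."""
    if halts(contrarian):
        while True:   # the predictor said "halts", so loop forever
            pass
    return            # the predictor said "loops forever", so halt immediately

# Whichever answer halts(contrarian) gives, contrarian() does the opposite,
# so no implementation of halts() can be correct for all programs.
```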