Part of the problem stems from different uses of the word “caution”.
Possible outcomes for the earth’s climate (and the resulting cost in lives and money) over the next century range from “everything will be fine” to “catastrophic”; there is also uncertainty over the costs and benefits of any given intervention. So what should we do?
Some say, “Caution! We don’t know what’s going to happen; let’s not change things too fast. Keep our current policies and behaviors until we know more.”
Others say, “Caution! We don’t know what’s going to happen, and we’re already changing things (the atmosphere) very quickly indeed. We need to move quickly politically and economically in order to slow down that change.”
For most people, it seems, caution means: assume things will continue more or less as they are and be careful about changing your behavior, rather than seek to avoid a high risk of catastrophic loss.
Discussions about runaway AI often take a similar turn. People will come up with a list of reasons why they think it might not be a problem: maybe the human brain already operates near the physical limit of computation; maybe there’s some ineffable quantum magic thingy that you need to get “true AI”; maybe economics will continue to work just like it does in econ 101 textbooks and guarantee a soft transition; maybe it’s just a really hard problem and it will be a very long time before we have to worry about it.
Maybe. But there’s no good reason to believe any of those things are true, and if they aren’t, then we have a serious concern.
Personally, I think it’s like we’re driving blindfolded with the accelerator pressed to the floor. There’s a guy in the other seat who says he can see out the window, and he’s yelling “I think there’s a cliff up ahead—slow down!” We’re suggesting he not be too hasty.
But I can see the other side, too: if we radically changed policy every time some crank declared that doom was at hand, we’d be much worse off.
I have myself proposed arguments against runaway AI similar to the ones you suggest, mainly to show how little we actually understand about what it takes to be intelligent with finite resources.
I wouldn’t use these as arguments that it isn’t going to be a problem, only that working to understand real-world intelligence might be a more practical activity than trying to build safeguards against scenarios for which we don’t have a strong inside view.