Re: “The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there’s an AI that’s at all capable of such improvement, the AI will rapidly move outside our control.”
If its creators are incompetent. Those who think this are essentially betting on the incompetence of the creators.
There are numerous counter-arguments—the shifting moral zeitgeist, the downward trend in deliberate death, the safety record of previous risky tech enterprises.
A stop button seems like a relatively simple and effective safety feature. If you can get the machine to do anything at all, then you can probably get it to turn itself off.
See: http://alife.co.uk/essays/stopping_superintelligence/
The creators will likely be very smart humans assisted by very smart machines. Betting on their incompetence is not a particularly obvious thing to do.
Missing the point. I wasn't arguing that there are no reasons to think the bad-AI-goes-FOOM scenario won't happen; indeed, I said explicitly that I didn't think it would occur. My point was that if one is going to make an argument here that relies on that premise, one needs to recognize that the premise is controversial and be explicit about it (say, by giving basic reasoning for it, or even just writing "If one accepts that X, then..." etc.).