That’s just… wow. That’s frighteningly stupid. That’s about as bad as someone saying they aren’t worried about a nuclear reactor undergoing a meltdown because they had their local clergy bless it, except that the potential negative payoff here is orders of magnitude higher. I don’t put a high probability on an AI-triggered singularity, but this is just… wow. One thing seems pretty clear: if an AGI does do a hard takeoff to control its light cone, the result is likely to be really bad, simply because so many people are being stupid about it.
Part of me is worried that the fact that the SIAI people are thinking much more carefully about some of these issues should maybe suggest that recursive self-improvement is much more likely than I estimate.
To me, the frightening thing isn’t the original mistake (though it is egregious), it’s the fact that the response to having it pointed out was “You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans” rather than “OOPS!”
A similar specific variant: http://s.wsj.net/public/resources/images/OB-DU671_0604dn_D_20090604122543.jpg
Man, we’re never gonna let Hibbard forget that, are we? You propose one obviously-flawed genocidal AI Overlord...
Sorry, did he ever actually propose a smiley tiler? I thought that was just used as an example of a simple way one could easily go wrong.
Well, he sort of did, actually.