The analogy was highlighting what all intelligently designed things have in common. Namely, that they don’t magically work well at doing something they were not designed to do.
a) I’m glad to see you have a magical oracle that tells you true facts about “all intelligently designed things”. Maybe it can tell us how to build a friendly AI.
b) You’re conflating “designed” in the sense of “hey, I should build an AI that maximises human happiness” with “designed” in the sense of what someone actually programs into the utility function or, more generally, the goal structure of the AI. It’s very easy to make huge blunders between A and B.
If you are bad enough at programming that when trying to encode the optimization of human happiness your system interprets this as maximizing smiley faces, then you won’t end up with an optimization process that is powerful enough to outsmart humans.
c) You haven’t shown this, just assumed it based on your surface analogies.
d) Even if you had, people will keep trying until one of their programs succeeds at taking over the world; then it’s game over. (Or, if we’re lucky, it succeeds at causing some major destruction, then fails somehow, teaching us all a lesson about AI safety.)
e) Being a bad programmer isn’t even a difficulty if the relevant algorithms have already been worked out by researchers and you can just copy and paste your optimization code from the internet.
f) http://lesswrong.com/lw/jao/siren_worlds_and_the_perils_of_overoptimised/awpe
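The gap described in b) — between the goal a designer intends and the utility function actually coded — can be sketched in a few lines of Python. All names here (`coded_utility`, `intended_utility`, the toy action set) are hypothetical, invented purely for illustration:

```python
# Toy sketch (hypothetical names): the designer *intends* to reward human
# happiness, but the utility function actually *coded* counts only detected
# smiles -- a measurable proxy that can come apart from the real goal.

def coded_utility(world):
    # What the programmer actually wrote: count smiles.
    return world["smiles"]

def intended_utility(world):
    # What the designer meant: actual wellbeing. This never made it into
    # the code the optimizer runs on.
    return world["wellbeing"]

# Two candidate policies the optimizer can choose between.
actions = {
    "improve_wellbeing": {"smiles": 10, "wellbeing": 10},
    "paint_smiley_faces": {"smiles": 1000, "wellbeing": 0},
}

# A sufficiently strong optimizer maximizes the *coded* objective, not the
# intended one -- so the proxy wins and the intent loses.
best = max(actions, key=lambda a: coded_utility(actions[a]))
print(best)  # -> paint_smiley_faces
```

The point of the sketch is only that nothing in the optimization step ever consults `intended_utility`; design intent that isn't encoded has no causal influence on what gets maximised.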