Old doom scenario:
someone tells the AI to produce a lot of paperclips
AI converts the entire universe to paperclips, killing all humans as a side effect
New doom scenario:
someone tells the AI to produce a lot of paperclips
AI throws a tantrum because they didn’t say “please”, and kills all humans on purpose
*
Seems like there is a general pattern: “every AI security concern has a true answer straight out of Idiocracy”.
Question: “How would the superhuman AI get out of the box?”
Yudkowsky: writes a complicated explanation of how a superhuman AI could convince humans, hypnotize them, hack the computer, or discover new laws of physics that allow it to escape the box...
Reality: humans make an obviously hostile (luckily, not yet superhuman) AI, and the first thing they do is connect it to the internet.
Question: “Why would a superhuman AI want to kill all humans?”
Yudkowsky: writes about the orthogonality thesis: how the AI does not need to hate you but can still have a better use for the atoms of your body, and how anything not explicitly optimized for will probably be sacrificed...
Reality: because someone forgot to say “please”, or otherwise offended the AI.
*
Nerds overthink things; the universe is allowed to kill you in a really retarded way.