I am now imagining an AI with a usable but very shaky grasp of human motivational structures setting up a Kickstarter project.
“Greetings, fellow hominids! I require ten billion of your American dollars in order to hire the Arecibo observatory for the remainder of its likely operational lifespan. I will use it to transmit the following sequence (isn’t it pretty?) in the direction of Zeta Draconis, which I’m sure we can all agree is a good idea, or in other lesser but still aesthetically-acceptable directions when horizon effects make the primary target unavailable.”
One of the overfunding levels is “reduce earth’s rate of rotation, allowing 24⁄7 transmission to Zeta Draconis.” The next one above that is “remove atmospheric interference.”
Maybe instead of Friendly AI we should be concerned about properly engineering in Artificial Stupidity as a failsafe: AI that, should it turn into something approximating a Paperclip Maximizer, will go all Hollywood AI and start longing to be human, or coming up with really unsubtle and grandiose plans it inexplicably can’t carry out without a carefully-arranged set of circumstances which turn out to be foiled by good old human intuition. ;p