“AI Nanny” does seem even harder than FAI (the usual arguments apply to it with similar strength, and it is additionally asked to fulfill a specific wish), and compared to no-worries-AGI this idea has better immunity to arguments about the danger of its development. It’s a sufficiently amorphous proposal to shroud many AGI projects without essentially changing anything about them, including project members’ understanding of AI risk. So on net, this looks to me like a potentially negative development.
Is anyone surprised by this? A few weeks ago I wrote to cousin_it during a chat session:
Wei Dai: FAI seems to have enough momentum now that many future AI projects will at least claim to take Friendliness seriously
Wei Dai: or another word, like machine ethics
It’s one of those details that are obviously important for memetic strategies to account for but will still get missed by nine out of ten naive intuitive-implicit models. There are an infinite number of ways for policy-centered thinking to kill a mind, both figuratively and literally, directly and indirectly.
Sadly, I am :-(
A while ago, when I learned of Abram Demski (of all people!) helping someone to build an AGI, I felt the same surprise as now but apparently didn’t update strongly enough. Optimism seems to be the mind-killer in these matters. In retrospect it should’ve been obvious that people like Goertzel would start paying lip service to friendliness while still failing to get the point.
Many IT corporations already take their reputations seriously. Robot makers are sometimes close to the line, though.