Nastier issue: the harder argument is convincing people that UFAI is an avoidable risk. If you can’t convince people they have a realistic chance (i.e., one they would gamble on, given the possible benefits of FAI) of winning on this issue, then it doesn’t matter how informed they are.
See: Juergen Schmidhuber’s interview on this very website, where he basically says, “We’re damn near AI in my lab, and yes, it is a rational optimization process,” followed by, “We see no way to prevent the paper-clipping of humanity whatsoever, so we stopped giving a damn and just focus on doing our research.”