It’s hard to tell if this is good or bad. They don’t say anything about extinction risks. That could be because they’ve recognized the possibility of extinction and talking about it is just politically unfashionable, or because they don’t consider it a credible concern. In the latter case, this would probably be good in the short term but would lull people into a false sense of security in the long term, unless things change further.
Sort of a follow-up post here: http://lesswrong.com/r/discussion/lw/nqp/notes_on_the_safety_in_artificial_intelligence/