Yeah, echoing jsteinhardt, I think you misread the letter, and science journalists in general are not to be trusted when it comes to reporting on AI or AI dangers. Dietterich and Horvitz are the second and third listed signatories of the FLI open letter, and this letter seems to me to be saying “hey general public, don’t freak out about the Terminator, the AI research field has this under control—we recognize that safety is super important and are working hard on it (and you should fund more of it).”
Great, except:
a) they don’t have it under control, and
b) no one in mainstream AI academia is working on the control problem for superintelligence.
So: can you find the phrase in the letter that corresponds to the MIRI open problem Nate Soares presented on at the AAAI workshop on AI ethics, which Dietterich attended a few days later?
If not, maybe you should reduce your confidence in your interpretation. My suspicion is that MIRI is rapidly becoming mainstream, and that the FLI grant is attracting even more attention. Perhaps more importantly, I think we’re in a position where it’s more effective to treat AI safety issues as mainstream than fringe.
I also think that we’re interpreting “under control” differently. I’m not claiming the problem is solved, just that it’s being worked on (in the way that academia works on these problems), and that getting Congress, the media, and so on involved in a way not mediated by experts is likely to do more harm than good.