So, can you find the phrase in the letter that’s the MIRI open problem that Nate Soares presented on at the AAAI workshop on AI ethics that Dietterich was at a few days later?
If not, maybe you should reduce your confidence about your interpretation. My suspicion is that MIRI is rapidly becoming mainstream, and that the FLI grant is attracting even more attention. Perhaps more importantly, I think we’re in a position where it’s more effective to treat AI safety issues as mainstream than fringe.
I also think that we’re interpreting “under control” differently. I’m not claiming that the problem is solved, just that it’s being worked on (in the way that academia works on these problems), and that getting Congress, the media, and so on involved in a way not mediated by experts is likely to do more harm than good.
Great, except
a) they don’t have it under control, and
b) no one in mainstream AI academia is working on the control problem for superintelligence.