I think practitioners of ML should be more wary of their tools. I’m not saying ML is a fast track to strong AI, just that we don’t know whether it is. Several ML people voiced reassurances recently, but I would have expected them to say that whether or not danger were detectable at this point, so the reassurances don’t tell us much. So I think someone should find a way to make the field more careful.
I don’t think that someone should be MIRI, though; the status differences are too large, they aren’t insiders, etc. My best bet would be a prominent ML researcher speaking up and giving detailed, plausible hypotheticals in public (I mean near-future hypotheticals where some error creates a lot of trouble for everyone).