That’s a great point! It would also help communicate the difficulty of the problem if they conclude that the field is in trouble and time is running out (assuming that’s true – experts disagree here). I think AI strategy people should consider trying to get more ambassadors on board. (I now see the ambassador effect as more important than those people’s direct contributions, but you definitely only want ambassadors whose understanding of AI risk is crystal clear.)
Edit: That said, bringing in reputable people from outside ML may not be a good strategy to convince opinion leaders within ML, so this could backfire.