I think we’re talking past each other a bit. I’m saying that people sympathetic to AI risk will be discouraged from publishing AI capability work, and publishing AI capability work is exactly why Stuart Russell and Yoshua Bengio have credibility. Because publishing AI capability work is so strongly discouraged, any new professors of AI will to some degree be selected for not caring about AI risk, which was not the case when Russell or Bengio entered the field.
I agree that this is a concern that hypothetically could make a difference, but as I said in my other comment, we are likely to alienate many of the best people by doing means-end-reasoning like this (including people like Stuart and Yoshua), and also, this seems like a very slow process that would take decades to have a large effect, and my timelines are not that long.
Seems like we mostly agree, and our difference comes down to timelines. I agree the effect is more of a long-term one, though I wouldn't say decades. OpenAI was founded in 2015 and raised the profile of AI risk by 2022, so in the counterfactual where Sam Altman was dissuaded from founding OpenAI due to timeline concerns, AI risk would have much lower public credibility today: a gap of less than a decade.
Public recognition as a researcher does seem to favour longer periods of time, though; the biggest names are all people who've been in the field for multiple decades, so you have a point there.
Stuart and Yoshua seem to be welcomed into the field just fine, and their stature as respected people on the topic of existential risk seems to be in good shape, and I don’t expect that to change on the relevant timescales.
I think people talking openly about the danger and harm caused by developing cutting-edge systems is exactly what made them engage with the field, and a field that didn't straightforwardly recognize that harm and try to hold the people causing it accountable would have been much less likely to get at least Stuart involved (I know less about Yoshua). Stuart himself is one of the people who is harshest toward those doing dangerous research, and among those most strongly calling for pretty hard accountability.