As Buck points out, Toby’s estimate of P(AI doom) is closer to the ‘mainstream’ than MIRI’s, and close enough that “so low” doesn’t seem like a good description.
I can’t really speak on behalf of others at FHI, of course, but I don’t think there is some ‘FHI consensus’ that is markedly higher or lower than Toby’s estimate.
Also, I just want to point out that Toby’s 1⁄10 figure is not for human extinction; it is for existential catastrophe caused by AI, which includes scenarios that don’t involve extinction (forms of ‘lock-in’). His estimate for extinction caused by AI is therefore lower than 1⁄10.