In Bostrom’s recent interview with Liv Boeree, he said (I’m paraphrasing; you’re probably better off listening to what he actually said)
p(doom)-related
it’s actually gone up for him, not down (contra your guess, unless I misinterpreted you), at least when broadening the scope beyond AI (cf. vulnerable world hypothesis, 34:50 in video)
re: AI, his prob. dist. has ‘narrowed towards the shorter end of the timeline—not a huge surprise, but a bit faster I think’ (30:24 in video)
also re: AI, ‘slow and medium-speed takeoffs have gained credibility compared to fast takeoffs’
he wouldn’t overstate any of this
contrary to people’s impression of him, he’s always been writing about ‘both sides’ (doom and utopia)
e.g. his Letter from Utopia, first published in 2005
in the past it just seemed more pressing to him to call attention to ‘various things that could go wrong so we could avoid these pitfalls and then we’d have plenty of time to think about what to do with this big future’
this reminded me of this illustration from his old paper introducing the idea of x-risk prevention as global priority: