I’ve begun to notice discussion of AI risk in more and more places in the last year. Many of them reference Superintelligence. It doesn’t seem like a confirmation bias/Baader-Meinhof effect, not really. It’s quite an unexpected change. Have others noticed a similar broadening in the sorts of people they encounter talking about this?
Yup. Nick Bostrom is basically the man. Above and beyond being the man, he’s a respectable focal point for a sea change that has been happening for broader reasons.