Does the data note whether the shift is among new machine learning researchers? Among those who have a p(Doom) > 5%, I wonder how many would come to that conclusion without having read lesswrong or the associated rationalist fiction.
The dataset is public and includes a question “how long have you worked in” the “AI research area [you have] worked in for the longest time,” so you could check something related!
Thanks for the link! I ended up looking through the data, and there wasn't any clear correlation between the amount of time spent in a research area and p(Doom).
I ran a few averages by both time spent in research area and region of undergraduate study here: https://docs.google.com/spreadsheets/d/1Kp0cWKJt7tmRtlXbPdpirQRwILO29xqAVcpmy30C9HQ/edit#gid=583622504
For the most part, the groups don't differ very much, although, as might be expected, more North Americans have a high p(Doom) conditional on HLMI than respondents from other regions.
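For anyone who wants to redo this kind of grouping directly on the raw export rather than in a spreadsheet, here's a minimal pandas sketch. The file name and column names ("ai_survey_responses.csv", "years_in_area", "p_doom") are placeholders, not the dataset's actual headers, so adjust them to whatever the downloaded file uses:

```python
import pandas as pd

# Load the survey export (placeholder file name).
df = pd.read_csv("ai_survey_responses.csv")

# Bucket respondents by how long they've worked in their longest-held
# research area, then compare p(Doom) summaries across buckets.
# "years_in_area" and "p_doom" are assumed column names.
df["tenure_bucket"] = pd.cut(
    df["years_in_area"],
    bins=[0, 2, 5, 10, 100],
    labels=["0-2 yrs", "2-5 yrs", "5-10 yrs", "10+ yrs"],
)

summary = (
    df.groupby("tenure_bucket", observed=True)["p_doom"]
    .agg(["mean", "median", "count"])
)
print(summary)
```

The same groupby pattern works for region of undergraduate study: swap the tenure bucket for a region column and compare the group means.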