On attitudes among ML researchers, surveys (e.g.) provide some information, but notably, most ML researchers report at least a 5% probability of doom (or 10%, depending on how the question is asked), yet this belief doesn't seem to translate into their actions or culture. Interviews might reveal researchers' attitudes better than closed-ended surveys (note to self: talk to Vael Gates).
Critically, this explanation is only necessary if we assume that researchers care about basically everyone in the present (to a loose approximation). If we instead model researchers as basically selfish by default, then the small chance of a technological singularity can outweigh the high chance of death, especially for older folks.
Basically, this could be framed as a goal alignment problem: LW and AI researchers have very different goals in mind.