What do people (outside this community) think about AI? What will they think in the future?
Attitudes predictably affect relevant actors’ actions, so this is a moderately important question. And it’s rather neglected.
Groups whose attitudes are likely to be important include ML researchers, policymakers, and the public.
On attitudes among the public, surveys provide some information, but I suspect attitudes will change (in potentially predictable ways) as AI becomes more salient and some memes/framings get locked in. Perhaps some survey questions (maybe general sentiment on AI) are somewhat robust to changes in memes, while others (maybe beliefs about how AI affects the economy, or attitudes toward regulation) may change a lot in the near future.
On attitudes among ML researchers, surveys (e.g.) provide some information, but puzzlingly, most ML researchers say there's at least a 5% probability of doom (or 10%, depending on how you ask), yet this doesn't seem to translate into their actions or culture. Perhaps interviews would reveal researchers' attitudes better than closed-ended surveys do (note to self: talk to Vael Gates).
AI may become much more salient in the next few years, and memes/framings may get locked in.
Critically, this is only puzzling if we assume that researchers care about basically everyone in the present (to a loose approximation). If we instead model researchers as basically selfish by default, then the huge upside of a small-chance technological singularity can outweigh the higher chance of death, especially for older folks, who have less remaining lifespan to lose.
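To make the selfish-researcher argument concrete, here is a minimal expected-value sketch; the probabilities and the utility multiplier k are illustrative assumptions on my part, not figures from the surveys above.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Illustrative expected-value sketch for a selfish researcher.
% All numbers are assumptions for illustration, not survey results:
% p_doom: the researcher's own probability of dying from AI,
% p_sing: their probability of a technological singularity,
% k:      how many times more they value a post-singularity life
%         than their remaining ordinary lifespan (normalized to 1).
\[
  \mathbb{E}[U]
    = p_{\mathrm{sing}} \cdot k - p_{\mathrm{doom}} \cdot 1
    = 0.05 \times 100 - 0.10 \times 1
    = 4.9 > 0 .
\]
% For a selfish agent, the gamble looks positive even though death is
% twice as likely as a singularity, and the effect strengthens as the
% remaining-lifespan term shrinks (i.e., for older researchers).
\end{document}
```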
Basically, this could be explained as a goal-alignment problem: LW and AI researchers have very different goals in mind.