Things that surprised me about the results
There’s more variety than I expected in the group of people who are deferred to
I suspect that some of the people in the “everyone else” cluster defer to people in one of the other clusters—in which case there is more deference happening than these results suggest.
There were more “inside view” responses than I expected (maybe partly because people with inside views were incentivised to respond; it’s cool to say you have an inside view, or something). It might be interesting to think about whether it’s good, at the community level, for this many people to have inside views on this topic.
Metaculus was given less weight than I expected (but as per Eli (see footnote 2), I think that’s a good thing).
Grace et al.’s AI expert surveys (1, 2) were deferred to less than I expected (but again, I think that’s good: many respondents to those surveys seem to have inconsistent views; see here for more details. Also, there’s not much reason to expect AI experts to be excellent at forecasting things like AGI: it’s not their job, and it’s probably not a skill they spend time training).
It seems that if you go around talking to lots of people about AI timelines, you could move the needle on community beliefs more than I expected.
I don’t remember whether I put down “inside view” on the form when filling it out, but that does sound like the kind of thing I may have done. I think I was overly eager at the time to say I had an “inside view” when what I really had was: confusion and disagreements with others’ methods for forecasting; a mostly non-principled weighing of others’ forecasts; and intuitions about AI progress that were maybe overly strong, and based as much or more on hanging around a group of people and picking up their beliefs as on evaluating the evidence for myself. It feels really hard not to let the general vibe around me affect the process of thinking through things independently.
Based on the results, I think it would be good for more people to think about this for themselves and write up their reasoning, or even their rough intuitions. I suspect my beliefs are more influenced by the people who ranked highly in the survey answers than I’d want them to be, because it turns out the people around me are deferring to the same few people. Even when I think I have my own view on something, it is largely shaped by the fact that Ajeya said 2040/2050 and Daniel Kokotajlo said 5–7 years; the vibes have trickled down to me, even though I would weigh their forecasts and methodology less if I were encountering them for the first time.
(The timelines question doesn’t feel that important to me for its own sake at the moment, but I think it’s a useful one for practising figuring out where my beliefs actually come from.)