Does the AI survey paper say experts are biased in any direction? (I didn’t see it anywhere)
Is there an accusation of violation of existing norms (by a specific person/organization) you see “The AI Timelines Scam” as making? If so, which one(s)?
I personally wouldn’t point to “When Will AI Exceed Human Performance?” as an exemplar on this dimension, because it isn’t clear about the interesting implications of the facts it’s reporting. Katja’s take-away from the paper was:
In the past, it seemed pretty plausible that what AI researchers think is a decent guide to what’s going to happen. I think we’ve pretty much demonstrated that that’s not the case. I think there are a variety of different ways we might go about trying to work out what AI timelines are like, and talking to experts is one of them; I think we should weight that one down a lot.
I don’t know whether Katja’s co-authors agree with her about that summary, but if there’s disagreement, I think the paper still could have included more discussion of the question and which findings look relevant to it.
The paper's actual Discussion section instead argues the opposite, listing a number of reasons to think AI experts are good at foreseeing AI progress. The introduction says "To prepare for these challenges, accurate forecasting of transformative AI would be invaluable. [...] The predictions of AI experts provide crucial additional information." And the paper includes a list of four "key findings", none of which even raise the question of survey respondents' forecasting chops, and all of which are worded in ways that suggest we should in fact put some weight on the respondents' views (sometimes switching between the phrasings 'researchers believe X' and 'X is true').
The abstract mentions the main finding that undermines how believable the responses are, but does so in such a way that someone reading through quickly might come away with the opposite impression. The abstract’s structure is:
To adapt public policy, we need to better anticipate [AI advances]. Researchers predict [A, B, C, D, E, and F]. Researchers believe [G and H]. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
If it slips past the reader's attention that G and H are massively inconsistent, it's easy to come away thinking the abstract is saying 'Here's a list of credible statements from experts about their area of expertise' as opposed to 'Here's a demonstration that what AI researchers think is not a decent guide to what's going to happen'.
By bias, I mean the framing effects described in this SlateStarCodex post.
It’s unclear to me whether that post makes such an accusation.