How about focusing on the evidence, and on demonstrating good epistemics?
The styles encouraged by peer review provide examples of how to minimize unnecessary accusations against individuals, and accidental appearances of such accusations (though peer review includes too many other constraints to be the ideal norm).
Compare the paper When Will AI Exceed Human Performance? Evidence from AI Experts to The AI Timelines Scam. The former is more polite, and looks more epistemically trustworthy, when pointing out that experts give biased forecasts about AI timelines (more biased than I would have inferred from The AI Timelines Scam), but may err in the direction of being too subtle.
See also Bryan Caplan’s advice.
Raemon’s advice here doesn’t seem 100% right to me, but it seems pretty close. Accusing a specific person or organization of violating an existing norm seems like something that ought to be kept quite separate from arguments about what policies are good. But there are plenty of ways to point out patterns of bad behavior without accusing someone of violating an existing norm, and I’m unsure what rules should apply to those.
Good epistemics says: If X, I desire to believe X. If not-X, I desire to believe not-X.
This holds even when X is “Y person did Z thing” and Z is norm-violating.
If you don’t try to explicitly believe “Y person did Z thing” in worlds where in fact Y person did Z thing, you aren’t trying to have good epistemics. If you don’t say so where it’s relevant (and give a bogus explanation instead), you’re demonstrating bad epistemics. (This includes asserting a mistake theory where a conflict theory is correct.)
It’s important to distinguish good epistemics (having beliefs correlated with reality) from the aesthetic that claims credit for good epistemics (e.g. the polite academic style).
Don’t conflate politeness with epistemology. They’re actually opposed in many cases!
Does the AI survey paper say experts are biased in any direction? (I didn’t see it anywhere)
Is there an accusation of violation of existing norms (by a specific person/organization) you see “The AI Timelines Scam” as making? If so, which one(s)?
I personally wouldn’t point to “When Will AI Exceed Human Performance?” as an exemplar on this dimension, because it isn’t clear about the interesting implications of the facts it’s reporting. Katja’s take-away from the paper was:
In the past, it seemed pretty plausible that what AI researchers think is a decent guide to what’s going to happen. I think we’ve pretty much demonstrated that that’s not the case. I think there are a variety of different ways we might go about trying to work out what AI timelines are like, and talking to experts is one of them; I think we should weight that one down a lot.
I don’t know whether Katja’s co-authors agree with her about that summary, but if there’s disagreement, I think the paper still could have included more discussion of the question and which findings look relevant to it.
The actual Discussion section instead makes the opposite argument, listing a bunch of reasons to think AI experts are good at foreseeing AI progress. The introduction says “To prepare for these challenges, accurate forecasting of transformative AI would be invaluable. [...] The predictions of AI experts provide crucial additional information.” And the paper includes a list of four “key findings”, none of which even raise the question of survey respondents’ forecasting chops, and all of which are worded in ways that suggest we should in fact put some weight on the respondents’ views (sometimes switching between the phrasings ‘researchers believe X’ and ‘X is true’).
The abstract mentions the main finding that undermines how believable the responses are, but does so in such a way that someone reading through quickly might come away with the opposite impression. The abstract’s structure is:
To adapt public policy, we need to better anticipate [AI advances]. Researchers predict [A, B, C, D, E, and F]. Researchers believe [G and H]. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
If it slips past your attention that G and H are massively inconsistent, it’s easy to come away thinking the abstract is saying ‘Here’s a list of credible statements from experts about their area of expertise’ as opposed to ‘Here’s a demonstration that what AI researchers think is not a decent guide to what’s going to happen’.
By bias, I mean the framing effects described in this SlateStarCodex post.
It’s unclear to me whether that post makes such an accusation.