Interest in surveys doesn’t seem very related to whether a survey is a good source of information on the topic it surveys. One of the strongest findings of the 2016 survey IMO was that surveys like that are unlikely to be a reliable guide to the future.
Good post.
Can you say more?
How are these two sentences related?
The first sentence seems plausible, but why do you say it?
The second sentence seems plausible, but why do you say it? (Is it just because many responses were internally inconsistent and/or unreasonable?)
Second sentence:
People say very different things depending on framing, so responses to any particularly-framed question are presumably not accurate, though I’d still take them as some evidence.
People say very different things from one another, so any particular person is highly unlikely to be accurate. An aggregate might still be good, but e.g. if people say such different things that three-quarters of them have to be totally wrong, then I don’t think it’s that much more likely that the last quarter is about right than that the answer is something almost nobody said.
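(A toy illustration of that point, with entirely made-up numbers rather than anything from the survey: if expert forecasts are spread widely around a shared bias, then most individuals are far from the truth, and the aggregate isn’t necessarily much closer either.)

```python
# Hypothetical sketch: widely-disagreeing forecasts with a shared bias.
# All numbers are invented for illustration; nothing here comes from the survey.
import numpy as np

rng = np.random.default_rng(0)
true_value = 50          # pretend "true" years until AGI
bias, spread = 20, 30    # assumed shared bias and disagreement among experts

forecasts = rng.normal(true_value + bias, spread, size=1000)
within_10 = np.mean(np.abs(forecasts - true_value) < 10)
median_error = abs(np.median(forecasts) - true_value)

print(f"share of experts within 10 years of the truth: {within_10:.0%}")   # ~20%
print(f"error of the median forecast: {median_error:.0f} years")           # ~20 years
```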
First sentence:
In spite of the above, and the prior low probability of this being a reliable guide to AGI timelines, our paper was the 16th most discussed paper in the world. On the other hand, something like Ajeya’s timelines report (or even AI Impacts’ cruder timelines BOTEC earlier) seems more informative, yet gets way less attention. (I didn’t mean ‘within the class of surveys, interest doesn’t track informativeness much’, though that might be true; I meant ‘people seem to have substantial interest in surveys beyond what is explained by them being informative about e.g. AI timelines’.)