Second sentence:
People say very different things depending on framing, so responses to any particular framing of the question are presumably not accurate, though I’d still take them as some evidence.
People say very different things from one another, so any particular person is highly unlikely to be accurate. An aggregate might still be good, but e.g. if people say such different things that three-quarters of them have to be totally wrong, then I don’t think it’s that much more likely that the last quarter is about right than that the answer is something almost nobody said.
First sentence:
In spite of the above, and the prior low probability of this being a reliable guide to AGI timelines, our paper was the 16th most discussed paper in the world. On the other hand, something like Ajeya’s timelines report (or even AI Impacts’ cruder timelines botec earlier) seems more informative, yet gets way less attention. (I didn’t mean ‘within the class of surveys, interest doesn’t track informativeness much’, though that might be true; I meant ‘people seem to have substantial interest in surveys beyond what is explained by them being informative about e.g. AI timelines’.)
Do you mean that the half-day projects have to be done in sequence relative to one another, or that, within a particular half-day project, its contents have to be done in sequence (so you can’t, for instance, miss the first step, then give up and skip to the second step)?
In general, if things have to be done in sequence, I often make the tasks non-specific. E.g., let’s say I want to read a set of chapters in order; then I might make the tasks ‘read a chapter’ rather than ‘read the first chapter’, etc. Then if I were to fail at the first one, I would keep reading the first chapter to grab the second item; then, when I eventually rescued what would have been the first chapter, I would collect it by reading whatever chapter I was up to. (This is all hypothetical; I never read chapters that fast.)
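In case the bookkeeping there is hard to follow, here is a minimal toy sketch in Python of the fungible-token idea. The class name, the `rescuing` flag, and the rescue mechanic are all my hypothetical stand-ins, assuming a to-do system where a failed task can later come back to be ‘rescued’:

```python
class FungibleTasks:
    """Toy model: every task is the generic 'read a chapter' rather than
    'read chapter N', so any real chapter read can collect any token."""

    def __init__(self, n_tokens: int):
        self.pending = n_tokens  # generic tokens not yet done or failed
        self.failed = 0          # tokens missed, awaiting rescue
        self.chapter = 0         # how far I actually am in the book

    def fail_one(self) -> None:
        # Miss the currently-due token without reading anything.
        self.pending -= 1
        self.failed += 1

    def read_chapter(self, rescuing: bool = False) -> None:
        # One real chapter of reading collects one token: a rescued
        # one if that's what's on offer, otherwise the next pending one.
        self.chapter += 1
        if rescuing and self.failed > 0:
            self.failed -= 1
        elif self.pending > 0:
            self.pending -= 1


tasks = FungibleTasks(n_tokens=3)
tasks.fail_one()                   # token 1 missed before any reading
tasks.read_chapter()               # actually chapter 1, but it collects token 2
tasks.read_chapter(rescuing=True)  # chapter 2 collects the rescued token 1
```

The point of the design is that progress in the book and progress through the task list are tracked separately, so a failure in the list never forces re-reading or skipping in the book.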