I think it’s plausible that perpetuating the idea that “every alignment person needs to read the Sequences / Rationality A-Z” is inefficient, or maybe even harmful.
For example, to the extent that alignment needs more really good machine learning engineers, those engineers might benefit less from the Sequences than a conceptual alignment researcher would.
However, relying on anecdotal evidence seems unnecessary here. We could use polls, or otherwise systematically investigate the relationship between engagement with the Sequences and various paths to contributing to AI alignment. A prediction market might also work for aggregating this information.
I’d bet that, all else equal, engagement with the Sequences is beneficial, but that the benefit is less pronounced among those who grew up in academically inclined cultures.