I think you’re reading way too much into the specific, questionable wording of “tragically flawed”. By that I meant that they are flawed in some of their key background assumptions, in how those assumptions influence thinking on AI risk/alignment, and in the consequent system-wide effects. I didn’t mean they are flawed at their surface-level purpose: as rationalist self-help and community foundations, they are very well written and concentrate a large amount of modern wisdom. But that of course isn’t the full reason EY wrote them: they are also part of a training funnel intended to produce alignment researchers.