There’s something right about seeing a new EY essay on Less Wrong. Here’s hoping the LW revival will have some measure of success.
Essay feedback: I appreciate the density of the arguments, which might not have worked at a shorter length. I still wish there had been a summary as well, but I suppose something like that might already exist elsewhere*, and this essay just expands on it. It might also help to give the various chapters / sections headings. Finally, I wish the writing had been a bit more humorous (the Sequences felt easier to read, for me; and Scott Alexander also uses microhumor to make his long essays more hedonically rewarding to read), but I understand that could be perceived as off-putting by (part of) the actual target audience, i.e. Serious People / actual AI researchers.
* e.g. AFAIK several paper abstracts by the AI alignment community mention the general challenge of forecasting technological developments.
I had the weird experience of “at the end of each section, I felt I understood the point, but at the end of each subsequent section I was happy to have gotten additional points of clarification.”
I assume the essay is oriented around something like “hopefully you keep reading until you are convinced, addressing more and more specific points as it goes along.”