I didn’t finish reading this, but if it were the case that:

1. there were clear and important implications of this result for making the world better (via aligning AGI), and
2. these implications were stated in the summary at the beginning,

then I very plausibly would have finished reading the post or saved it for later.
ETA: For what it’s worth, I still upvoted and liked the post, since I think deconfusing ourselves about stuff like this is plausibly very good and at the very least interesting. I just didn’t like it enough to finish reading it or save it, because from my perspective its expected usefulness wasn’t high enough given the information I had.
Ah okay, based on that description I have updated positively on the usefulness, and have also updated positively on the hypothesis “I am missing a lot of important information that contextualizes this project,” though I’m still confused.
I would be interested to know the causal chain from understanding circuit simplicity to the future being better, but maybe I should just stay tuned (or maybe there is a different post I should read that you can link me to; or maybe the impact is diffuse and talking about any particular path doesn’t make that much sense [though even in this case my guess is that it is still helpful to have at least one possible impact story]).
Also, I just want to make clear that I made my original comment because I figured sharing my user experience would be helpful (e.g. by prompting a sentence about the theory of change), and hopefully not with the effect of being discouraging / being a downer.