And your reply doesn’t concretely address any of my points.
The quotes from LOGI clearly establish exactly where EY agrees with T&C, and the other quotes establish the relevance of that to the sequences. It’s not as though two separate brains wrote LOGI and the sequences, and the correspondence holds regardless.
This is not a law case where I’m critiquing some super specific thing EY said. Instead I’m tracing memetic influences: establishing what high-level abstract brain/AI viewpoint cluster he was roughly in when he wrote the sequences, and how that influenced them. The quotes are clear enough for that.
So it turns out I’m just too stupid for this high level critique. I’m only used to ones where you directly reference the content of the thing you’re asserting is tragically flawed. In order to get across to less sophisticated people like me in the future, my advice is to independently figure out how this “memetic influence” got into the sequences and then just directly refute whatever content it tainted. Otherwise us brainlets won’t be able to figure out which part of the sequences to label “not true” due to memetic influence, and won’t know if your disagreements are real or made up for contrarianism’s sake.
This comment contains a specific disagreement.
I think you’re reading way too much into the specific, questionable wording of “tragically flawed”. By that I meant that they are flawed in some of their key background assumptions, in how those assumptions influence thinking on AI risk/alignment, and in the consequent system-wide effects. I didn’t mean they are flawed at their surface-level purpose as rationalist self-help and community foundations. They are very well written and concentrate a large amount of modern wisdom. But that of course isn’t the full reason EY wrote them: they are part of a training funnel to produce alignment researchers.