If you parse this post as “attempting to impart a basic intuition that might let people (new to AI x-risk arguments) avoid certain classes of errors” rather than “trying to argue with the bleeding-edge arguments on x-risk”, this post seems good.
This seems reasonable in isolation, but it gets frustrating when the former is all Eliezer seems to do these days, with seemingly no attempt at the latter. When all you do is retread these dunks on “midwits” and show apathy or contempt for engaging with newer arguments, it looks less like an interest in being maximally truth-seeking and more like a desire to dig in and grandstand.
From what little engagement there is with novel criticisms of their arguments (like Nate’s attempt to respond to Quintin/Nora’s work), it seems like there’s a cluster of people here who don’t understand, and don’t particularly care to understand, some objections to their ideas, and would rather focus on relitigating arguments they know they can win.