E.g. both Nesov and I have been persuaded (once fully filled in) that this is really nasty stuff and shouldn’t be let out.
I wasn’t “filled in”, and I don’t know whether my argument coincides with Eliezer’s. I also don’t understand why he won’t explain his argument, if it’s the same as mine, now that the content is in the open (though his silence is consistent with, that is, responds to the same reasons as, his continuing to remove comments pertaining to the topic of the post, which makes it less of a mystery).
But you think that it is not a good thing for this to propagate further?
As a decision on expected utility under logical uncertainty, but with extremely low confidence, yes. I can argue that it most certainly won’t be a bad thing (which I even attempted in comments to the post itself, my bad); the expectation of it being a bad thing derives from the remaining possibility of those arguments failing. As Carl said, “that estimate is unstable in the face of new info” (which refers to his own argument, not necessarily mine).