I think it goes without saying that one can disagree with anything in the Sequences and can also be assumed to have read and understood it.
This seems false as stated—some nontrivial content in the Sequences consists of theorems.
More generally, there are some claims in the original Sequences that are false (so agreeing with the claim may be at least some evidence that you didn’t understand it), some that I’d say “I think that’s true, but reasonable people can definitely disagree”, some where it’s very easy for disagreement to update me toward “you didn’t understand that claim”, etc. Possibly you agree with all that, but I want to state it explicitly; this seems extra important to be clear about if you plan to behave as though it’s not true in object-level conversation.
It depends on whether you think what I stated was closer to “completely false” or “technically false, because of the word ‘anything’.” If I had instead said “I think it goes without saying that one can disagree with nearly anything in the Sequences and can also be assumed to have read and understood it”, that might bring it out of “false” territory for you, but I feel we would still have a disagreement.
There are theorems in the Sequences whose characterization by Eliezer I disagree with, Löb's Theorem being the clearest example, and I feel very confident that I have fully understood both the theorem itself and Eliezer's interpretation of it in arriving at my conclusions. I also think this disagreement is fairly substantial, and that it may be a key pillar of Eliezer's case for very high AI Risk in general.
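For readers who want the theorem itself in front of them, the standard statement of Löb's Theorem is the following (this is the usual textbook formulation for a theory $T$ extending Peano Arithmetic, with $\Box$ denoting $T$'s provability predicate; it is not necessarily the exact framing either Eliezer or I have in mind):

$$\text{If } T \vdash \Box P \rightarrow P, \ \text{then } T \vdash P,$$

together with its internalized form, provable within $T$ itself:

$$T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P.$$

To be clear, the theorem itself is not what I am contesting; my disagreement is with what the theorem is taken to imply.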
My worry still stands that disagreement with Eliezer (especially about how high AI Risk actually is) will be conflated with not being up to speed on the Sequences, with misunderstanding key material, or with misunderstanding theorems or other things that have allegedly been proven. The example I gave is one specific case in which Eliezer's interpretation of a theorem (an interpretation I believe to be incorrect) was characterized as the theorem itself.
My position is that regardless of whether you think everything I just said is preposterous and proof that I don't understand key material, the norms of assuming good faith and extending charity are still highly advisable. I generally believe that in most disagreements, each party can assume that the other understands them well enough, and that the two have simply assigned very different probabilities to the same statements.