I won’t try to comment on the formal argument (my understanding is that the literature is mostly just what Robin Hanson has said about it), but intuitively, this seems wrong.
Wei Dai’s description is correct; see here for an example where the final estimate is outside the range of the initial two. And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.
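For a concrete illustration (a standard toy case, not necessarily the one linked above): flip two fair coins, let agent 1 privately observe coin 1 and agent 2 privately observe coin 2, and ask about the event "both coins came up heads". The sketch below just enumerates the uniform outcome space: each agent who sees heads assigns probability 0.5, yet once both observations are pooled the estimate is 1.0, outside the [0.5, 0.5] range.

```python
from itertools import product

# Two fair coins; outcomes are equiprobable, so conditional
# probabilities reduce to counting outcomes.
outcomes = list(product("HT", repeat=2))

def prob(event, given=lambda o: True):
    """P(event | given), computed by enumerating the uniform outcome space."""
    conditioned = [o for o in outcomes if given(o)]
    return sum(1 for o in conditioned if event(o)) / len(conditioned)

both_heads = lambda o: o == ("H", "H")

print(prob(both_heads))                             # prior:                 0.25
print(prob(both_heads, lambda o: o[0] == "H"))      # agent 1 after seeing H: 0.5
print(prob(both_heads, lambda o: o[1] == "H"))      # agent 2 after seeing H: 0.5
print(prob(both_heads, lambda o: o == ("H", "H")))  # pooled observations:    1.0
```

The point is that Bayesian updating on shared evidence is not averaging: when the two observations reinforce each other, the common-knowledge posterior can land outside the interval spanned by the individual estimates.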
I wonder whether a list of such things could be constructed. Algorithmic information theory is one example where Eliezer drew the wrong implications from the math, and unfortunately much of LessWrong inherited that. Group selection (multi-level selection) might be another, though that case is less clear-cut, since it requires computational modeling and not just interpretation of the mathematics. I’m sure there are more and better examples.