If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization “All minds m: X(m)” has two to the trillionth chances to be false, while each existential generalization “Exists mind m: X(m)” has two to the trillionth chances to be true.
This doesn’t seem true to me, at least not as a general rule. For example, given every terrestrial DNA sequence describable in a trillion bits or less, it is not the case that every generalization of the form ‘s:X(s)’ has two to the trillionth chances to be false (e.g. ‘has more than one base pair’, ‘involves hydrogen’, etc.). Given that this doesn’t hold true of many other things, is this supposed to be a special fact about minds? Even then, it would seem odd to say that while all generalizations of the form m:X(m) have two to the trillionth chances to be false, the generalization ‘for all minds, a generalization of the form m:X(m) has two to the trillionth chances to be false’ (which does itself seem to be of the form m:X(m)) is somehow more likely.
Also, doesn’t this inference imply that ‘being convinced by an argument’ is a bit that can flip on or off independently of any others? Eliezer doesn’t think that’s true, and I can’t imagine why he would think his (hypothetical) interlocutor would accept it.
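(For concreteness, here is a minimal numerical sketch of the model the quoted arithmetic seems to presuppose; the independence assumption and the scaled-down numbers are my own, not anything in the quoted passage. The counting only bites if each candidate instance gets to vote independently, which is exactly the bit-flipping assumption just questioned and exactly what properties like ‘involves hydrogen’ lack.)

    # Toy sketch, not EY's calculation: treat each candidate mind as an independent
    # coin flip that satisfies X with probability p, and shrink the space from
    # 2**(10**12) minds down to 2**20 so the numbers are actually computable.
    p = 0.999999                              # a property that almost every individual mind has
    n_minds = 2 ** 20                         # stand-in for the 2**(10**12) candidate minds
    p_universal = p ** n_minds                # P(all minds m: X(m)) under independence
    p_existential = 1 - (1 - p) ** n_minds    # P(exists mind m: X(m)) under independence
    print(p_universal, p_existential)         # roughly 0.35 and 1.0

If the instances are instead perfectly correlated (as ‘involves hydrogen’ is across DNA sequences), the universal claim is no less probable than any single instance, and the counting argument loses its force.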
I mean to say, I think the argument is something of a paradox:
The claim the argument purports to defeat is something like this: for all minds, A is convincing. Let’s call this m:A(m).
The argument goes like this: for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind. Call this m:U(m), if you grant me that this claim has the form m:X(m).
If we infer from m:U(m) that any claim of the form m:X(m) is unlikely to be true, then to whatever extent I am persuaded that m:A(m) is unlikely to be true, to that extent I ought to be persuaded that m:U(m) is unlikely to be true. You cannot accept the argument, because accepting it as decisive entails accepting decisive reasons for rejecting it.
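(To make the self-application vivid, here is a toy rendering in code; the function name, the epsilon value, and the example strings are hypothetical stand-ins of my own, not anything from the original argument.)

    # Toy rendering of the alleged regress; every detail here is illustrative.
    eps = 2.0 ** -60    # stand-in for 'one in two to the trillionth'
    def prior_for(claim_of_form_all_minds: str) -> float:
        """m:U(m) read as a rule: any claim of the form 'for all minds m: X(m)' gets prior at most eps."""
        return eps
    print(prior_for("for all minds m: argument A convinces m"))                 # m:A(m): tiny
    print(prior_for("for all minds m: claims of this very form are unlikely"))  # m:U(m) applied to itself: equally tiny

The point of the toy is just that the rule’s verdict does not depend on whether the input is the rule itself.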
The argument seems to be fixable at this stage, since there’s a lot of room to generate significant distinctions between m:A(m) and m:U(m). If you were pressed to defend it (presuming you still wish to be generous with your time), how would you fix this? Or am I getting something very wrong?
for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind.
That’s not what it says; compare the emphasis in both quotes.
If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization “All minds m: X(m)” has two to the trillionth chances to be false, while each existential generalization “Exists mind m: X(m)” has two to the trillionth chances to be true.
Sorry, I may have misunderstood and presumed that ‘two to the trillionth chances to be false’ meant ‘one in two to the trillionth chances to be true’. That may be wrong, but it doesn’t affect my argument at all: EY’s argument for the implausibility of m:A(m) is that claims of the form m:X(m) are all implausible. His argument to the effect that all claims of the form m:X(m) are implausible is itself a claim of the form m:X(m).
It’s not a proof, no, but it seems plausible.
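(A closing aside on the disputed emphasis above, with made-up numbers of my own: ‘two to the trillionth chances to be false’ and ‘a one in two to the trillionth chance of being true’ only coincide if each individual chance is exactly one half.)

    # My toy gloss; nothing here is quoted from the original exchange.
    # '2**N chances to be false' reads naturally as 'must survive 2**N independent
    # opportunities to fail', i.e. P(true) = p ** (2**N) for some per-instance
    # probability p, which equals 2 ** -(2**N) only in the special case p = 0.5.
    N = 10    # stand-in exponent; the trillion-bit case is far too large to compute directly
    for p in (0.5, 0.9, 0.999):
        print(p, p ** (2 ** N), 2.0 ** -(2 ** N))

On that reading the quoted wording is weaker than ‘one in two to the trillionth’, though for any p appreciably below 1 the universal claim still comes out astronomically improbable at the original scale.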