My previous comment should have sufficed to communicate to you that I do not regard the distinction you are making as relevant to the present discussion. It should be amply clear by this point that I am exclusively concerned with things-that-must-be-either-true-or-false, and that calling attention to a separate class of utterances that do not have truth-values (and therefore do not have probabilities assigned to them) is not an interesting thing to do in this context. Downvoted for failure to take a hint.
It should be amply clear by this point that I am exclusively concerned with things-that-must-be-either-true-or-false
Then when Eliezer says:
Any statement for which you have the faintest idea of its truth conditions will be specified in sufficient detail that you can count the bits, or count the symbols, and that’s where the rough measure of prior probability starts—not at 50%.
you shouldn’t say:
Given your preceding comment, I realize you have a high prior on people making simple errors. And, at the very least, this is a perfect illustration of why never to use the “50%” line on a non-initiate: even Yudkowsky won’t realize you’re saying something sophisticated and true rather than banal and false.
as a response to Eliezer making true statements about statements, without playing along with the OP’s possible special definition of “statement”. Had Eliezer interpreted “statement” as “proposition” when he read it, he might have been unreasonable in inferring there was an error; but he didn’t, so he wasn’t. So you shouldn’t implicitly call him out as having made a simple error.
As far as “even Yudkowsky won’t realize you’re saying something sophisticated and true rather than banal and false” goes, no one can read minds. It is possible the OP meant to convey actual perfect understanding with the inaccurate language he used. Likewise for “Not just low-status like believing in a deity, but majorly low status,” assuming an idiosyncratic enough meaning.
calling attention to a separate class of utterances that do not have truth-values (and therefore do not have probabilities assigned to them) is not an interesting thing to do in this context.
I’m calling attention to a class that contains all propositions as well as other things. Probabilities may be assigned to statements being true even if they are actually false, or neither true nor false. If a statement is specified to be a proposition, you have information such that a bare 50% won’t do.
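The bit-counting idea being invoked here can be sketched numerically. This is a minimal illustration only, assuming we already have the hypothesis’s specification length in bits; the function name is mine, not anything from the discussion:

```python
def complexity_prior(n_bits: int) -> float:
    """Rough prior for a hypothesis whose specification takes n_bits bits.

    Each additional bit of specification halves the prior, so the
    rough starting point is 2**-n_bits rather than a flat 50%.
    """
    return 2.0 ** -n_bits

print(complexity_prior(1))   # 0.5 -- a one-bit hypothesis starts at 50%
print(complexity_prior(10))  # 0.0009765625 -- a ten-bit one starts far lower
```

On this picture, a bare 50% is only the degenerate one-bit case, which is why knowing that something is a fully specified proposition already gives you more than nothing to work with.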
Where did you get the idea that “statement” in Eliezer’s comment is to be understood in your idiosyncratic sense of “utterances that may or may not be ‘propositions’”? Not only do I dispute this, I explicitly did so earlier when I wrote (emphasis added):
Neither the grandparent nor (so far as I can tell) the great-grandparent makes the distinction between “statements” and “propositions” that you have drawn elsewhere.
Indeed, it is manifestly clear from this sentence in his comment:
it’s questionable whether you can even call that a statement, since you can’t say anything about its truth-conditions.
that Eliezer means by “statement” what you have insisted on calling a “proposition”: something with truth-conditions, i.e. which is capable of assuming a truth-value. I, in turn, simply followed this usage in my reply. I have never had the slightest interest in entering a sub-discussion about whether this is a good choice of terminology. Furthermore, I deny the following:
Probabilities may be assigned to [statements/propositions/what-the-heck-ever] being true even if they are...neither true nor false.
and, indeed, regard the falsity of that claim as a basic background assumption upon which my entire discussion was premised.
Perhaps it would make things clearer if the linguistic terminology (“statement”, “proposition”, etc.) were abandoned altogether (being really inappropriate to begin with), in favor of the term “hypothesis”. I can then state my position in (hopefully) unambiguous terms: all hypotheses are either true or false (otherwise they are not hypotheses), hypotheses are the only entities to which probabilities may be assigned, and a Bayesian with literally zero information about whether a hypothesis is true or false must assign it a probability of 50% -- the last point being an abstract technicality that seldom if ever needs to be mentioned explicitly, lest it cause confusion of the sort we have been seeing here (so that Bayesian Bob indeed made a mistake by saying it, although I am impressed with Zed for having him say it). Make sense now?
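The “zero information implies 50%” technicality stated above is just the maximum-entropy (uniform) assignment over a binary outcome. A minimal sketch, with a helper name of my own choosing:

```python
from fractions import Fraction

def max_entropy_prior(outcomes):
    """With no information distinguishing the outcomes, the
    maximum-entropy assignment is uniform over them."""
    return {outcome: Fraction(1, len(outcomes)) for outcome in outcomes}

# A hypothesis, in this sense, has exactly two exhaustive, mutually
# exclusive outcomes: it is true, or it is false.
print(max_entropy_prior(["true", "false"]))  # each outcome gets 1/2
```

Nothing here depends on the content of the hypothesis; that is exactly the “abstract technicality” sense in which the 50% figure is meant.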
and a Bayesian with literally zero information about whether a hypothesis is true or false must assign it a probability of 50%
You can state it better like this: “A Bayesian with literally zero information about the hypothesis.”
“Zero information about whether a hypothesis is true or false” implies that we know the hypothesis, and we just don’t know whether it’s a member in the set of true propositions.
“Zero information about the hypothesis” indicates what you really seem to want to say—that we don’t know anything about this hypothesis; not its content, not its length, not even who made the hypothesis, or how it came to our attention.
There is one sense in which I don’t see how this can make sense. If we don’t know exactly how it came to our attention, we know that it didn’t come to our attention in a way that stuck with us, so that is some information we have about how it came to our attention: we know some ways that it didn’t come to our attention.
You’re thinking of human minds. But perhaps we’re talking about a computer that knows it’s trying to determine the truth-value of a proposition, but the history of how the proposition got inputted into it got deleted from its memory; or perhaps it was designed never to hold that history in the first place.
the history of how the proposition got inputted into it got deleted from its memory
So it knows that whoever gave it the proposition didn’t have the power, desire, or competence to tell it how it got the proposition.
It knows the proposition is not from a mind that is meticulous about making sure those to whom it gives propositions know where the propositions are from.
If the computer doesn’t know that it doesn’t know how it learned of something, and can’t know that, I’m not sure it counts as a general intelligence.
Indeed, it is manifestly clear from this sentence in his comment:
What odds does “manifestly clear” imply when you say it? I believe he was referring to either X or Y, as otherwise the content of the statement containing “one and only one...X or Y” would be a confusing...coincidence is the best word I can think of. So I think it most likely that “call that a statement” is a very poorly worded phrase referring, simultaneously but separately, to statement X and statement Y.
In general, there is a problem with prescribing taboo when one of the two parties is claiming a third party is wrong.
I am impressed by your patience in light of my comments. I think it not terribly unlikely that in this argument I am the equivalent of Jordan Leopold or Ray Fittipaldo (not an expert!), while you are Andy Sutton.
But I still don’t think that’s probable, and I think it is easy to see that you have cheated at rationalist’s taboo: one term is replacing the excluded ones, a sure sign that mere label swapping has taken place.
I still think that if I only know that something is a hypothesis and know nothing more, I have enough knowledge to examine how I know that and use an estimate of the hypothesis’s bits that is superior to a raw 50%. I don’t think “a Bayesian with literally zero information about whether a hypothesis is true or false” is a meaningful sentence. You know it’s a hypothesis because you have information. Granted, the final probability you estimate could be 50/50.
Shall I mentally substitute “acoustic vibrations in the air” for “an auditory experience in a brain”?