I think I’m confused. We’re talking about something that’s never even heard of colors, so there shouldn’t be anything in the mind of the robot related to “blue” in any way. This ought to be like the prior probability from your perspective that zorgumphs are wogle. Now that I’ve said the words, I suppose there’s some very low probability that zorgumphs are wogle, since there’s a probability that “zorgumph” refers to “cats” and “wogle” to “furry”. But when you didn’t even have those words in your head anywhere, how could there have been a prior? How could B9’s prior be “very low” instead of “nonexistent”?
Eliezer seems to be substituting in the actual meaning of “blue”. Now, if we present the AI with the English statement and ask it to assign a probability... my first impulse is to say it should use a complexity/simplicity prior based on length. This might actually be correct, if shorter message-length corresponds to greater frequency of use. (ETA: you might not be able to distinguish words within the sentence, if faced with a claim in a totally alien language.)
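A minimal sketch of what such a length-based prior could look like, assuming an unnormalized 2^(-description length) weighting and a made-up bits-per-character figure (neither of which comes from the original comment):

```python
def complexity_prior(message: str, bits_per_char: float = 4.7) -> float:
    """Toy complexity/simplicity prior: weight a claim by roughly
    2^(-description length in bits), so shorter messages get more prior
    mass.  bits_per_char ~= log2(26) is a crude stand-in for the
    information content of one character of English-like text."""
    description_length_bits = bits_per_char * len(message)
    return 2.0 ** -description_length_bits  # unnormalized weight, not a probability

# Shorter claims get exponentially more prior weight under this scheme.
print(complexity_prior("the ball is blue"))     # 16 chars -> larger weight
print(complexity_prior("zorgumphs are wogle"))  # 19 chars -> smaller weight
```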
Well, if nothing else, when I ask B9 “is your ball blue?”, I’m thereby providing only a finite amount of evidence that “blue” refers to a property that balls can have or not have. So if B9’s prior on “blue” referring to anything at all is vanishingly low, then B9 will continue to believe, even after being asked the question, that “blue” doesn’t refer to anything. Which doesn’t seem like terribly sensible behavior. That sets a floor on how low the prior on “‘blue’ is meaningful” can be.
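To make the “floor” point concrete, here is a toy odds-form Bayes update; the 1000:1 likelihood ratio for hearing the question is an assumed number, chosen purely for illustration:

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Suppose hearing the question "is your ball blue?" is 1000x more likely
# in worlds where "blue" names a property balls can have than in worlds
# where it is meaningless noise.  (The 1000:1 figure is made up.)
likelihood_ratio = 1000.0

for prior in (1e-2, 1e-6, 1e-12):
    print(f"prior={prior:.0e}  posterior={posterior(prior, likelihood_ratio):.3g}")
# prior=1e-02  posterior=0.91     -- the question alone flips the belief
# prior=1e-06  posterior=~0.001   -- still heavily against "blue" meaning anything
# prior=1e-12  posterior=~1e-09   -- B9 keeps believing "blue" refers to nothing
```

If the prior starts far enough below the (finite) likelihood ratio the question supplies, no single asking will ever move B9 to treat “blue” as meaningful, which is the floor the comment is pointing at.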