and a Bayesian with literally zero information about whether a hypothesis is true or false must assign it a probability of 50%
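For context, the 50% figure can be grounded in the principle of maximum entropy: with a bare true/false partition and no other constraints, the uniform prior is the unique entropy maximizer. A minimal sketch, assuming natural logarithms:

```latex
% Maximum-entropy justification for the 50% prior on a binary proposition.
% With zero information, the only constraint is P(H) + P(not H) = 1.
\[
  H(p) = -p\log p - (1-p)\log(1-p), \qquad p = P(\text{hypothesis is true})
\]
\[
  \frac{dH}{dp} = \log\frac{1-p}{p} = 0 \quad\Longrightarrow\quad p = \tfrac{1}{2}
\]
% The second derivative is negative, so p = 1/2 is the maximum:
% the uniform (indifference) prior.
```

The dispute below is over whether "literally zero information" is a coherent starting point in the first place, not over this calculation.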
You can state it better like this: “A Bayesian with literally zero information about the hypothesis.”
“Zero information about whether a hypothesis is true or false” implies that we know the hypothesis, and we just don’t know whether it’s a member of the set of true propositions.
“Zero information about the hypothesis” indicates what you really seem to want to say—that we don’t know anything about this hypothesis; not its content, not its length, not even who made the hypothesis, or how it came to our attention.
In one sense, I don’t see how this can work. If we don’t know exactly how it came to our attention, then we know it didn’t come to our attention in a way that stuck with us. That is itself some information about how it came to our attention: we know some ways that it didn’t.
You’re thinking of human minds. But perhaps we’re talking about a computer that knows it’s trying to determine the truth-value of a proposition, but the history of how the proposition was input into it was deleted from its memory; or perhaps it was designed never to hold that history in the first place.
the history of how the proposition was input into it was deleted from its memory
So it knows that whoever gave it the proposition didn’t have the power, the desire, or the competence to tell it where the proposition came from.
It knows the proposition is not from a mind that is meticulous about making sure those to whom it gives propositions know where the propositions are from.
If the computer doesn’t know that it doesn’t know how it learned of something, and can’t know that, I’m not sure it counts as a general intelligence.