Why does not knowing the hypothesis translate into assigning the hypothesis probability 0.5?
If this is the approach that you want to take, then surely the AI-internal-speak translation of “What is P(A), for totally unspecified hypothesis A?” would be “What proportion of binary strings encode true statements?”
ETA: On second thought, even that wouldn’t make sense, because the truth of a binary string is a property involving the territory, while prior probability should be entirely determined by the map. Perhaps sense could be salvaged by passing to a meta-language. Then the AI could translate “What is P(A), for totally unspecified hypothesis A?” as “What is the expected value of the proportion of binary strings that encode true statements?”.
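For concreteness, the meta-language version might be formalized as follows. This is only my own gloss, assuming hypotheses are encoded as binary strings of length at most n, S_n is the set of such strings that parse as statements, and the AI's map is a prior μ over possible worlds w:

```latex
% Expected proportion of statement-encoding binary strings that come out true,
% with the expectation taken over the AI's prior \mu on worlds w.
\mathbb{E}_{w \sim \mu}\!\left[
  \frac{\bigl\lvert \{\, s \in S_n : s \text{ is true in } w \,\} \bigr\rvert}
       {\lvert S_n \rvert}
\right]
```

Note that the outer expectation is what makes the quantity map-determined: it depends only on μ, whereas the inner proportion for a fixed w is exactly the territory-dependent quantity that made the first formulation problematic.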
But really, the question “What is P(A), for totally unspecified hypothesis A?” just isn’t well-formed. For the AI to evaluate “P(A)”, the AI needs already to have been fed a symbol A in the domain of P.
Your AI-internal-speak version is a perfectly valid question to ask, but why do you consider it to be the translation of “What is P(A), for totally unspecified hypothesis A?”?