K-information is about communicating to “someone”—do you compute the amount of K-information for the most receptive person you’re communicating with, or do you have a different amount for each layer of detail?
Actually, you might have a tree structure, not just layers—the prevalence of white crows in time and space is a different branch than the explanation of how crows can be white.
A very interesting question, especially when you consider the analogy with canon:Kolmogorov. Here we have an ambiguity as to which person we are communicating with; there, the ambiguity was about exactly which model of universal Turing machine we were programming. And there, there was a theorem, the invariance theorem, to the effect that the differences among universal Turing machines don't matter much: switching machines changes the complexity by at most an additive constant. Do we have a similar theorem here, for the differences among people, seen as universal programmable epistemic engines?
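For reference, here is the standard statement of the invariance theorem alluded to above, in the usual notation (where $K_U(x)$ is the length of the shortest program for machine $U$ that outputs the string $x$): for any two universal Turing machines $U$ and $V$ there is a constant $c_{U,V}$, depending only on the machines and not on $x$, such that

$$
K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all strings } x,
$$

and hence $|K_U(x) - K_V(x)| \le \max(c_{U,V},\, c_{V,U})$. So the choice of reference machine shifts the measure by at most a constant, uniformly over all strings. The question above is whether recipients of K-information admit any comparable bound.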