K-information is about communicating to “someone”—do you compute the amount of K-information for the most receptive person you’re communicating with, or do you have a different amount for each layer of detail?
A very interesting question, especially when you consider the analogy with canon:Kolmogorov. Here we have an ambiguity as to which person we are communicating with; there, the ambiguity was about exactly which model of universal Turing machine we were programming. And in that setting there was a theorem to the effect that the differences among universal Turing machines aren't all that big. Do we have a similar theorem here, for the differences among people, seen as universal programmable epistemic engines?
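For the record, the theorem alluded to on the Turing-machine side is the invariance theorem for Kolmogorov complexity: for any two universal machines U and V there is a constant c_{UV}, depending only on the pair of machines and not on the string being described, such that

    K_U(x) ≤ K_V(x) + c_{UV}   for all strings x.

So complexities measured relative to different universal machines agree up to an additive constant. The analogous question for K-information would be whether, for any two sufficiently receptive people A and B, the amounts needed to communicate a given idea differ by at most some bounded "translation overhead" that depends on the pair of people but not on the idea.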