My intuition is that the learnability of our universe is mostly because it’s not a max entropic universe. There is real structure to it, and there are hyperpriors and inductive biases that let one effectively learn it. Because we evolved in such a universe, we have such machinery.
I haven’t been thinking of it in terms of the Telephone Theorem.
I don’t agree that max entropic universes are simpler. I think a lot of intelligence is compression (efficiently generating accurate world models, prediction, etc.), and I don’t agree that one can better compress or predict a max entropic universe. And which macroscale properties you pick to care about seems somewhat arbitrary. See also: “utility maximisation = description length minimisation”
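To make the compression point concrete, here’s a minimal sketch (the particular sequences and the use of Python’s zlib as a stand-in compressor are illustrative assumptions, not anything from the discussion above): a max entropic source, i.e. uniformly random data, is incompressible on average, while data with real structure compresses easily.

```python
import os
import zlib

n = 100_000

# "Max entropic" data: uniformly random bytes. On average no compressor
# can shrink this below ~n bytes (Shannon's source coding theorem).
random_data = os.urandom(n)

# Structured data: a repetitive pattern standing in for a universe with
# real regularities a learner could exploit.
structured_data = b"the universe has structure " * (n // 27)

for name, data in [("max-entropy", random_data), ("structured", structured_data)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} bytes -> {len(compressed)} bytes "
          f"(ratio {len(compressed) / len(data):.2f})")

# Typical output: the random data barely shrinks (ratio ~1.0), while the
# structured data compresses to a small fraction of its original size.
```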