This seems problematic because it implies that humans would be perfectly fine with accepting grue over blue if they didn’t know about the nature of light.
Fortunately, the reason this helps is deeper than counting the number of hertz. When you want to determine the complexity of a term, you have to specify what language to use to write the term. The reason grue seems complicated to us evolved animals is because it has higher complexity in the language of our observations—the language of what neurons we feel light up when we look at the rock.
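As a minimal sketch of the language-dependence point (the predicates, the switchover time, and the crude “count the symbols in the definition” length measure are all my own illustration): a term that is primitive in one base language becomes a compound, and therefore longer, term in the other.

```python
# Toy illustration: a term's description length depends on the base language.
# "Length" here is just the size of the definition's symbol list - a crude
# stand-in for Kolmogorov complexity / MML message length.

T = 2030  # the arbitrary switchover time in the grue/bleen definitions

# In a language where GREEN and BLUE are primitives:
#   grue(x, t)  = green(x) if t < T else blue(x)
#   bleen(x, t) = blue(x)  if t < T else green(x)
defs_in_green_blue = {
    "green": ["GREEN"],                       # primitive: length 1
    "blue":  ["BLUE"],                        # primitive: length 1
    "grue":  ["IF", "t<T", "GREEN", "BLUE"],  # compound: length 4
    "bleen": ["IF", "t<T", "BLUE", "GREEN"],  # compound: length 4
}

# In a language where GRUE and BLEEN are primitives, the situation flips:
#   green(x, t) = grue(x) if t < T else bleen(x)
defs_in_grue_bleen = {
    "grue":  ["GRUE"],
    "bleen": ["BLEEN"],
    "green": ["IF", "t<T", "GRUE", "BLEEN"],
    "blue":  ["IF", "t<T", "BLEEN", "GRUE"],
}

for lang_name, defs in [("green/blue base", defs_in_green_blue),
                        ("grue/bleen base", defs_in_grue_bleen)]:
    lengths = {term: len(d) for term, d in defs.items()}
    print(lang_name, lengths)
```

At this purely definitional level the two languages are symmetric, which is exactly why the argument has to appeal to something extra, namely the observation language that our neurons happen to implement.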
So does that mean that if an entity had a neuronal structure that intuited grue and bleen, it would be justified in treating the hypothesis that way? I’d be willing to bite that bullet, I think.
It means that that entity’s evolved instincts would be out of whack with the MML (minimum message length) criterion, so if that entity also got to the point where it invented Turing machines, it would see the flaw in its reasoning. This is no different from realizing that Maxwell’s equations, though they look more complicated than “anger” to a human, are actually simpler. Sometimes the intuition is wrong. In the blue/grue case human intuition happens not to be wrong, but the hypothetical entity’s is; and both humans and the entity, after understanding math and computer science, would agree that humans are wrong about anger and the entity is wrong about grue. Why is that a problem?
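For what it’s worth, the formal fact I read the Turing-machine point as gesturing at is the invariance theorem of Kolmogorov complexity: for any two universal description languages (universal Turing machines) $U$ and $V$, there is a constant $c_{U,V}$, independent of the string $x$, such that

$$K_U(x) \le K_V(x) + c_{U,V}.$$

So once both parties reason at the level of universal machines, changing the description language can shift complexities only by a constant that does not grow with the data; it cannot keep grue ahead of blue as the observation history gets longer.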
This seems problematic because it implies that humans would be perfectly fine with accepting grue over blue if they didn’t know about the nature of light.
Right, they would, if for weird historical reasons they also thought of “grue” and “bleen” as reasonable linguistic primitives. So the human scientists would be surprised when the next emerald turned out to be bleen rather than grue, and they’d be able to observe that the shift happened at time T, and thus conclude that green, not grue, is the natural property. So this isn’t really much of a problem.
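A toy version of that experiment (the always-green emeralds, the cutoff time, and both “theorists” are invented for illustration): the grue hypothesis predicts green before T and blue after, so the first post-T observation is exactly the datum that localizes its failure at T.

```python
# Toy simulation: nature's emeralds are always green. A "grue" theorist
# expects emeralds to stay grue (green before T, blue after T); a "green"
# theorist expects green throughout. Both are scored against observation.

T = 10  # the arbitrary switchover time in the grue hypothesis (illustrative)

def green_prediction(t):
    return "green"

def grue_prediction(t):
    return "green" if t < T else "blue"

observations = [(t, "green") for t in range(15)]  # nature: always green

for name, predict in [("green theory", green_prediction),
                      ("grue theory", grue_prediction)]:
    misses = [t for t, observed in observations if predict(t) != observed]
    if misses:
        print(f"{name}: first wrong prediction at t={misses[0]} (T={T})")
    else:
        print(f"{name}: every prediction correct")
```

Running it, the green theory never misses and the grue theory first misses at t = T, which is the observable shift the reply above appeals to.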
That’s not completely satisfying, in that one wants an induction scheme that assigns priors independently of linguistic accident. If one tries to make a hypothesis’s simplicity depend on language, then one quickly gets very complicated hypotheses being labeled as simple (e.g., “God”).