Right, which is why I say it’s misguided to search for a truly general intelligence; what you want instead is an intelligence with priors slanted toward this universe, not one that has to iterate through every hypothesis shorter than itself.
Making a machine that’s optimal across all universe algorithms means making it very suboptimal for this universe.
That’s true. At least I think it is. I can’t imagine what a general intelligence that could handle this universe and an anti-Occamian one optimally would look like.
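The contrast under discussion can be made concrete with a small sketch, assuming hypotheses are programs weighted by description length. An Occam-slanted prior concentrates mass on short hypotheses; an "anti-Occamian" one inverts that preference over the same set. The function names and example lengths here are illustrative, not from the conversation:

```python
# Sketch: two priors over the same finite set of hypotheses,
# identified only by their description lengths in bits.

def occamian_prior(lengths):
    """Weight each hypothesis by 2^-length, then normalize:
    shorter hypotheses get exponentially more mass."""
    weights = [2.0 ** -n for n in lengths]
    total = sum(weights)
    return [w / total for w in weights]

def anti_occamian_prior(lengths):
    """Invert the preference: weight by 2^length, so longer
    hypotheses dominate instead."""
    weights = [2.0 ** n for n in lengths]
    total = sum(weights)
    return [w / total for w in weights]

lengths = [1, 2, 3, 10]  # hypothetical program lengths in bits
occ = occamian_prior(lengths)
anti = anti_occamian_prior(lengths)

print(occ[0] > occ[-1])    # shortest hypothesis dominates under Occam
print(anti[-1] > anti[0])  # longest dominates under the inverted prior
```

A learner optimal for one of these weightings is necessarily mis-weighted for the other, which is the sense in which optimality "across all universe algorithms" trades away optimality for any particular one.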