A decision algorithm that would tend to win in this contrived situation would tend to lose in regular situations, right?
Yes. There is No Free Lunch. For every possible algorithm it is possible to construct a problem in which that algorithm fares poorly. An algorithm optimized for a problem that pays off in utility for being irrational will tend to lose in regular situations. Decisions made from pathological priors will also tend to lose, and that includes having inaccurate priors about the likely behaviour of the superintelligence that you are playing with.
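To make that first sentence concrete, here is a toy sketch (Python, with made-up function names; an illustration of the flavour of the point, not the theorem itself): for any deterministic next-bit predictor, an adversary can construct a sequence on which it is wrong at every step.

```python
def adversarial_sequence(predictor, length):
    """Build a bit sequence that the given deterministic predictor always gets wrong."""
    history = []
    for _ in range(length):
        guess = predictor(history)      # the predictor only sees past bits
        history.append(1 - guess)       # the adversary plays the opposite bit
    return history

# Example victim: a predictor that guesses the majority bit seen so far.
def majority_predictor(history):
    return 1 if sum(history) * 2 >= len(history) else 0

seq = adversarial_sequence(majority_predictor, 10)
correct = sum(majority_predictor(seq[:i]) == seq[i] for i in range(len(seq)))
print(seq, correct)   # 0 correct predictions on its own adversarial sequence
```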
Right, which is why I say it’s misguided to search for a truly general intelligence; what you want instead is an intelligence with priors slanted toward this universe, not one that has to iterate through every hypothesis shorter than itself.
Making a machine that’s optimal across all universe algorithms means making it very suboptimal for this universe.
That’s true. At least I think it is. I can’t imagine what a general intelligence that could handle this universe and an anti-Occamian one optimally would look like.
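Roughly the tension I have in mind, as a sketch only (the hypothesis strings and the 2^±length weightings are placeholders I am assuming, not anyone’s actual construction): an Occamian prior and an anti-Occamian prior over the same hypotheses put their mass in opposite places, so no single weighting serves both kinds of universe well.

```python
# Stand-in "hypotheses" of increasing length.
hypotheses = ["01", "0101", "010101", "01010101"]

def occam_prior(hs):
    # Weight shorter hypotheses exponentially more heavily.
    w = [2.0 ** -len(h) for h in hs]
    return [x / sum(w) for x in w]

def anti_occam_prior(hs):
    # The reverse: weight longer hypotheses exponentially more heavily.
    w = [2.0 ** len(h) for h in hs]
    return [x / sum(w) for x in w]

print(occam_prior(hypotheses))       # mass concentrated on the short hypotheses
print(anti_occam_prior(hypotheses))  # mass concentrated on the long hypotheses
```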
What is an “inaccurate prior”? A prior that is not posterior enough, that is, a state of knowledge based on too little information/evidence? “Inaccurate” has frequentist connotations.
Good point, Vladimir. What phrase would I use to convey not just having too little evidence but having evidence that happens to be concentrated in a really inconvenient way? Perhaps I’ll just go with ‘bad priors’: the sort of prior distribution you would have when you have just drawn three red balls out of a jar without replacement, know that the five balls left are each red or blue, but have no clue that you’ve just drawn the only three reds. Not so much lacking evidence as having evidence that is pathological/improbable/inconvenient.
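To put numbers on the jar example (a sketch; the uniform prior over the number of reds in the jar is my assumption, since the example above doesn’t pin a prior down):

```python
from math import comb

N, DRAWN = 8, 3                                             # 8 balls, 3 drawn, all red
prior = {r: 1 / (N + 1) for r in range(N + 1)}              # uniform over the number of reds
likelihood = {r: comb(r, DRAWN) / comb(N, DRAWN) for r in prior}  # P(draw 3 reds | r reds)

post_unnorm = {r: prior[r] * likelihood[r] for r in prior}
z = sum(post_unnorm.values())
posterior = {r: p / z for r, p in post_unnorm.items()}

expected_reds_left = sum((r - DRAWN) * p for r, p in posterior.items())
print(expected_reds_left)   # 4.0 of the 5 remaining, though in fact there are 0
print(posterior[DRAWN])     # probability all remaining balls are blue ≈ 0.008
```

Under those assumptions the honest posterior expects about four of the five remaining balls to be red and gives well under one percent to the truth that they are all blue, which is the sense in which the evidence is pathological rather than merely scarce.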