I was trying to apply the principle of charity and interpret your post as anything but begging the question: ‘assume rational agents are penalized. How do they do better than irrational agents explicitly favored by the rules/Omega?’
Question begging is boring, and if that’s really what you were asking - ‘assume rational agents lose. How do they not lose?’ - then this thread is deserving only of downvotes.
And Eliezer was talking about humans, not the finer points of AI design in a hugely arbitrary setup. It may be a bad idea for LWers to choose to be biased, but a perfectly good idea for AIXI stuck in a particularly annoying computable universe.
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
Since I’m not an AI with direct access to my beliefs as stored on a substrate, I was using the closest analogy I could get.
Sorry, I was hoping there was some kind of difference between “penalize this specific belief in this specific way” and “penalize rationality as such in general”, some kind of trick to work around the problem that I hadn’t noticed and which resolved the dilemma.
And your analogy didn’t work for me, is all I’m saying.