Now, of course, in this universe, googolplexes are utterly irrelevant. If an AI could harness every Planck volume of space in the observable universe to each perform one computation per Planck time, all the stars would burn out long before it got anywhere close to 2^1024 computations, which is still a long way off from a googolplex. So it seems to me "circular altruism" on this level is of absolutely no consequence.
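For anyone who wants to sanity-check that arithmetic, here is a rough back-of-envelope sketch in Python. The constants (Planck length and time, radius of the observable universe) and the ~100-trillion-year figure for the last stars burning out are standard ballpark values I am supplying for illustration, not numbers taken from the comment above.

```python
# Rough sketch: upper bound on Planck-scale computation before the stars burn out.
import math

planck_length   = 1.616e-35        # metres (assumed standard value)
planck_time     = 5.39e-44         # seconds (assumed standard value)
universe_radius = 4.4e26           # metres, observable universe (assumed)
stellar_era     = 1e14 * 3.15e7    # seconds, ~100 trillion years (assumed)

planck_volume   = planck_length ** 3
universe_volume = (4 / 3) * math.pi * universe_radius ** 3

# One operation per Planck volume per Planck time, for the whole stellar era.
ops = (universe_volume / planck_volume) * (stellar_era / planck_time)

print(f"total ops ~ 10^{math.log10(ops):.0f}")        # roughly 10^250
print(f"2^1024    ~ 10^{1024 * math.log10(2):.0f}")   # roughly 10^308
# A googolplex is 10^(10^100), vastly larger than either figure.
```

On these assumptions the total comes out around 10^250 operations, comfortably short of 2^1024 (about 10^308), and both are nothing next to a googolplex.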
Of course we might still cling to the "thought experiment" aspect of it. I don't see why we should, but even if we do, it doesn't help: ideal rationality, in the Savage sense, isn't even computable. No AI, even with unlimited time and space to make up its mind, can be rational in the sense of always choosing the course that maximises some utility function with respect to some subjective probability distribution, in all situations. So something still has to give, and of course there are lots of ways to give it. We can be "rational within epsilon" if you like, but that epsilon will matter when considering these googolplex circularity arguments. I'm skeptical that there is anything coherent here at all.
Now that I have read (and commented on) the thread on the Savage axioms (http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/), I would like to note here that there are no computable solutions to those axioms.