An update to this post
It appears that this issue has been discussed before in the thread "Naturalism versus unbounded (or unmaximisable) utility options". The discussion there didn't end up drawing the conclusion that perfect rationality doesn't exist, so I believe this current thread adds something new.
Instead, the earlier thread considers the Heaven and Hell scenario, where you can spend X days in Hell to get the opportunity to spend 2X days in Heaven. Most of the discussion on that thread concerned the limit of how many days an agent can count up to while still exiting at some point. Stuart Armstrong also arrives at the same solution for demonstrating that this problem isn't related to unbounded utility.
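To make the trade-off concrete, here is a minimal sketch; the per-day utilities of −1 for Hell and +1 for Heaven are my own illustrative assumption, not something fixed by the original thread:

```python
# Toy model of the Heaven and Hell scenario: endure x days in Hell,
# then receive 2x days in Heaven.
# Assumed utilities: -1 per Hell day, +1 per Heaven day.

def net_utility(days_in_hell: int) -> int:
    """Net utility of exiting Hell after `days_in_hell` days."""
    return 2 * days_in_hell - days_in_hell  # simplifies to days_in_hell

for x in (1, 10, 100, 1000):
    print(f"exit after {x} days: net utility {net_utility(x)}")

# Every finite exit day is beaten by waiting one day longer, yet the agent
# who never exits stays in Hell forever and collects no Heaven days at all.
```

Under these assumptions, net utility grows without bound in the exit day, which is why the thread's discussion centred on how long an agent can wait while still actually leaving.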
Qiaochu Yuan summarises one of the key takeaways: “This isn’t a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions. Because of the possible failure of the ability to exchange limits and integrals, the expected utility of a sequence of infinitely many decisions can’t in general be computed by summing up the expected utility of each decision separately.”
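One way to make the quoted point concrete, again under my assumed utilities of −1 per Hell day and +1 per Heaven day: let $\sigma_n$ denote the strategy "spend n days in Hell, then collect the 2n days in Heaven". Then

$$U(\sigma_n) = 2n - n = n \quad \text{for every } n, \qquad \text{so} \quad \lim_{n \to \infty} U(\sigma_n) = \infty,$$

yet the pointwise limit of these ever-better strategies is the strategy that never leaves Hell at all, whose utility is the worst on offer rather than infinite. The limit of the utilities and the utility of the limit disagree, which is exactly the exchange-of-limits failure described in the quote.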
Kudos to Andreas Giger for noticing what most of the commenters seemed to miss: “How can utility be maximised when there is no maximum utility? The answer of course is that it can’t.” This is incredibly close to stating that perfect rationality doesn’t exist, but it wasn’t explicitly stated, only implied.
Further, Wei Dai’s comment on a randomised strategy that obtains infinite expected utility is an interesting problem that will be addressed in my next post.
Okay, so if by ‘perfect rationality’ we mean “ability to solve problems that don’t have a solution”, then I agree, perfect rationality is not possible. Not sure if that was your point.
I’m not asking you, for example, to make a word out of the two letters Q and K, or to write a program that will determine if an arbitrary program halts.
Where rationality fails is that there is always another agent who scores higher than you, even though there was nothing stopping you from scoring the same or higher. Such an agent is more rational than you in that situation, and there is yet another program more rational than them, and so on to infinity. That there is no maximally rational program, only successively more rational programs, is a completely accurate way of characterising the situation.
Seems like you are asking me to (or at least judging me as irrational for failing to) say a finite number such that I could not have said a higher number despite having unlimited time and resources. That is an impossible task.
I’m arguing against perfect rationality as defined as the ability to choose the option that maximises the agent’s utility. I don’t believe that this is at all an unusual way of using the term. But regardless, let’s taboo “perfect rationality” and talk about utility maximisation. There is no utility maximiser for this scenario because there is no maximum utility that can be obtained. That’s all I’m saying, nothing more, nothing less. Yet people often assume that such a perfect maximiser (a.k.a. a perfectly rational agent) exists without even realising that they are making an assumption.
Oh. In that case, I guess I agree.
For some scenarios that have unbounded utility, there is no such thing as a utility maximiser.
I think the scenario requires unbounded utility and unlimited resources to acquire it.
I think the key is infinite vs finite universes. Any conceivable finite universe can be arranged in a finite number of states, one, or perhaps several of which, could be assigned maximum utility. You can’t do this in universes involving infinity. So if you want perfect rationality, you need to reduce your infinite universe to just the stuff you care about. This is doable in some universes, but not in the ones you posit.
In our universe, we can shave off the infinity, since we presumably only care about our light cone.
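A small illustration of the finite/infinite split described above (my framing, not the commenter's): a utility function over a finite set of states always attains its maximum, whereas over an infinite set it may only have a supremum.

$$\max_{s \in S} U(s) \ \text{exists whenever } |S| < \infty, \qquad \text{but } U(n) = n \text{ on } \mathbb{N} \text{ satisfies } \sup_{n} U(n) = \infty \text{ with no maximiser.}$$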