Okay, so if by ‘perfect rationality’ we mean “ability to solve problems that don’t have a solution”, then I agree, perfect rationality is not possible. Not sure if that was your point.
I’m not asking you, for example, to make a word out of the two letters Q and K, or to write a program that will determine if an arbitrary program halts.
Where rationality fails is that there is always another person who scores higher than you, even though nothing stopped you from scoring the same or higher. Such a program is more rational than you in that situation, and there is another program more rational than it, and so on to infinity. That there is no maximally rational program, only successively more rational programs, is a completely accurate way of characterising that situation.
Seems like you are asking me to (or at least judging me as irrational for failing to) say a finite number such that I could not have said a higher number despite having unlimited time and resources. That is an impossible task.
I’m arguing against perfect rationality as defined as the ability to choose the option that maximises the agent’s utility. I don’t believe that this is at all an unusual way of using the term. But regardless, let’s taboo ‘perfect rationality’ and talk about utility maximisation. There is no utility maximiser for this scenario because there is no maximum utility that can be obtained. That’s all I’m saying, nothing more, nothing less. Yet people often assume that such a perfect maximiser (aka a perfectly rational agent) exists without even realising that they are making an assumption.
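To make that concrete, here is a minimal sketch, assuming the scenario is “name any natural number n and receive n utils” (the specific payoff function U(n) = n is my illustration, not something fixed by the scenario):

$$U(n) = n \;\Rightarrow\; \forall n \in \mathbb{N}:\ U(n+1) > U(n) \;\Rightarrow\; \sup_{n \in \mathbb{N}} U(n) = \infty \text{ is not attained by any } n.$$

So any candidate “maximiser” that names some n is outdone by one that names n+1, which is exactly the regress described above.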
Oh. In that case, I guess I agree.
For some scenarios that have unbounded utility there is no such thing as a utility maximizer.
I think the scenario requires unbounded utility and unlimited resources to acquire it.