I see I made Bob unnecessarily complicated. Bob = 99.9 repeating (sorry, I don't know how to get a vinculum over the .9). This is a number. It exists.
It is a number; it is also known as 100, which we are explicitly not allowed to pick (0.99 repeating = 1, so 99.99 repeating = 100).
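The identity "0.9 repeating = 1" can be checked with exact arithmetic: the partial sums 0.9, 0.99, 0.999, … differ from 1 by exactly 1/10^n, which shrinks to zero, so the limit is 1. A quick sketch (my own illustration, using Python's exact rationals):

```python
from fractions import Fraction

def partial_sum(n: int) -> Fraction:
    """Sum of 9/10^k for k = 1..n, i.e. 0.99...9 with n nines."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 3, 10):
    gap = 1 - partial_sum(n)
    # The gap to 1 is exactly 1/10^n, so it vanishes in the limit.
    assert gap == Fraction(1, 10**n)
```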
In any case, I think casebash successfully specified a problem that doesn't have any optimal solutions (which is definitely interesting), but I don't think that is a problem for perfect rationality any more than problems that have more than one optimal solution are a problem for perfect rationality.
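The structure of the no-optimal-solution problem can be made concrete: if utility equals the number you pick and you must pick strictly less than 100, then every admissible choice is dominated by a strictly better one, so no maximiser exists. A toy sketch (the `utility` and `improve` functions are my own framing of the setup, not casebash's exact statement):

```python
from fractions import Fraction

# Toy version of the problem: pick any x < 100; utility(x) = x.
LIMIT = Fraction(100)

def utility(x: Fraction) -> Fraction:
    return x

def improve(x: Fraction) -> Fraction:
    """Return a strictly better admissible choice: halfway between x and 100."""
    return (x + LIMIT) / 2

x = Fraction(99)
for _ in range(5):
    better = improve(x)
    # Every candidate is beaten by another admissible candidate...
    assert better < LIMIT and utility(better) > utility(x)
    x = better
# ...so the supremum (100) is never attained by any admissible choice.
```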
I was born a non-Archimedean and I’ll die a non-Archimedean.
“0.99 repeating = 1” I only accept that kind of talk from people with the gumption to admit that the quotient of any number divided by zero is infinity. And I’ve got college calculus and 25 years of not doing much mathematical thinking since then to back me up.
I’ll show myself out.
I’m kind of defining perfect rationality as the ability to maximise utility (more or less). If there are multiple optimal solutions, then picking any one maximises utility. If there is no optimal solution, then picking none maximises utility. So this is problematic for perfect rationality as defined as utility maximisation, but if you disagree with the definition, we can just taboo “perfect rationality” and talk about utility maximisation instead. In either case, this is something people often assume exists without even realising that they are making an assumption.
That’s fair, I tried to formulate a better definition but couldn’t immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).
When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don’t have an answer. Intuitive answers to questions like “What would I do if I actually found myself in this situation?” and “What would the average intelligent person do?” are unsatisfying because they seem to rely on implicit costs to computational power/time.
On the other hand, I also cannot generalize this problem to more practical situations (or find a similar problem without an optimal solution that would be applicable to reality), so there might not be any practical difference between a perfectly rational agent and an agent that takes the optimal solution if there is one and explodes violently if there isn't. Maybe the solution is simply to exclude problems like this when talking about rationality, unsatisfying as that may be.
In any case, it is an interesting problem.
"If there is no optimal solution, then picking none maximises utility."
This statement is not necessarily true when there is no optimal solution, because the solutions belong to an infinite set. That is, it fails in exactly the situation described in your problem.
Sorry, that was badly phrased. It should have been: “If there is no optimal solution, then no matter what solution you pick you won’t be able to maximise utility”