So, it seems you’ve hit the nail on the head when you say it’s an idealized model. Full rationality (in the sense it’s used here) isn’t something you can implement as a computationally bounded agent. How to come up with good approximations to it is a whole different question, though.
It’s analogous to, say, proving the completeness of natural deduction for first-order logic. That tells you that there is a proof for every valid statement (one true in every model), but not that you, as a computationally bounded agent, will be able to find it. And coming up with better heuristics for proving things is a big question of its own.
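A toy sketch of that gap, purely as illustration (a made-up one-rule inference system of my own, not anything from the logic literature): completeness guarantees the derivation exists, but a searcher with a step budget can still come up empty.

```python
# Toy system: axiom "0 is even"; rule: from "k is even" infer "k+2 is even".
# Completeness here is trivial: every true "n is even" has a derivation.
# A bounded prover can still fail to find it within its step budget.

def bounded_prove_even(n, step_budget):
    """Search for a derivation of "n is even", giving up after step_budget steps."""
    proof = [0]  # start from the axiom
    for _ in range(step_budget):
        if proof[-1] == n:
            return proof  # derivation found
        proof.append(proof[-1] + 2)  # apply the single inference rule
    return proof if proof[-1] == n else None  # None: gave up; a proof exists anyway

print(bounded_prove_even(6, 100))      # [0, 2, 4, 6]
print(bounded_prove_even(10**9, 100))  # None -- true and provable, just not found in budget
```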
The issue is that LW handwavily preaches it as a lifestyle of some kind (instead of studying it rigorously as an idealized model). It is also unlike the ideal models in physics. The ideal gas is a very close approximation to air at normal conditions. The computationally unbounded agent, on the other hand… it is to a bounded agent as the ideal gas of classical physics is to cooking an omelette.
I doubt even the ‘coming up with good approximations to it’ offers anything (for human self-improvement) beyond the trivial ‘make the agent win the most’. One has to do some minor stuff, e.g. studying math, and calculating probabilities correctly in some neat cases like medical diagnosis (see the sketch below). Actually winning the most depends too much on thinking about the right things in the first place.
edit: and on strategies, and on agent-agent interaction, where you want to take in reasoning from other agents but don’t want to be exploited, don’t want other agents’ failures to propagate to you, and don’t want to fall prey to an odd mixture of exploitation and failure where an agent takes its own failed reasoning seriously enough to convince you, but not seriously enough to let that failure damage itself, etc. Overall, a very, very complex issue.
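edit 2: to be concrete about the one neat case where the machinery does pay its way, here is a minimal sketch of the medical-diagnosis calculation (numbers invented; the point is the base rate dominating the test accuracy):

```python
# Bayes' rule for P(disease | positive test). All numbers are made up.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive) = P(pos | disease) * P(disease) / P(pos)."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1% base rate, 90% sensitive test, 9% false-positive rate:
print(posterior(0.01, 0.90, 0.09))  # ~0.092, nowhere near the naive 90%
```

That is the kind of thing the formalism actually fixes; the strategic, agent-agent stuff above is not reducible to one formula like this.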