It seems to me that LessWrong rationality does not concern itself with the computational limitations of agents: it takes as normative an idealized model that ignores those limitations, and it lacks extensive discussion of the comparative computational complexity of different methods, or of how an agent secures itself against deliberate (or semi-accidental) subversion by other agents. (See my post about the naive agent.)
Thus the default hypothesis should be that the teachings of LessWrong for the most part do not increase the efficacy (win-ness) of computationally bounded agents, and likely decrease it. Most cures do not work, even those that intuitively should; furthermore, there is a strong placebo effect in reports of a cure's efficacy.
The burden of proof is not on those who claim it does not work. The expected utility of the LW teachings should start at zero, or at a small negative value (for the time spent, which could instead go toward e.g. training for a profession, studying math in a more conventional way, etc.).
As an intuition pump for computationally limited agents, consider a weather simulator that has to predict the weather on specific hardware, having to 'outrun' the real weather. If you replace each number in the simulator with a probability distribution over the sensor data (with Bayesian updates if you wish), you will obtain a much, much slower simulator, which will have to run on a lower-resolution grid and will perform much worse than the original simulator on the same hardware. Improving weather prediction on fixed hardware is a very difficult task with no neat solutions, and it involves a lot of timing of the different approaches.
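To make the cost blow-up concrete, here is a minimal sketch (a toy smoothing step, not any real weather model; the grid size, ensemble size, and initial values are made-up numbers). Replacing each scalar cell with an ensemble of samples standing in for a distribution multiplies the per-step work by roughly the ensemble size, so on fixed hardware you must cut resolution or fall behind the real weather:

```python
# Toy illustration only: advance a 1-D grid of "temperatures" one step,
# first with plain scalars, then with each cell replaced by an ensemble of
# samples standing in for a probability distribution.
import random

GRID = 1000      # number of grid cells (arbitrary)
ENSEMBLE = 100   # samples per cell standing in for a distribution (arbitrary)

def step_scalar(cells):
    """One smoothing step on point estimates: O(GRID) work."""
    return [(cells[i - 1] + cells[i] + cells[(i + 1) % len(cells)]) / 3
            for i in range(len(cells))]

def step_ensemble(cells):
    """Same step, but every cell carries ENSEMBLE samples: O(GRID * ENSEMBLE) work."""
    return [[(cells[i - 1][k] + cells[i][k] + cells[(i + 1) % len(cells)][k]) / 3
             for k in range(ENSEMBLE)]
            for i in range(len(cells))]

scalar_grid = [random.gauss(15.0, 5.0) for _ in range(GRID)]
ensemble_grid = [[random.gauss(15.0, 5.0) for _ in range(ENSEMBLE)] for _ in range(GRID)]

scalar_grid = step_scalar(scalar_grid)        # fast
ensemble_grid = step_ensemble(ensemble_grid)  # ~ENSEMBLE times more work per step
```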
So, it seems you've hit the nail on the head when you say it's an idealized model. Full rationality (in the sense it's used here) isn't something that you can implement as a computationally bounded agent. There's a whole different question, though, which is how to come up with good approximations to it.
It's analogous to, say, proving the completeness of natural deduction for first-order logic. That tells you that there is a proof for any valid statement, but not that you, as a computationally bounded agent, will be able to find it. And coming up with better heuristics for proving things is a big question of its own.
The issue is that LW hand-wavily preaches it as a lifestyle of some kind (instead of studying it rigorously as an idealized model). It is also unlike the ideal models in physics: the ideal gas is a very close approximation to air at normal conditions. The computationally unbounded agent, on the other hand… it is to a bounded agent as the ideal gas of classical physics is to cooking an omelette.
I doubt even 'coming up with good approximations to it' offers anything (for human self-improvement) beyond the trivial 'make the agent win the most'. One has to do some minor stuff, such as studying math and calculating probabilities correctly in the few neat cases, like medical diagnosis (a worked example is sketched below). Actually winning the most is too much about thinking about the right things.
edit: and about strategies, and about agent-agent interaction, where you want to take in reasoning by other agents but don't want to be exploited, don't want other agents' failures to propagate to you, and don't want to fall prey to the odd mixture of exploitation and failure where an agent takes its own failed reasoning seriously enough to convince you, but not seriously enough to let that failure damage itself, etc. Overall, a very, very complex issue.
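For the medical-diagnosis case mentioned above, a minimal worked example of Bayes' theorem (the prevalence and test accuracies are made-up numbers, chosen only to show how far the posterior sits from the test's headline accuracy):

```python
# Bayes' theorem for a diagnostic test; all numbers are illustrative.
prior = 0.01           # P(disease): assumed 1% prevalence
sensitivity = 0.90     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive  # P(disease | positive)

print(f"P(disease | positive test) = {posterior:.2f}")  # ~0.15, despite a '90% accurate' test
```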
It seems to me that the LessWrong rationality does not concern itself with the computational limitations of the agents
The LessWrong community is made up of a lot of people who concern themselves with all kinds of things. I get annoyed when I hear people generalizing too much about LessWrong members, or, even worse, talking about LessWrong as if it were a thing with beliefs and concerns. Sorry if I'm being too nit-picky.