Are you not getting the point? Agents can correctly apply inductive and deductive reasoning, but draw the wrong conclusion—because of their priors, or because of misleading sensory data. Rationality is about reasoning correctly. It is possible to reason correctly and yet still do badly—for example if a hostile agent has manipulated your sense data without giving you a clue about what has happened. Maybe you could have done better by behaving “irrationally”. However, if you had no way of knowing that, the behaviour that led to the poor outcome could still be rational.
I absolutely agree with this point. Rationality in this sense is that truth-engine I named in the comment you replied to: it’s built for a range of possible environments, but can fail in the event of an unfortunate happenstance. That is different from having an insane maintainer who is convinced that the engine works when in fact it doesn’t, not just on the actual test runs but across the whole range of environments it’s supposedly built for. When you are 90% sure that something will happen, you expect it NOT to happen 1 time in 10.
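That calibration point can be sketched with a quick simulation (a hypothetical illustration, not from the original discussion): an agent whose 90%-confident predictions are perfectly calibrated will still see roughly 1 in 10 of them fail, without any failure of reasoning.

```python
import random

random.seed(0)

# Simulate 10,000 events the agent predicts with 90% confidence.
# A perfectly calibrated agent is still wrong ~10% of the time by design.
trials = 10_000
failures = sum(1 for _ in range(trials) if random.random() >= 0.9)

failure_rate = failures / trials
print(f"90%-confident predictions that failed: {failure_rate:.1%}")
```

Each individual failure looks like the engine "doing badly", yet the overall failure rate is exactly what correct reasoning predicts.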
Good point, Tim: rational doesn’t mean right.
Garbage in, garbage out.