Anti-induction and Self-Reinforcement
Induction is the belief that the more often a pattern has held, the more likely it is to continue. Anti-induction is the opposite claim: the more often a pattern has held, the less likely future events are to follow it.
Somehow I seem to have gotten the idea in my head that anti-induction is self-reinforcing. The argument goes as follows: Suppose we have a game where at each step a screen flashes an A or a B and we try to predict what it will show. Suppose that the screen always flashes A, but the agent initially thinks that the screen is more likely to display B. So it guesses B, observes that it guessed incorrectly, and then, if it is an anti-inductive agent, increases its probability that the next symbol will be B: the run of As is a pattern, and anti-induction says the pattern is less likely to continue the longer it goes on. So the agent's confidence that the next symbol will be B, despite the long stream of As, keeps increasing. This particular anti-inductive belief is self-reinforcing.
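Here is a minimal sketch of that scenario in Python, assuming one particular simple update rule (a fixed-size step of `rate` towards the symbol that was *not* observed); anti-induction as stated above doesn't pin down an update magnitude, so `anti_inductive_update` and its step size are just illustrative choices:

```python
def anti_inductive_update(p_b: float, observed: str, rate: float = 0.1) -> float:
    """Shift belief away from the symbol just observed.

    p_b is the agent's probability that the next symbol is B. The fixed
    step size `rate` is an assumption of this sketch, not part of the
    definition of anti-induction.
    """
    if observed == "A":
        # Saw A, so anti-inductively expect A less, i.e. B more.
        return p_b + rate * (1.0 - p_b)
    # Saw B, so expect B less.
    return p_b - rate * p_b


# The scenario from the text: the screen always flashes A, and the
# agent starts out thinking B is more likely.
p_b = 0.6
for step in range(20):
    guess = "B" if p_b > 0.5 else "A"  # always "B" here, and always wrong
    p_b = anti_inductive_update(p_b, observed="A")
    print(f"step {step:2d}: guessed {guess}, P(next is B) is now {p_b:.3f}")
# P(next is B) climbs towards 1 even as the guesses keep failing.
```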
However, there is a sense in which anti-induction is contradictory: if you observe anti-induction working, then by anti-induction you should update towards it not working in the future. I suppose the distinction here is whether we use anti-induction to update our beliefs about anti-induction itself, or only our concrete, object-level beliefs. Each of these is a valid update rule: in the first case we apply the rule to everything, including itself; in the second, to everything except itself. The idea of a rule applying to everything except itself feels suspicious, but it is not invalid.
Also, strictly speaking it's not the anti-inductive belief that B will be next which is self-reinforcing. After all, given a consistent stream of As, anti-induction pushes you towards believing B more and more regardless of what you believe initially. In other words, it's more of an attractor state.
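Using the same toy update rule from the sketch above, the attractor behaviour is easy to see directly: wildly different starting beliefs all get pulled towards P(B) = 1 by a stream of As.

```python
# Continuing from the sketch above: whatever the initial belief, a
# consistent stream of As drives P(next is B) towards 1.
for initial in (0.01, 0.3, 0.5, 0.99):
    p_b = initial
    for _ in range(50):
        p_b = anti_inductive_update(p_b, observed="A")
    print(f"start at {initial:.2f} -> {p_b:.3f} after 50 As")
```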
The best reason to believe in anti-induction is that it's never worked before. This is discussed in a bit more depth at https://www.lesswrong.com/posts/zmSuDDFE4dicqd4Hg/you-only-need-faith-in-two-things .