In order to evaluate whether it is an example of induction, you’ll need to start with a statement of the method of induction. This is not because I’m unfamiliar with such a thing, but because we will disagree about it, and we had better have one to get us on the same page (inductivists vary a lot; I know many different statements of how induction works).
I’m tempted just to point to my example and say ‘there, that’s what I call induction’, but I doubt that will satisfy you so I will try to give a more rigorous explanation.
I view induction as Bayesian updating/decision theory with an inductive prior. To clarify what I mean, suppose I am faced with an opaque jar containing ten beads, each of which is either red or white. What is my prior over the contents of the jar? It depends on my background knowledge.
1) I may know that someone carefully put 5 red beads and 5 white beads in the jar.
2) I may know that each bead was chosen to be red with probability p, where p is a parameter which is (as far as I know) equally likely to be anywhere between 0 and 1.
3) I may know that each bead was tossed in by a monkey which was drawing at random, with equal probability, from two barrels, one containing red beads and the other containing white beads.
I may also have many other states of knowledge, but I give just three examples for simplicity.
1) is anti-inductive. If I have drawn N beads, R of which have been red, then P(the next bead is red) = (5 - R)/(10 - N), so every red I draw decreases my anticipation of red, while every white increases it.
2) is inductive. If I have drawn N beads, R of which have been red, then P(the next bead is red) = (R + 1)/(N + 2) (this is a theorem due to Laplace; the proof is not quite trivial, and is sketched just after this list). Every red bead increases my anticipation of red, while every white decreases it. Notice how it takes many reds to provide strong evidence, but even one red is sufficient for a fairly large update, from 0.5 to 0.67.
3) is neither inductive nor anti-inductive: P(the next bead is red) = 0.5 regardless of what I have drawn. Past observations do not influence my expectation of future observations.
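Since I claimed the Laplace result is not quite trivial, here is a sketch of the standard calculation behind it (my own summary, assuming the uniform prior on p stated in 2)). The posterior on p after R reds in N draws is proportional to p^R (1-p)^(N-R), and the next bead is red with probability p given p, so:

\[
P(\text{red}_{N+1} \mid R \text{ reds in } N \text{ draws})
= \frac{\int_0^1 p \cdot p^R (1-p)^{N-R}\, dp}{\int_0^1 p^R (1-p)^{N-R}\, dp}
= \frac{B(R+2,\, N-R+1)}{B(R+1,\, N-R+1)}
= \frac{R+1}{N+2}
\]

Both integrals are Beta functions, and their ratio collapses to (R + 1)/(N + 2).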
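To make the contrast between the three priors concrete, here is a minimal sketch in code. The function names are my own; the formulas are exactly the ones given above.

```python
def p_next_red_known_mix(N, R, total=10, reds=5):
    """Prior 1: jar known to hold `reds` red beads out of `total`.
    Drawing without replacement: P(next red) = (reds - R)/(total - N)."""
    return (reds - R) / (total - N)

def p_next_red_laplace(N, R):
    """Prior 2: unknown bias p, uniform on [0, 1].
    Laplace's rule of succession: P(next red) = (R + 1)/(N + 2)."""
    return (R + 1) / (N + 2)

def p_next_red_monkey(N, R):
    """Prior 3: each bead independently red with known probability 1/2.
    Past draws are irrelevant."""
    return 0.5

# After drawing 4 beads, all of them red:
print(p_next_red_known_mix(4, 4))  # 0.166..  anticipation of red has fallen
print(p_next_red_laplace(4, 4))    # 0.833..  anticipation of red has risen
print(p_next_red_monkey(4, 4))     # 0.5      unchanged
```

The same evidence (four reds in a row) pushes the three priors in three different directions, which is the whole point: whether updating is "inductive" depends on the prior, not on Bayes' theorem itself.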
With the mud, none of the three examples perfectly describes my prior, but 2) comes closest. Most proposals for universal priors are to some extent inductive; for example, Solomonoff induction assigns a much higher probability to ‘1000 0s’ than to ‘999 0s followed by a 1’.
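As a crude illustration of that direction (my own toy example, using the rule-of-succession prior from 2) as a stand-in for a genuine universal prior, so only the qualitative comparison carries over):

```python
from fractions import Fraction

def seq_prob_laplace(bits):
    """Probability of a 0/1 sequence under the uniform-p prior, built up
    step by step with the rule of succession P(next = 1) = (ones + 1)/(n + 2)."""
    prob, ones = Fraction(1), 0
    for i, b in enumerate(bits):
        p_one = Fraction(ones + 1, i + 2)
        prob *= p_one if b == 1 else 1 - p_one
        ones += b
    return prob

a = seq_prob_laplace([0] * 1000)       # 1000 0s
b = seq_prob_laplace([0] * 999 + [1])  # 999 0s followed by a 1
print(a / b)  # 1000: the all-0 sequence is a thousand times likelier
```

Even this crude prior rates the uniform continuation a thousand times more probable; Solomonoff's prior favours it even more strongly, since the all-0 string has a far shorter description.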
Brief note: human induction and Solomonoff induction are more sophisticated than 2), mainly because they have better pattern-spotting abilities, so the process is not quite analogous.