Taking a naive view of math, we see that initially all statements have not only a probability but a frequency.
I just have to look at my children:
My 2.5-year-old recently went from recognizing single digits (mixing up 6 and 9, of course; who thought of using rotationally symmetric digits?!) to counting correctly (more or less) up to 14, but he cannot apply this to determine cardinalities larger than 3 or 4.
The 5-year-old can count up to 100 (though he often skips numbers) to determine cardinality, and he can add up to 20 but not subtract that far. He will put forward random guesses when asked (by his brothers) about 10+10+10 (e.g. “43”).
The 7.5-year-old adds arbitrarily long numbers, but he often gets the decimal places wrong (adding or dropping 0s).
The 10-year-old can extract and solve simple additive equations (a+b=20, a+4=b), not by trying numbers (as his younger brother still would) but by intuition (“a and b are about (a+b)/2, one 4/2 less and the other 4/2 more”); yet he is far from solving arbitrary systems of equations. He can find solutions only where he has an intuition.
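Spelled out (my formalization, not his words), that intuition is the standard sum-and-difference trick:

$$a+b=20,\;\; b=a+4 \;\Longrightarrow\; a=\frac{a+b}{2}-\frac{4}{2}=10-2=8, \qquad b=\frac{a+b}{2}+\frac{4}{2}=10+2=12.$$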
Note that none of these examples stems from a lack of rigor or from forgetting rules.
And they are not completely random either (whatever that means).
Each example names concepts that are in the process of being learned and where frequent ‘errors’ occur, not from noise but from incomplete detection and acquisition of the pattern.
If I wanted to, I could assign probabilities to propositions like “N is the cardinality of set S” for each of the boys.
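As a minimal sketch of what that could look like (the numbers and the decay shape are entirely made up for illustration):

```python
# Toy sketch with made-up numbers, purely for illustration: the probability
# that a boy correctly judges "N is the cardinality of set S", as a function
# of the set size and a per-child "counting horizon" (the rough set size up
# to which his counting is reliable).

def p_cardinality_correct(set_size: int, horizon: int) -> float:
    """Near-mastery below the horizon, decaying smoothly above it."""
    if set_size <= horizon:
        return 0.95  # still the occasional slip
    # assumed decay: confidence halves for every `horizon` elements beyond it
    return 0.95 * 0.5 ** ((set_size - horizon) / horizon)

horizons = {"2.5-year-old": 3, "5-year-old": 20, "7.5-year-old": 100}
for child, horizon in horizons.items():
    print(child, [round(p_cardinality_correct(n, horizon), 2) for n in (3, 10, 50)])
```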
They show a gradual grasp of the concepts.
I’d model probabilities of statements of logic or arithmetic by assigning non-zero probabilities to the axioms (actually, the axioms do not necessarily get the highest probabilities, at least not for my children).
Rules of inference also get probabilities; for example, none of my children understands modus ponens yet. They might eventually, but it would initially have a low probability, and you have to apply it very often to reach any sensible results.
And probabilities are not simply propagated without limit to other statements via the rules.
On the other hand, simple arithmetic surely has $P(\text{``}a+b\text{''} = \text{``}a\text{''} + \text{``}b\text{''}) > 0.99$ for $a, b < 100$, which also indicates that the probability depends on the size of the arguments (compare with the “53 is prime” example).
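A toy calculation of that propagation point (my own sketch, not anyone's actual model): if each application of a rule is itself only trusted with some probability, confidence in a conclusion decays geometrically with the length of the derivation.

```python
# Toy sketch (my own illustration): if each application of an inference rule
# is only trusted with probability p_rule, confidence in a conclusion decays
# geometrically with derivation length (steps assumed independent, which is
# of course an idealization).

def chained_confidence(p_premise: float, p_rule: float, steps: int) -> float:
    """Confidence after `steps` chained applications of an uncertain rule."""
    return p_premise * p_rule ** steps

for steps in (1, 5, 20):
    print(steps, round(chained_confidence(0.99, 0.9, steps), 3))
# prints 0.891, 0.585, 0.12: probabilities cannot be propagated without
# limit through an uncertain rule.
```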
The example of my 10-year-old boy is actually not far from Haim Gaifman’s modelling:
The axioms (of arithmetic) are mastered ($P(A)\approx 1$) and he can apply them, but some sentences are too complex to handle, so only problems that can be decomposed into manageable intermediate sentences (captured by intuition) can be solved reliably.
The problem is that there is almost no grey: either he will be able to solve a problem or he will not. He will most likely not guess on such tasks.
If he reaches a wrong result, it is not due to a probabilistic failure to combine complex intermediate results, nor to accumulated processing errors (which increase rapidly with boredom), but to misjudging which rule to apply (thus showing that he is still learning at the process level). This leads me to think that Gaifman’s modelling may be interesting from a theoretical point of view but is not suitable for modelling the limits of human reasoning (not that that’d be his goal).
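To make the “almost no grey” point concrete, here is a crude caricature (mine, not Gaifman's actual construction): a reasoner that either reaches a sentence within its derivation budget and answers near-certainly, or stays at an uninformative 0.5.

```python
# Crude caricature (mine, not Gaifman's actual construction): a resource-
# bounded reasoner either reaches a sentence within its derivation budget
# and answers near-certainly, or leaves it at a flat 0.5 -- almost no grey.

def bounded_probability(derivation_depth, budget: int) -> float:
    """derivation_depth is None when no derivation is known at all."""
    if derivation_depth is not None and derivation_depth <= budget:
        return 0.99  # within reach of intuition: solved reliably, no guessing
    return 0.5       # out of reach: effectively no information

# hypothetical derivation depths for three tasks; budget = 5 steps of intuition
tasks = {"a+b=20, a+4=b": 2, "3x3 linear system": 12, "unseen puzzle": None}
for task, depth in tasks.items():
    print(task, "->", bounded_probability(depth, budget=5))
```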