It doesn’t seem odd at all: we have an expectation of the calculator, and if it fails to fulfill that expectation, we start to doubt that it is, in fact, what we thought it was (a working calculator).
Except that if you examine the workings of a calculator that does agree with us, you’re much, much less likely to find a wiring fault (that is, to find that it’s implementing a different algorithm).
if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree that 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
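To make the “trap” concrete, here’s a rough sketch (purely hypothetical; broken_add and patched_add are invented names, and the fault is stipulated for the sake of the example):

```python
def broken_add(a, b):
    """The machine's original algorithm, stipulated to get 2 + 2 wrong."""
    if (a, b) == (2, 2):
        return 5  # the hypothetical wiring fault
    return a + b

def patched_add(a, b):
    """The 'trap': special-case the one disputed input, leave the rest alone."""
    if (a, b) == (2, 2):
        return 4  # hard-coded answer, not derived from any general rule
    return broken_add(a, b)
```

The patched version agrees with us on 2+2 only by fiat; no general procedure has been repaired, which is why it seems wrong to say the patch “made it implement arithmetic”.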
I’ll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense) - we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that “these {foo} are the concepts which can be compressed by thus-and-such algorithm” (For instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems). That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
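As a rough illustration of that compactness claim (a toy example of my own, nothing more): a general adder is a fixed, tiny program that covers every pair of integers up to N, while an “Artificial Arithmetician” that simply memorizes the answers needs a table whose size grows with the number of cases.

```python
# General implementation: a constant-size description that handles any inputs.
def general_add(a, b):
    return a + b  # positional-numeral arithmetic; the program doesn't grow with N

# "Artificial Arithmetician": memorize every answer separately.
# For all pairs of integers below N there are N * N entries, so the
# description length grows with the number of cases rather than with log N.
N = 100
lookup_add = {(a, b): a + b for a in range(N) for b in range(N)}

assert general_add(41, 59) == lookup_add[(41, 59)] == 100
```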
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
What’s wrong with resurrecting (or rather, reformulating) Platonism? Although, it’s more a Platonic Formalism than straight Platonism.
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
Well, this seems a bit unclear. We are operating under the assumption that the setup looks very similar to a correct setup, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his prior for “this is a working calculator”, it doesn’t follow that he wouldn’t make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely to resume working correctly.
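As a toy numerical version of that adjustment (the probabilities are made up purely for illustration): one surprising wrong answer drives the expert’s confidence in “working calculator” down sharply, but a string of correct answers after the repair drives it back up.

```python
def update(prior, p_obs_if_working, p_obs_if_broken):
    """One step of Bayes' rule for the hypothesis 'this is a working calculator'."""
    joint = prior * p_obs_if_working
    return joint / (joint + (1 - prior) * p_obs_if_broken)

p = 0.95                      # initial confidence in the setup
p = update(p, 0.001, 0.5)     # sees 2 + 2 = 5: very unlikely if working, confidence collapses
for _ in range(20):           # after the fix, twenty spot checks all come out right
    p = update(p, 0.99, 0.5)  # each correct answer is more likely if the machine works
print(round(p, 4))            # confidence climbs back toward 1
```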
That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
Yes, it would be true, but what exactly is it that ‘is true’? The human brain is a tangle of probabilistic algorithms playing various functional roles. It is “intuitively obvious” that there should be a Solomonoff-irreducible (up to some constant) program that can be implemented, given sufficient background knowledge of all of the components involved: Boolean circuits implemented on some substrate in such and such a way that they “compute” “arithmetic operations on integers” (really the substrate is doing some fancy electrical acrobatics, to be later interpreted into a form we can perceive as an output, such as a sequence of pixels on some manner of screen, arranged in a way to resemble the numerical output we want). And it is intuitively obvious that this is a physical fact about the universe: arranging things in such a way leads to such an outcome.
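For what it’s worth, here is a minimal sketch of the kind of thing being gestured at (my own illustration): a ripple-carry adder built out of nothing but Boolean operations, which only “computes arithmetic on integers” once we interpret its bit patterns as numbers.

```python
def full_adder(a, b, carry):
    """One-bit full adder expressed purely as Boolean operations."""
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def circuit_add(x, y, width=8):
    """Add two small non-negative integers by running the circuit bit by bit.

    The circuit itself just shuffles 0s and 1s; calling the final
    bit pattern 'x + y' is our interpretation of it."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert circuit_add(2, 2) == 4
```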
It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism onto reality, any more than the logical positivist philosophers should have felt justified in doing that with mathematics and predicate logic a hundred years ago.
It is clear to me that we can perceive ‘computational’ patterns in top-level phenomena such as the output of calculators or mental computations, and that we can, and have, devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory, computational complexity, and computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.
I’m inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes’ book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl’s book “Causality”; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.
ETA: To clarify, I’m not attacking the computable universe hypothesis; I think it is likely right (though I think that the term ‘computable’, in the broad sense in which it is often used, needs some unpacking).