What is clear to me is that when we set up a physical system (such as a von Neumann machine, or a human who has been ‘set up’ by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as “the correct answer”).
But note that there are also patterns of light which we would interpret as “the wrong answer”. If arithmetic is implementation-dependent, isn’t it a bit odd that whenever we build a calculator that outputs “5” for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)? Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4? Because, if arithmetic is implementation-dependent, you should be able to do so.
Are we then to take computation as “more fundamental” than physics?
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
But note that there are also patterns of light which we would interpret as “the wrong answer”.
I did note that; maybe not explicitly, but it isn’t really something that anyone would expect another person not to consider.
isn’t it a bit odd that whenever we build a calculator that outputs “5” for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)?
It doesn’t seem odd at all: we have an expectation of the calculator, and if it fails to fulfill that expectation we start to doubt that it is, in fact, what we thought it was (a working calculator). This refocuses the issue on us and the mechanics of how we compress information; we expected information ‘X’ at time t but instead received ‘Y’, and we decide that something is wrong with our model (and then aim to fix it by figuring out whether it is indeed a wiring problem, a bit-flip, a bug in the programming of the calculator, or some electromagnetic interference).
Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4?
No. But why is this? Because if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [The human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree with 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
The implementation is deemed correct only if it demonstrates itself to be correct. Only if it fulfills our expectations of it. With a calculator, we are looking for something that allows us to extend our ability to infer things about the world. If I know that a car has a mass of 1000 kilograms and a speed of 200 kilometers per hour, then I can determine whether it will be able to topple a wall, given that I have some number that encodes the amount of force the wall can withstand. I compute the output and compare it to the data for the wall.
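A minimal sketch of that inference, treating the wall’s rating as a kinetic-energy threshold (the threshold figure is made up for illustration; nothing above specifies one):

```python
def kinetic_energy_joules(mass_kg, speed_kmh):
    speed_ms = speed_kmh * 1000.0 / 3600.0   # convert km/h to m/s
    return 0.5 * mass_kg * speed_ms ** 2

WALL_THRESHOLD_J = 5e5   # hypothetical rating for the wall, for illustration only

car_energy = kinetic_energy_joules(1000, 200)   # ~1.54e6 J
print(car_energy > WALL_THRESHOLD_J)            # True: predict the wall topples
```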
Because, if arithmetic is implementation-dependent, you should be able to do so.
I tend to think it depends on a human-like brain that has been trained to interpret ‘2’, ‘+’ and ‘4’ in a certain way, so I don’t readily agree with your claim here.
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
It doesn’t seem odd at all: we have an expectation of the calculator, and if it fails to fulfill that expectation we start to doubt that it is, in fact, what we thought it was (a working calculator).
Except that if you examine the workings of a calculator that does agree with us, you’re much, much less likely to find a wiring fault (that is, to find that it’s implementing a different algorithm).
if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [The human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree with 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
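Concretely, the “trap” amounts to something like this (a hypothetical sketch, not anyone’s actual calculator; the off-by-one fault is just a stand-in for whatever the machine really does):

```python
def broken_add(a, b):
    # stands in for the machine's faulty algorithm (say, an off-by-one wiring fault)
    return a + b + 1

def patched_add(a, b):
    if (a, b) == (2, 2):     # the trap: override the usual algorithm for this one input
        return 4
    return broken_add(a, b)

print(patched_add(2, 2))   # 4 -- agrees with us on the case we tested
print(patched_add(2, 3))   # 6 -- still wrong; the patch did not make it implement arithmetic
```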
I’ll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense); we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that “these {foo} are the concepts which can be compressed by thus-and-such algorithm” (for instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems). That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
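A rough way to see the calculator-versus-Artificial-Arithmetician contrast, measuring “description length” crudely as the size of the program text (the exact counting is only illustrative):

```python
N = 100  # range of integers we want sums for

# A general adder: a constant-size program, independent of N.
general_adder_src = "def add(a, b):\n    return a + b\n"

# An "Artificial Arithmetician": an explicit table of every sum,
# whose description grows with the number of entries.
lookup_table_src = "TABLE = {\n" + "".join(
    f"    ({a}, {b}): {a + b},\n" for a in range(N) for b in range(N)
) + "}\n"

print(len(general_adder_src))   # a few dozen characters, whatever N is
print(len(lookup_table_src))    # on the order of hundreds of thousands of characters at N = 100
```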
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
What’s wrong with resurrecting (or rather, reformulating) Platonism? Although, it’s more a Platonic Formalism than straight Platonism.
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
Well, this seems a bit unclear. We are operating under the assumption that the setup looks very similar to a correct setup, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his priors for “this is a working calculator”, it doesn’t follow that he wouldn’t make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely to resume working correctly.
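As a toy illustration of that prior-lowering, with made-up numbers for the prior and the likelihoods:

```python
prior_working = 0.99           # assumed prior that this is a working calculator
p_wrong_given_working = 1e-4   # a working calculator almost never errs
p_wrong_given_broken = 0.5     # a broken one errs often

p_wrong = (prior_working * p_wrong_given_working
           + (1 - prior_working) * p_wrong_given_broken)
posterior_working = prior_working * p_wrong_given_working / p_wrong

print(round(posterior_working, 4))  # ~0.0194: one wrong answer lowers the credence sharply
```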
That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
Yes, it would be true, but what exactly is it that ‘is true’? The human brain is a tangle of probabilistic algorithms playing various functional roles. It is “intuitively obvious” that, given sufficient background knowledge of all of the components involved, there should be a Solomonoff-irreducible (up to some constant) program that can be implemented: Boolean circuits realised on some substrate, in such and such a way, that “compute” “arithmetic operations on integers” (really the substrate is doing some fancy electrical acrobatics, later interpreted into a form we can perceive as output, such as a sequence of pixels on some manner of screen arranged to resemble the numerical answer we want). And it is a physical fact about the universe that things arranged in such a way lead to such an outcome.
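A minimal sketch of the arrangement I have in mind: nothing but Boolean operations on bits, which we then interpret as integer addition (the 4-bit width is arbitrary):

```python
def full_adder(a, b, carry_in):
    # pure Boolean operations: XOR for the sum bit, AND/OR for the carry
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_4bit(x_bits, y_bits):
    # bit lists are least-significant-bit first, e.g. 2 -> [0, 1, 0, 0]
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out

# "2 + 2": the circuit only shuffles bits; *we* read the result as 4.
print(add_4bit([0, 1, 0, 0], [0, 1, 0, 0]))   # [0, 0, 1, 0], i.e. binary 0100 = 4
```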
It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism onto reality, any more than the logical positivist philosophers should have felt justified in doing so with mathematics and predicate logic a hundred years ago.
It is clear to me that we can perceive ‘computational’ patterns in top-level phenomena such as the outputs of calculators or mental computations, and that we can and have devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory, computational complexity, and computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.
I’m inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes’ book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl’s book “Causality”; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.
ETA: To clarify, I’m not attacking the computable universe hypothesis; I think it is likely right (though I think that the term ‘computable’, in the broad sense in which it is often used, needs some unpacking).