Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation.
If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.
It is not a mind-projection fallacy, any more than “the sheep control my pebbles” is a mind-projection fallacy. It’s just that it’s operating one meta-level higher.
If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.
People have very similar brains, and I’d bet that all of the people whose ideas are cognitively available to you shared a similar cultural experience (at least in terms of what intellectual capital was/is available to them).
Viewing mathematics as something that is at least partially a reflection of the way that humans tend to compress information, it seems like you could argue that there is an awful lot of stuff to unpack when you say “2+2 = 4 is true outside of implementation” as well as the term “cognitive agent”.
What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been ‘set up’ by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as “the correct answer”, perhaps some phonemes register in our cochlea and we store them in our working memory and compare them with the ‘expected’ phonemes). There appears to be an underlying regularity, but it isn’t clear to me what the true reduction looks like! Is the computation the ‘bottom level’? Do we aim to rephrase mathematics in terms of some algorithms that are capable of producing it? Are we then to take computation as “more fundamental” than physics?
What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been ‘set up’ by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as “the correct answer”)
But note that there are also patterns of light which we would interpret as “the wrong answer”. If arithmetic is implementation-dependent, isn’t it a bit odd that whenever we build a calculator that outputs “5” for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)? Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4? Because, if arithmetic is implementation-dependent, you should be able to do so.
Are we then to take computation as “more fundamental” than physics?
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
But note that there are also patterns of light which we would interpret as “the wrong answer”.
I did note that, maybe not explicitly but it isn’t really something that anyone would expect another person not to consider.
isn’t it a bit odd that whenever we build a calculator that outputs “5” for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)?
It doesn’t seem odd at all; we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator). This refocuses the issue on us and the mechanics of how we compress information; we expected information ‘X’ at time t, but instead received ‘Y’ and decide that something is wrong with our model (and then aim to fix it by figuring out if it is indeed a wiring problem or a bit-flip or a bug in the programming of the calculator or some electromagnetic interference).
Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4?
No. But why is this? Because if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree that 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
The implementation is deemed correct only if it demonstrates itself to be correct, only if it fulfills our expectations of it. With a calculator, we are looking for something that allows us to extend our ability to infer things about the world. If I know that a car has a mass of 1000 kilograms and a speed of 200 kilometers per hour, then I can determine whether it will be able to topple a wall, given that I have some number that encodes the amount of force the wall can withstand. I compute the output and compare it to the data for the wall.
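The car-and-wall inference can be sketched numerically. This is a toy model: the comment only gestures at "some number encoding the force the wall can withstand", so the energy-threshold framing and the tolerance value here are my assumptions, not anything stated in the discussion.

```python
def kinetic_energy_joules(mass_kg: float, speed_kmh: float) -> float:
    """KE = 1/2 * m * v^2, with speed converted from km/h to m/s."""
    v = speed_kmh * 1000.0 / 3600.0
    return 0.5 * mass_kg * v * v

# The car from the example: 1000 kg at 200 km/h.
ke = kinetic_energy_joules(1000.0, 200.0)

# Hypothetical threshold the wall can absorb before toppling (assumed).
WALL_TOLERANCE_J = 1.0e6

print(ke, ke > WALL_TOLERANCE_J)
```

The point of the passage survives the toy model: we compute an output and compare it against independent data about the world, and that comparison is what licenses calling the implementation correct.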
Because, if arithmetic is implementation-dependent, you should be able to do so.
I tend to think it depends on a human-like brain that has been trained to interpret ‘2’, ‘+’ and ‘4’ in a certain way, so I don’t readily agree with your claim here.
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
It doesn’t seem odd at all, we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator).
Except that if you examine the workings of a calculator that does agree with us, you’re much, much less likely to find a wiring fault (that is, to find that it’s implementing a different algorithm).
if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree that 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
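The trap scenario can be made concrete with a hypothetical sketch (the function names are mine, invented for illustration): a machine that implements addition everywhere except on the single input (2, 2), then "fixed" by a special-case override.

```python
def quirky_add(a, b):
    """A hypothetical machine implementing ordinary addition
    everywhere except on the single input (2, 2)."""
    if (a, b) == (2, 2):
        return 5  # the one disagreement
    return a + b

def trapped_add(a, b):
    """The 'fix': a trap that detects the case 'has been asked 2+2'
    and overrides the usual algorithm, leaving everything else alone."""
    if (a, b) == (2, 2):
        return 4
    return quirky_add(a, b)

# The trap restores agreement on every input, yet the agreement on 2+2
# now lives in a lookup entry rather than in the general rule.
print(trapped_add(2, 2))  # 4
print(trapped_add(3, 5))  # 8
```

The sketch shows why the intuition bites: `trapped_add` now matches arithmetic input-for-input, but the structure of the program makes it hard to say the trap itself "implements" anything beyond memorisation.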
I’ll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense); we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that “these {foo} are the concepts which can be compressed by thus-and-such algorithm” (for instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems). That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
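The compression claim above can be illustrated with a small sketch (my own toy quantification, not part of the original argument): positional notation for an integer N takes O(log N) bits, whereas an Artificial Arithmetician memorising every sum up to N needs a table whose size grows quadratically.

```python
def positional_bits(n: int) -> int:
    """Bits needed to write n in binary (positional notation)."""
    return max(1, n.bit_length())

def lookup_table_entries(n: int) -> int:
    """Entries an 'Artificial Arithmetician' would need to memorise
    every sum a+b for 0 <= a, b <= n."""
    return (n + 1) ** 2

# The gap widens rapidly: logarithmic growth versus quadratic growth.
for n in (10, 1000, 10**6):
    print(n, positional_bits(n), lookup_table_entries(n))
```

At N = 10^6 the positional scheme needs 20 bits where the memoriser needs about 10^12 table entries, which is the sense in which the calculator, unlike the Artificial Arithmetician, exploits a real compressibility fact.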
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
What’s wrong with resurrecting (or rather, reformulating) Platonism? Although, it’s more a Platonic Formalism than straight Platonism.
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
Well, this seems a bit unclear. We are operating under the assumption that the setup looks very similar to a correct setup, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his priors for “this is a working calculator”, it doesn’t follow that he wouldn’t make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely to resume working correctly.
That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
Yes, it would be true, but what exactly is it that ‘is true’? The human brain is a tangle of probabilistic algorithms playing various functional roles. It is “intuitively obvious” that there should be a Solomonoff-irreducible (up to some constant) program that can be implemented, given sufficient background knowledge of all of the components involved: Boolean circuits implemented on some substrate in such and such a way that “computes” “arithmetic operations on integers” (really it is doing some fancy electrical acrobatics, later interpreted into a form we can perceive as an output, such as a sequence of pixels on some manner of screen arranged to resemble the numerical output we want). And it is a physical fact about the universe that things arranged in such a way lead to such an outcome.
It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism on to reality any more than logical positivist philosophers should have felt justified in doing that with mathematics and predicate logic a hundred years ago.
It is clear to me that we can perceive ‘computational’ patterns in top level phenomena such as the output of calculators or mental computations and that we can and have devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory/computational complexity/computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.
I’m inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes’ book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl’s book “Causality”; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.
ETA: To clarify, I’m not attacking the computable universe hypothesis; I think it is likely right (though I think that the term ‘computable’ in the broad sense in which it is often used needs some unpacking).
I am arguing against your concept “that truth exists outside of any implementation”.
My claim is that “truth” can only be determined and represented within some kind of truth evaluating physical context; there is nothing about the resulting physical state that implies or requires non-physical truth.
Our minds are not transparent windows unto veridical reality; when you look at a rock, you experience not the rock itself, but your mind’s representation of the rock, reconstructed from photons bouncing off its surface.
To your question:
If that is so, then how come others tend to reach the same truth?
These others are producing physical artifacts such as writing or speech, which through some chain of physical interactions eventually trigger state changes in your brain. At a higher meta-level, you are taking multiple forms of observations, transforming them within your brain/mind and then comparing them… eventually concluding that “others tend to reach the same truth”. Another mind with its own unique perspective may come to a different conclusion, such as “Fred is wearing a funny hat.”
Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.
Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.
I think a meta- has gone missing here: I can’t be certain that others tend to reach the same truth (rather than funny hats), and I can’t be certain that 2+2=4. I can’t even be certain that there is a fact-of-the-matter about whether 2+2=4. But it seems damned likely, given Occamian priors, that there is a fact-of-the-matter about whether 2+2=4 (and, inasmuch as a reflective mind can have evidence for anything, which has to be justified through a strange loop on the bedrock, I have strong evidence that 2+2 does indeed equal 4).
That “truth” in the map doesn’t imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject. If two minds implement the same computation, and reach different answers, then I simply do not believe that they were really implementing the same computation. If you compute 2+2 but get struck by a cosmic ray that flips a bit and makes you conclude “5!”, then you actually implemented the computation “2+2 with such-and-such a cosmic ray bitflip”.
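The cosmic-ray case can be made concrete (a toy sketch; the single-bit fault model is my assumption): flipping the lowest bit of the correct result 4 yields 5, so the struck device is faithfully implementing a different, perfectly deterministic computation, namely “2+2 followed by an XOR with 1”.

```python
def add(a, b):
    """The intended computation."""
    return a + b

def add_with_bitflip(a, b, bit=0):
    """The same addition, but a 'cosmic ray' flips one bit of the
    result. This is a different (deterministic) computation."""
    return add(a, b) ^ (1 << bit)

print(add(2, 2))               # 4
print(add_with_bitflip(2, 2))  # 5
```

On this view the two devices never disagreed about the same computation: one computed `add`, the other computed `add_with_bitflip`, and each answer is the correct output of the computation actually performed.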
I am not able to comprehend the workings of a mind which believes arithmetic truth to be a property only of minds, any more than I am able to comprehend a mind which believes sheep to be a property only of buckets. Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.
Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.
Restating my claim in terms of sheep: The identification of a sheep is a state change within a context of evaluation that implements sheep recognition. So a sheep exists in that context.
Physical reality however does not recognize sheep; it recognizes and responds to physical reality stuff. Sheep don’t exist within physical reality.
“Sheep” is at a different meta-level than the chain of physical inference that led to that classification.
That “truth” in the map doesn’t imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject.
“Truth” is at a different meta-level than the chain of physical inference that led to that classification. There is no requirement that “truth” is in the set of stuff that has meaning within the territory.
When you look at the statement 2+2=4 you think some form of “hey, that’s true”. When I look at the statement, I also think some form of “hey, that’s true”. We can then talk and both come to our own unique conclusion that the other person agrees with us. This process does not require a metaphysical arithmetic; it only requires a common context.
For example, we both have a proximal existence within the physical universe, we have a communication channel, we both understand English, and we both understand basic arithmetic. These types of common contexts allow us to make some very practical and reasonable assumptions about what the other person means.
Common contexts allow us to agree on the consequences of arithmetic.
The short summary is that meaning/existence is formed by contexts of evaluation, and common contexts allow us to communicate. These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.
When you look at the statement 2+2=4 you think some form of “hey, that’s true”. When I look at the statement, I also think some form of “hey, that’s true”. We can then talk and both come to our own unique conclusion that the other person agrees with us.
I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it’s not reflective. Putting two pebbles next to two pebbles also agrees.
Look at the discussion under this comment; I maintain that cognitive agents converge, even if their only common context is modus ponens—and that this implies there is something to be converged upon. At the least, it is ‘true’ that that-which-cognitive-agents-converge-on takes the value that it does (rather than any other value, like “1=0”).
These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.
Mathematical realism also explains my observations and operates entirely within the mathematical universe; the concept of physical existence is not needed. The ‘physical existence hypothesis’ has the burdensome detail that extant physical reality follows mathematical laws; I do not see a corresponding burdensome detail on the ‘mathematical realism hypothesis’. Thus by Occam, I conclude mathematical realism and no physical existence.
I am not sure I have answered your objections because I am not sure I understand them; if I do not, then I plead merely that it’s 8AM, I’ve been up all night, and I need some sleep :(
I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it’s not reflective. Putting two pebbles next to two pebbles also agrees.
Agreement with statements such as 2+2=4 is not a function that desk calculators perform. It is not the function performed when you place two pebbles next to two pebbles.
Agreement is an evaluation performed by your mind from its unique position in the universe.
… this implies there is something to be converged upon.
The conclusion that convergence has occurred must be made from a context of evaluation. You make observations and derive a conclusion of convergence from them. Convergence is a state of your map, not a state of the territory.
Mathematical realism also explains my observations and operates entirely within the mathematical universe; …
Mathematical realism appears to confuse the map for the territory—as does scientific realism, as does physical realism.
When I refer to physical reality or existence I am only referring to a convenient level of abstraction. Space, time, electrons, arithmetic, these all are interpretations formed from different contexts of evaluation. We form networks of maps to describe our universe, but these maps are not the territory.
Gottlob Frege coined the term context principle in his Foundations of Arithmetic, 1884 (translated). He stated it as “We must never try to define the meaning of a word in isolation, but only as it is used in the context of a proposition.”
I am saying that we must never try to identify meaning or existence in isolation, but only as they are formed by a context of evaluation.
When you state:
Putting two pebbles next to two pebbles also agrees.
I look for the context of evaluation that produces this result—and I recognize that the pebbles and agreement are states formed within your mind as you interact with the universe. To believe that these states exist in the universe you are interacting with is a mind projection fallacy.
You are experiencing a mind projection fallacy.
The theorems don’t exist unless an implementation produces them, and once produced, they exist only within a context that can represent them.
In the same way, the truth you refer to is generated by and exists within your mind. It has no existence outside of that implementation.
If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.
It is not a mind-projection fallacy, any more than “the sheep control my pebbles” is a mind-projection fallacy. It’s just that it’s operating one meta-level higher.
People have very similar brains and I’d bet that all of the ideas of people that are cognitively available to you shared a similar cultural experience (at least in terms of what intellectual capital was/is available to them).
Viewing mathematics as something that is at least partially a reflection of the way that humans tend to compress information, it seems like you could argue that there is an awful lot of stuff to unpack when you say “2+2 = 4 is true outside of implementation” as well as the term “cognitive agent”.
What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been ‘set up’ by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as “the correct answer”, perhaps some phonemes register in our cochlea and we store them in our working memory and compare them with the ‘expected’ phonemes). There appears to be an underlying regularity, but it isn’t clear to me what the true reduction looks like! Is the computation the ‘bottom level’? Do we aim to rephrase mathematics in terms of some algorithms that are capable of producing it? Are we then to take computation as “more fundamental” than physics?
Does this make sense?
But note that there are also patterns of light which we would interpret as “the wrong answer”. If arithmetic is implementation-dependent, isn’t it a bit odd that whenever we build a calculator that outputs “5″ for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)? Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4? Because, if arithmetic is implementation-dependent, you should be able to do so.
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
I did note that, maybe not explicitly but it isn’t really something that anyone would expect another person not to consider.
It doesn’t seem odd at all, we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator). This refocuses the issue on us and the mechanics of how we compress information; we expected information ‘X’ at time t, but instead received ‘Y’ and decide that something is wrong with out model (and then aim to fix it by figuring out if it is indeed a wiring problem or a bit-flip or a bug in the programming of the calculator or some electromagnetic interference).
No. But why is this? Because if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [The human decides she ve was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree with 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
The implementation is decidedly correct only if it demonstrates itself to be correct. Only if it fulfills our expectations of it. With a calculator, we are looking for something that allows us to extend our ability to infer things about the world. If I know that a car has a mass of 1000 kilograms and a speed of 200 kilometers for hour, then I can determine whether it will be able to topple a wall given that I have some number that encoded the amount of force it can withstand. I compute the output and compare it to the data for the wall.
I tend to think it depends on a human-like brain that has been trained to interpret ‘2’, ‘+’ and ‘4’ in a certain way, so I don’t readily agree with your claim here.
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
Except that if you examine the workings of a calculator that does agree with us, you’re much much less likely to find a wiring fault (that is, that it’s implementing a different algorithm).
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
I’ll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense) - we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that “these {foo} are the concepts which can be compressed by thus-and-such algorithm” (For instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems). That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
What’s wrong with resurrecting (or rather, reformulating) Platonism? Although, it’s more a Platonic Formalism than straight Platonism.
Well, this seems a bit unclear. We are operating under the assumption that the set up looks very similar to a correct set up, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his priors for “this is a working calculator”, it doesn’t follow that he wouldn’t make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely that it should resume working correctly.
Yes, it would be true, but what exactly is it that ‘is true’? The human brain is a tangle of probabilistic algorithms playing various functional roles; it is “intuitively obvious” that there should be a Solomonoff-irreducible (up to some constant) program that can be implemented (given sufficient background knowledge of all of the components involved; Boolean circuits implemented on some substrate in such and such a way that “computes” “arithmetic operations on integers” (really it is doing some fancy electrical acrobatics, to be later interpreted first into a form we can perceive as an output, such as a sequence of pixels on some manner of screen arranged in a way to resemble the numerical output we want etc.) and that this is a physical fact about the universe (that we can have things arranged in such a way lead to such an outcome).
It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism on to reality any more than logical positivist philosophers should have felt justified in doing that with mathematics and predicate logic a hundred years ago.
It is clear to me that we can perceive ‘computational’ patterns in top level phenomena such as the output of calculators or mental computations and that we can and have devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory/computational complexity/computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.
I’m inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes’ book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl’s book “Causality”; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.
ETA: To Clarify, I’m not attacking the computable universe hypothesis; I think it is likely right (though I think that the term ‘computable’ in the broad sense in which it is often used needs some unpacking).
I am arguing against your concept “that truth exists outside of any implementation”.
My claim is that “truth” can only be determined and represented within some kind of truth evaluating physical context; there is nothing about the resulting physical state that implies or requires non-physical truth.
As stated here
To your question:
These others are producing physical artifacts such as writing or speech, which through some chain of physical interactions eventually trigger state changes in your brain. At a higher meta-level, You are taking multiple forms of observations, transforming them within your brain/mind and then comparing them… eventually concluding that “others tend to reach the same truth”. Another mind with its own unique perspective may come to a different conclusion such as “Fred is wearing a funny hat.”
Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.
I think a meta- has gone missing here: I can’t be certain that others tend to reach the same truth (rather than funny hats), and I can’t be certain that 2+2=4. I can’t even be certain that there is a fact-of-the-matter about whether 2+2=4. But it seems damned likely, given Occamian priors, that there is a fact-of-the-matter about whether 2+2=4 (and, inasmuch as a reflective mind can have evidence for anything, which has to be justified through a strange loop on the bedrock, I have strong evidence that 2+2 does indeed equal 4).
That “truth” in the map doesn’t imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject. If two minds implement the same computation, and reach different answers, then I simply do not believe that they were really implementing the same computation. If you compute 2+2 but get struck by a cosmic ray that flips a bit and makes you conclude “5!”, then you actually implemented the computation “2+2 with such-and-such a cosmic ray bitflip”.
I am not able to comprehend the workings of a mind which believes arithmetic truth to be a property only of minds, any more than I am able to comprehend a mind which believes sheep to be a property only of buckets. Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.
Restating my claim in terms of sheep: The identification of a sheep is a state change within a context of evaluation that implements sheep recognition. So a sheep exists in that context.
Physical reality however does not recognize sheep; it recognizes and responds to physical reality stuff. Sheep don’t exist within physical reality.
“Sheep” is at a different meta-level than the chain of physical inference that led to that classification.
“Truth” is at a different meta-level than the chain of physical inference that led to that classification. There is no requirement that “truth” is in the set of stuff that has meaning within the territory.
When you look at the statement 2+2=4, you think some form of “hey, that’s true”. When I look at the statement, I also think some form of “hey, that’s true”. We can then talk, and each come to our own unique conclusion that the other person agrees with us. This process does not require a metaphysical arithmetic; it only requires a common context.
For example, we both have a proximal existence within the physical universe, we have a communication channel, we both understand English, and we both understand basic arithmetic. These types of common contexts allow us to make some very practical and reasonable assumptions about what the other person means.
Common contexts allow us to agree on the consequences of arithmetic.
The short summary is that meaning/existence is formed by contexts of evaluation, and common contexts allow us to communicate. These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.
I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it’s not reflective. Putting two pebbles next to two pebbles also agrees.
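The calculator-and-pebbles point can be sketched minimally (the pebble encoding is my own illustration): two unrelated, non-reflective implementations of 2 + 2 that converge on the same value without either one evaluating anything about the other.

```python
# Two non-reflective "implementations" of 2 + 2:
calculator_result = 2 + 2               # machine arithmetic
pebbles = ["o", "o"] + ["o", "o"]       # placing two pebbles next to two
pebble_result = len(pebbles)            # counting the combined pile

# Neither process reflects on what it is doing, yet both land on 4.
assert calculator_result == pebble_result == 4
```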
Look at the discussion under this comment; I maintain that cognitive agents converge, even if their only common context is modus ponens—and that this implies there is something to be converged upon. At the least, it is ‘true’ that that-which-cognitive-agents-converge-on takes the value that it does (rather than any other value, like “1=0”).
Mathematical realism also explains my observations and operates entirely within the mathematical universe; the concept of physical existence is not needed. The ‘physical existence hypothesis’ has the burdensome detail that extant physical reality follows mathematical laws; I do not see a corresponding burdensome detail on the ‘mathematical realism hypothesis’. Thus by Occam, I conclude mathematical realism and no physical existence.
I am not sure I have answered your objections because I am not sure I understand them; if I do not, then I plead merely that it’s 8AM, I’ve been up all night, and I need some sleep :(
Agreement with statements such as 2+2=4 is not a function that desk calculators perform. It is not the function performed when you place two pebbles next to two pebbles.
Agreement is an evaluation performed by your mind from its unique position in the universe.
The conclusion that convergence has occurred must be made from a context of evaluation. You make observations and derive a conclusion of convergence from them. Convergence is a state of your map, not a state of the territory.
Mathematical realism appears to confuse the map for the territory—as does scientific realism, as does physical realism.
When I refer to physical reality or existence I am only referring to a convenient level of abstraction. Space, time, electrons, arithmetic: these are all interpretations formed from different contexts of evaluation. We form networks of maps to describe our universe, but these maps are not the territory.
Gottlob Frege coined the term context principle in his Foundations of Arithmetic, 1884 (translated). He stated it as “We must never try to define the meaning of a word in isolation, but only as it is used in the context of a proposition.”
I am saying that we must never try to identify meaning or existence in isolation, but only as they are formed by a context of evaluation.
When you state:
I look for the context of evaluation that produces this result—and I recognize that the pebbles and agreement are states formed within your mind as you interact with the universe. To believe that these states exist in the universe you are interacting with is a mind projection fallacy.