Why not call the set of all sets of actual objects with cardinality 3, “three”, the set of all sets of physical objects with cardinality 2, “two”, and the set of all sets of physical objects with cardinality 5, “five”? Then when I say that 2+3=5, all I would mean is that for any x in two and any y in three (provided x and y are disjoint), the union of x and y is in five. If you allow sets of physical objects, and sets of sets of physical objects, into your ontology, then you get this: 2+3=5 no matter what anyone thinks, and two and three are real objects existing out there.
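A minimal sketch of this proposal, with made-up object labels and Python frozensets standing in for sets of physical objects; the disjointness condition is needed to make the union claim come out right:

```python
# Toy sketch of the proposal above (hypothetical object labels, nobody's
# canonical construction): model "two", "three", "five" as the collections of
# object-sets with the right cardinality, and read 2+3=5 as a claim about
# unions of disjoint members.
from itertools import combinations

objects = ["apple1", "apple2", "rock1", "rock2", "rock3", "coin1", "coin2", "coin3"]

def number(n, domain):
    """The 'number' n: the set of all n-element subsets of the domain."""
    return {frozenset(c) for c in combinations(domain, n)}

two, three, five = number(2, objects), number(3, objects), number(5, objects)

# 2+3=5, restated: for any x in two and any y in three that share no objects,
# their union is in five.
assert all((x | y) in five
           for x in two for y in three
           if not (x & y))  # disjointness matters: overlapping sets would double-count
print("2+3=5 holds for every disjoint pair over this domain.")
```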
“But my dear sir, if the fact of 2 + 3 = 5 exists somewhere outside your brain… then where is it?”
Damned if I know.
Everywhere two and three things exist, “2 + 3 = 5” exists. Much like there is only one electron, there is one “2 + 3 = 5”. Electrons and mathematics are described by their behaviors. “If the behavior of electrons exists outside your brain… then where is it?”
Everywhere.
Some day EY will learn to taboo “exist”, and that will be his awakening as an instrumentalist.
Odd; EY never seemed to me particularly opposed to instrumentalism, or to hold views running against or away from it, when I was reading the Sequences.
I’m curious to see where that comment comes from.
I cannot speak for him, but my understanding is that he identifies instrumentalism with “traditional rationality”, which is but a small step toward Bayesianism.
Isn’t the distinction between the territory and the map an explicit distinction between what exists and what we theorize?
As I said many times before on this forum, the instrumental approach is that the map-territory distinction is a model, i.e. territory is in the map, not in the territory :)
I think I see where you are coming from with that now.
It seems to me that the territory assumption is necessary for morality, and not much else (because we want to care about things that “exist”, but otherwise probability theory is defined over possible observations only).
Of course a great number of unnecessary things have been called “necessary for morality”...
I’m going to read your comments a bit more and see if I can settle my mind on this instrumentalism thing. Do you recommend anything I should check out?
I think morality is a red herring here. “Wanting to care” about something is a confused state. I care about what I care about. If it so happens that what I care about is an element of a model rather than being something else, I don’t necessarily stop caring about it solely because of that fact.
That said, personally my response to instrumentalism is to take a step back and talk about expectations regarding consistency.
If we can agree that some models support predictions of future experiences better than others, I’m content to either refer to the model that best supports those predictions as a reality that actually exists, as a territory that maps describe, or as my preferred model, depending on what language makes communication easier. I suppose you could say I’m a compatibilist with respect to instrumentalism.
If we can’t agree on that, I’m not sure where to go from there.
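As an aside, here is a minimal sketch (my own toy example, not anything specified in this thread) of what “some models support predictions of future experiences better than others” can cash out to: score candidate models by the probability they assigned to what was actually observed.

```python
# Toy model comparison: rank two made-up models of a coin by the log-probability
# they assign to an observed sequence of flips.
import math

models = {"fair coin": 0.5, "biased coin": 0.9}   # hypothetical P(heads) under each model
observations = [1, 1, 0, 1, 0, 1, 1, 0]           # made-up data: 1 = heads, 0 = tails

def log_likelihood(p_heads, data):
    """Total log-probability the model assigns to the observed flips."""
    return sum(math.log(p_heads if flip == 1 else 1.0 - p_heads) for flip in data)

scores = {name: log_likelihood(p, observations) for name, p in models.items()}
print(scores)
print("Model that best supports these predictions:", max(scores, key=scores.get))
```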
I’m content to either refer to the model that best supports those predictions as a reality that actually exists, as a territory that maps describe, or as my preferred model, depending on what language makes communication easier.
I used to feel the same way, but then it is easy to start arguing about the imagined parts of the territory for which no map can ever exist, because “the territory is out there”, and about which of the many identical maps is “more right” (as opposed to “more useful for a given task”). And, given that there can be no experimental evidence to resolve such an argument, it can go on forever. Examples of this futile argument are How many angels can dance on the head of a pin?, QM interpretations, Tegmark’s mathematical universe, statements like “every imaginable world exists” and other untestable nonsense.
As an engineer, I don’t enjoy unproductive futile debates, so expending effort arguing about interpretations seems silly to me. Instrumentalism avoids worrying about “objective reality” and whether it has some yet-undiscovered “true laws” of which our theories are only an approximation. Life is easier that way. Or it would be, were it not for the “realists”, who keep insisting that their meta-model is the One True Path. That is not to say that I reject the map-territory distinction; I just place both parts of it inside the [meta]map.
Agreed that futile debates are silly. (I do sometimes enjoy them, but only when they’re fun.)
That said, I find it works for me, in order to avoid them, to accept that questions about the persistent thing (be it reality or a model) are only useful insofar as they lead us to a clearer understanding of the persistent thing. It’s certainly possible to construct and argue about questions that don’t do this, but it’s not a useful thing to do, and I try to avoid it.
I haven’t yet found it necessary to assert a firm position on the ontological nature of reality beyond “the persistent thing” in order to do that. Whether reality is “in the map” or “in the territory” or “doesn’t exist at all” seems to me just another futile debate.
I largely agree. I assert that the territory is in the map mostly as a Schelling fence of sorts, beyond which there is a slippery slope into philosophizing about untestables.
the territory assumption is necessary for morality
I don’t see how. Feel free to explicate.
Do you recommend anything I should check out?
Sorry. I wish I could say “Popper”, since he basically […], but he argued against Bohr’s instrumentalism on some grounds I don’t fully understand. A quote from Wikipedia:
my reply to instrumentalism consists in showing that there are profound differences between “pure” theories and technological computation rules, and that instrumentalism can give a perfect description of these rules but is quite unable to account for the difference between them and the theories.
Usually when I read a critique of instrumentalism, it is straw-manned first (I think of it as InSTRAWmentalism). I am quite well aware that this could be a problem with my, admittedly patchy, understanding of the issue, and am happy to change my mind when a good argument comes along.
Do you think the limit of the map as its error goes to zero exists? Do you think we will ever be able to determine whether or not the limit exists? What name would you give that limit if it existed?
I’m just trying to get a better idea of what you believe about instrumentalism. Personally, I think that every map is a territory (mathematical realism) because, among all the vacuous explanations for why we experience something instead of nothing, it seems to be a simpler model. Instrumentalism, in this case, means trying to figure out the probability distribution over the territories/maps you are a member of, or, in other words, which map is most likely to predict the measurements I make.
I can see how mathematical realism is obviated by Occam’s Razor, since it’s not necessary to explain any measurement, but it’s probably the best metaphysical idea I’ve run into, and it does lend some insight into the question of what to simulate (it doesn’t matter; every simulation already exists just as much as we do), what to care about (everything happens in some universe, so just try to optimize your own), immortality (some universes have infinite time and energy, and some of those universes will simulate us), and god/Omega (there exist beings in other universes that simulate our universe, but it doesn’t matter since our existence is independent of being simulated).
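A hedged illustration of that last sentence (my own gloss, with invented numbers): treat the candidate maps as hypotheses, give them a crude Occam-style prior, and weight each by how well it predicts the measurements.

```python
# Toy posterior over candidate "maps": prior * likelihood, normalized.
import math

# Invented candidates: each predicts P(heads) for a coin and gets a rough prior
# meant to stand in for an Occam-style simplicity penalty.
maps = {
    "fair":       {"p_heads": 0.5, "prior": 0.5},
    "biased-0.7": {"p_heads": 0.7, "prior": 0.3},
    "biased-0.9": {"p_heads": 0.9, "prior": 0.2},
}
measurements = [1, 0, 1, 1, 1, 0, 1, 1]   # made-up data: 1 = heads, 0 = tails

def likelihood(p_heads, data):
    """Probability the map assigns to the observed sequence."""
    return math.prod(p_heads if m == 1 else 1.0 - p_heads for m in data)

weights = {name: spec["prior"] * likelihood(spec["p_heads"], measurements)
           for name, spec in maps.items()}
total = sum(weights.values())
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"P({name} | measurements) = {w / total:.3f}")
```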
Do you think the limit of the map as its error goes to zero exists? Do you think we will ever be able to determine whether or not the limit exists? What name would you give that limit if it existed?
The equivalent, more layperson-friendly phrasing I prefer: will science ever explain everything we observe and predict everything we may ever observe? And my answer is: there is no way to tell at this point, and the answer[ability] is not relevant to anything we do. After a moment of thought you can see that this might not even be the right question to ask: some day we might be powerful enough and smart enough to create new physical laws, so even defining such a limit would be meaningless.
Even if the Universe’s fundamental nature can be changed without limit, there would still be a current territory that hasn’t changed yet. The future territory would be different, but if we knew how to create new laws we could also probably predict what the new territory would be like.
If the fundamental nature of the universe just changes over time on its own, then your argument is a lot stronger.
But should my map mark territory as being in the map, or in the territory?
It helps if you start by tabooing the words “territory”, “real”, “exist” and explaining what you mean by them.
He explicitly identifies as a realist somewhere, saying things along the lines of “once you have all these theories describing things, why postulate the additional fact that they don’t exist?” (that’s not an exact quote).
I already thought that Yudkowsky was a Platonist, given his position on Everett’s interpretation and Tegmark’s multiverses, but that can be considered conclusive evidence.
Why not call the set of all sets of actual objects with cardinality 3, “three”, the set of all sets of physical objects with cardinality 2, “two”, and the set of all sets of physical objects with cardinality 5, “five”?
Because that’s how naive class theory works, not how consistent formal mathematics works.
The closest thing to a canonical approach these days is to start from what you have (nothing) and call that the first set. Then you make sets from those sets in a very restrictive, axiomatic way. Variants get as exotic as the surreal numbers, but the running theme is to avoid defining sets by intension unless you’re quantifying over a known domain.
For the record, I don’t think any of these things “exist” in any meaningful sense. We can do mathematics with inconsistent systems just as well, if less usefully. The law of non-contradiction is something I don’t see how to get past (i.e., I can’t comprehend such a thing), and there is not much else distinguishing the consistent systems: they are just collections of statements to the effect that this and that follow if we grant these or those axioms. (Fortunately, it’s more interesting than that at the higher levels.)
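For concreteness, a small sketch of the standard construction alluded to above (the von Neumann naturals, built from the empty set rather than from physical objects):

```python
# Start from nothing (the empty set) and build each number from the sets you
# already have, instead of defining sets of physical objects by intension.
def von_neumann(n):
    """Return the von Neumann ordinal for n, as nested frozensets."""
    current = frozenset()               # 0 is the empty set
    for _ in range(n):
        current = current | {current}   # successor: n + 1 = n | {n}
    return current

zero, one, two, three = (von_neumann(k) for k in range(4))
assert one == frozenset({zero})
assert two == frozenset({zero, one})
assert len(three) == 3                  # the ordinal n has exactly n elements
print("2 =", two)
```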
You’ve misunderstood me. It’s really not at all conspicuous to allow a non-empty “set” into your ontology, but if you’d prefer, we can talk about heaps; they serve my purposes here (of course, by “heap”, I mean any random pile of stuff). Every heap has parts: you’re a heap of cells, decks are heaps of cards, masses are heaps of atoms, etc. Now if you apply a level filter to the parts of a heap, you can count them. For instance, I can count the organs in your body, count the organ cells in your body, and end up with two different values, though I counted the same object. The same object can constitute many heaps, as long as there are several ways of dividing the object into parts. So what we can do is just talk about the laws of heap combination, rather than the laws of numbers. We don’t require any further generality in our mathematics to do all our counting, and yet the only objects I’ve had to adopt into my ontology are heaps (rather inconspicuous material fellows, IMHO).
I should mention that this is not my real suggestion for a foundation of mathematics, but when it comes to the challenge of interpreting the theory of natural numbers without adopting any ghostly quantities, heaps work just fine.
(edit): I should mention that heaps, which require you to accept only a whole with parts and a level test on any given part, are much more ontologically inconspicuous than pure sets. Where exactly is the null set? Where is any pure set? I’ve never seen any of them. Of course, I see heaps all over the place.
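A toy sketch, on my own modelling assumptions rather than the commenter’s actual proposal, of how “level filters” and “laws of heap combination” could do the counting work described above:

```python
# A "heap" here is just a collection of labelled parts, a "level filter" picks
# which parts to count, and combining part-disjoint heaps adds their counts.
body = {"organ": ["heart", "liver", "left lung", "right lung"],
        "cell":  [f"cell{i}" for i in range(1000)]}   # made-up part lists

def count(heap, level):
    """Counting the same heap at different levels gives different values."""
    return len(heap.get(level, []))

print(count(body, "organ"), count(body, "cell"))   # 4 vs 1000: same object, two counts

def combine(heap_a, heap_b):
    """Heap combination: lump the parts of two heaps together, level by level."""
    return {level: heap_a.get(level, []) + heap_b.get(level, [])
            for level in set(heap_a) | set(heap_b)}

# "2 + 3 = 5" as a law of heap combination: a 2-rock heap lumped together with a
# 3-rock heap that shares no rocks is a 5-rock heap.
two_rocks = {"rock": ["rock1", "rock2"]}
three_rocks = {"rock": ["rock3", "rock4", "rock5"]}
assert count(combine(two_rocks, three_rocks), "rock") == 5
```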