That is, if you cared about something closer to the reality of what happens to your sister, rather than your experience of it, you’d have hesitated in that choice long enough to ask Omega whether she would prefer death to being imprisoned on Mars.
And what if he did ask?
Then, as I said, he cares about something closer to the reality.
The major point I’ve been trying to make in this thread is that because human preferences are not just in the map but of the map, people can persist in delusions about their motivations. And not asking the question is a perfect example of the sort of decision error this can produce!
However, asking the question doesn’t magically make the preference about the territory either; in order to prefer that the future include his sister’s best interests, he must first have an experience of the sister and a reason to wish well of her. But it’s still better than not asking, which is basically wireheading.
The irony I find in this discussion is that people seem to think I’m in favor of wireheading because I point out that we’re all doing it, all the time. When in fact, the usefulness of being aware that it’s all wireheading is that it makes you better at noticing when you’re doing it less usefully.
The fact that he hadn’t asked his sister, or about his sister’s actual well-being, instantly jumped off the screen at me, because it was (to me) obvious wireheading.
So, you could say that I’m biased by my belief to notice wireheading more, but that’s an advantage for a rationalist, not a disadvantage.
Is human knowledge also not just in the map, but exclusively of the map? If not, what’s the difference?
Any knowledge about the actual territory can in principle be reduced to mechanical form without the presence of a human being in the system.
To put it another way, a preference is not a procedure, process, or product. The very use of the word “preference” is a mind projection—mechanical systems do not have “preferences”—they just have behavior.
The only reason we even think we have preferences in the first place (let alone that they’re about the territory!) is because we have inbuilt mind projection. The very idea of having preferences is hardwired into the model we use for thinking about other animals and people.
You never answered my question.
You said, “if not, what’s the difference”, and I gave you the difference. I.e., we can have “knowledge” of the territory.
So, knowledge exists in the structure of the map and is about the territory, while preference can’t be implemented in natural artifacts. Preference is a magical property of subjective experience, and it is over maps, or about subjective experience, but not, for example, about the brain. Saying that preference exists in the structure of the map, or that it is about the territory, is a confusion that you call “mind projection.” Does that summarize your position? What are the specific errors in this account?
No, “preference” is an illusory magical property projected by brains onto reality, which contains only behaviors.
Our brains infer “preferences” as a way of modeling expected behaviors of other agents: humans, animals, and anything else we perceive as having agency (e.g. gods, spirits, monsters). When a thing has a behavior, our brains conclude that the thing “prefers” to have either the behavior or the outcome of the behavior, in a particular circumstance. In other words, “preference” is a label attached to a clump of behavior-tendency observations and predictions in the brain—not a statement about the nature of the thing being observed.
Thus, presuming that these “preferences” actually exist in the territory is supernaturalism, i.e., acting as though basic mental entities exist.
My original point had more to do with the types of delusion that occur when we reason on the basis of preferences actually existing, rather than the idea simply being a projection of our own minds. However, the above will do for a start, as I believe my other conclusions can be easily reached from this point.
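(A minimal Python sketch of the labeling mechanism described above — an observer summarizing another agent’s observed behavior as a “preference”; the function and the toy data are illustrative assumptions, not anything from the original comments.)

```python
# Illustrative sketch only: a "preference" as a label an observer attaches to
# a clump of behavior observations, rather than a property of the observed thing.
from collections import Counter

def infer_preference_label(observations):
    """observations: list of (circumstance, behavior) pairs seen for some agent."""
    counts = Counter(behavior for _, behavior in observations)
    most_common_behavior, _ = counts.most_common(1)[0]
    # The label below lives in the observer's model of the agent,
    # not in the agent being observed.
    return f"prefers to {most_common_behavior}"

print(infer_preference_label([("cold", "heat"), ("cold", "heat"), ("hot", "cool")]))
# -> "prefers to heat"
```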
Do you think someone is advocating the position that goodness of properties of the territory is an inherent property of the territory (that sounds like a kind of moral realism)? This looks like a failure to distinguish between 1-place and 2-place words. You could analogize preference (and knowledge) as a relation between the mind and the (possible states of the) territory, one that is neither a property of the mind alone, nor of the territory alone, but a property of their being involved in a certain interaction.
No, I assume that everybody who’s been seriously participating has at least got that part straight.
Now you’re getting close to what I’m saying, but on the wrong logical level. The logical error is that you can’t express a 2-place relationship between a map and the territory covered by that map within that same map, as that amounts to claiming the territory is embedded within that map.
If I assert that my preferences are “about” the real world, I am making a category error because my preferences are relationships between portions of my map, some of which I have labeled as representing the territory.
The fact that there is a limited isomorphism between that portion of my map and the actual territory does not make my preferences “about” the territory, unless you represent that idea in another map.
That is, I can represent the idea that “your” preferences are about the territory in my map… in that I can posit a relationship between the part of my map referring to “you”, and the part of my map referring to “the territory”. But that “aboutness” relationship is only contained in my map; it doesn’t exist in reality either.
That’s why it’s always a mind projection fallacy to assert that preferences are “about” territory: one cannot assert it of one’s own preferences, because that implies the territory is inside the map. And if one asserts it of another person’s preferences, then that one is projecting their own map onto the territory.
I initially only picked on the specific case of self-applied projection, because understanding that case can be very practically useful for mind hacking. In particular, it helps to dissolve certain irrational fears that changing one’s preferences will necessarily result in undesirable futures. (That is, these fears are worrying that the gnomes and fairies will be destroyed by the truth, when in fact they were never there to start with.)
How’s that? You can write Newton’s law of universal gravitation describing the orbit of the Earth around the Sun on a piece of paper located on the surface of a table standing in a house on the surface of the Earth. Where does this analogy break from your point of view?
“…but you can’t fold up the territory and put it in your glove compartment.”
The “aboutness” relationship between the written version of Newton’s law and the actual instances of it is something that lives in the map in your head.
IOW, the aboutness is not on the piece of paper. Nor does it exist in some supernatural link between the piece of paper and the objects acting in accordance with the expressed law.
Located on the planet Earth.
And this helps your position how?
Your head describes how your head rotates around the Sun.
No, your head is rotating around the Sun, and it contains a description relating the ideas of “head” and “Sun”. You are confusing head 1 (the real head) with head 2 (the “head” pictured inside head 1), as well as Sun 1 (the real Sun) and Sun 2 (the “Sun” pictured inside head 1).
No, I’m not confusing them. They are different things. Yet the model simulates the real thing, which means the following (instead of magical aboutness): by examining the model it’s possible to discover new properties of its real counterpart that were not apparent when the model was being constructed, and that can’t be observed directly (or it’s just harder to do), yet can be computed from the model.
Indeed. Although more precisely, examining the model merely suggests or predicts these “new” (rather, previously undiscovered, unnoticed, or unobservable) properties.
That is what I mean by isomorphism between model and territory. The common usage of “about”, however, projects an intention onto this isomorphism—a link that can only exist in the mind of the observer, not in the similarity of shapes between one physical process and another.
Since an agent’s possible actions are one of the things in the territory captured by the model, it’s possible to use the model to select an action leading to a preferable outcome, and to perform the action thus selected, determining the territory to conform with the plan. The correspondence between the preferred state of the world in the mind and the real world is ensured by this mechanism for turning plans into actuality. Pathologies aside, of course.
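(A minimal sketch of the mechanism just described: the agent’s model scores the predicted outcome of each available action, and acting on the selected plan is what makes the preferred state and the real world correspond. The toy model and utility function are illustrative assumptions only.)

```python
# Illustrative sketch only: model-based action selection in a toy one-dimensional world.

def select_action(model, utility, state, actions):
    """Pick the action whose model-predicted outcome scores highest."""
    return max(actions, key=lambda action: utility(model(state, action)))

model = lambda state, action: state + action    # model's prediction of the next state
utility = lambda state: -abs(state - 10.0)      # "prefer" states near 10

print(select_action(model, utility, state=3.0, actions=[-1.0, 0.0, 1.0]))
# -> 1.0 (the plan predicted to move the world toward the preferred state)
```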
I don’t disagree with anything you’ve just said, but it does nothing to support the idea of an isomorphism inherently meaning that one thing is “about” another.
If I come across a near-spherical rock that resembles the moon, does this make the rock “about” the moon? If I find another rock that is shaped the same, does that mean it is about the moon? The first rock? Something else entirely?
The “aboutness” of a thing can’t be in the thing, and that applies equally to thermostats and humans.
The (external) aboutness of a thermostat’s actions doesn’t reside in the thermostat’s map, and humans are deluded when they project that the (external) aboutness of their own actions actually resides within the same map they’re using to decide those actions. It is merely a sometimes-useful (but often harmful) fiction.
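(To make the thermostat point concrete, a minimal sketch of everything the device’s “map” contains — a sensor reading and a setpoint. The names are illustrative assumptions; nothing here refers to the room, so any “aboutness” of its behavior is ascribed from outside.)

```python
# Illustrative sketch only: the thermostat's entire "map" is two numbers.

def thermostat_step(sensor_temp: float, setpoint: float) -> str:
    """Pure behavior: compare the sensor reading to the setpoint and act."""
    if sensor_temp < setpoint - 0.5:
        return "heat"
    if sensor_temp > setpoint + 0.5:
        return "cool"
    return "idle"

# Nothing in this state is "about" the room or its occupants; an outside
# observer's map is what relates these numbers to the actual room.
print(thermostat_step(sensor_temp=18.0, setpoint=21.0))  # -> "heat"
```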
Taboo “aboutness” already. However unfathomably confused the philosophic and folk usage of this word may be, it doesn’t interest me much. What I mean by this word I described in these comments, and this usage seems reasonably close to the usual one, which justifies hijacking the word for the semi-technical meaning rather than inventing a new one. This is also the way meaning/aboutness is developed in formal theories of semantics.
So, you are saying that you have no argument with my position, because you have not been using either “about” or “preference” with their common usage?
If that is the case, why couldn’t you simply say that, instead of continued argument and posturing about your narrower definition of the words? ISTM you could have pointed that out days ago and saved us all a lot of time.
This is also not the first time where I have been reducing the common usage of a word (e.g. “should”) and then had you argue that I was wrong, based on your own personal meaning of the word.
Since I have no way of knowing in advance all of the words you have chosen to redefine in your specialized vocabulary, would it be too much to ask if you point out which words you are treating as specialized when you argue that my objection to (or reduction of) the common meaning of the word is incorrect, because it does not apply to your already-reduced personal version of the word?
Then, I could simply nod, and perhaps ask for your reduction in the case where I do not have a good one already, and we would not need to have an extended argument where we are using utterly incompatible definitions for such words as “about”, “preference”, and “should.”
Actually, the point is that most of the other usages of these words are meaningless confusion, and the argument is that this particular semi-technical sense is what the word actually means, when you get the nonsense out of it. It’s not how it’s used, but it’s the only meaningful thing that fits the idea.
Since you don’t just describe the usage of the word, but argue for the confusion behind it, we have a disagreement. Presenting a clear definition is the easy part. Showing that ten volumes of the Encyclopedia of Astrology are utter nonsense is harder, and arguing with each point made in its chapters is a wrong approach. It should be debunked on the meta level, with an argument that doesn’t require the object-level details, but that requires an understanding of the general shape of the confusion.
Yes, but ones which most people do not understand to be confusion, and the only reason I started this discussion in the first place was because I was trying to clear up one point in that confusion.
I am arguing against the confusion, not for the confusion. So, as far as I can tell, there should be no disagreement.
In practice, however, you have been making arguments that sound like you are still confusing map and territory in your own thinking, despite seeming to agree with my reasoning on the surface. You are consistently treating “about” as a 2-way relationship, when, to be minimally cohesive, it requires 3 entities: the 2 entities that have an isomorphism, and the third entity whose map ascribes some significance to this isomorphism.
You’ve consistently omitted the presence of the third entity, making it sound as though you do not believe it to be required, and thereby committing the mind projection fallacy.
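(A minimal sketch of the three-entity reading of “about” argued for above: the aboutness ascription is an entry in some third party’s map relating the two isomorphic things, not a property of either thing. The class and field names are illustrative assumptions.)

```python
# Illustrative sketch only: "about" as a 3-place record held in an observer's map.
from dataclasses import dataclass

@dataclass
class AboutnessAscription:
    representation: str  # e.g. the written law on the paper
    referent: str        # e.g. the Earth-Sun orbit it is taken to describe
    ascribed_by: str     # the map in which this relation is recorded

claim = AboutnessAscription(
    representation="Newton's law written on a piece of paper",
    referent="the orbit of the Earth around the Sun",
    ascribed_by="the reader's map",
)
print(claim)
```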
So you are saying that my definition, with which you’ve just agreed, is unreasonable. Pick something tangible.
(Also, please stop using “mind projection fallacy”, you are misapplying the term.)
How is that, precisely? My understanding is that it is mind projection when you mistakenly believe a property of an object to be intrinsic, rather than merely attributed.
I am pointing out that “aboutness” (whose definition I never agreed on, because you handwaved away the subject by saying it is I who should define it) is not an intrinsic property of isomorphic relationships.
Rather, it is a property being attributed to that relationship, a label that is being expressed in some map.
That sounds like a textbook case of the mind projection fallacy, i.e. “the error of projecting your own mind’s properties into the external world.”
(Prediction: your next reply will still not address this point, nor clarify your definition of “about”, but simply handwave again why it is that I am doing something else wrong. Anything but actually admitting that you have been using a mind-projecting definition of “about” since the beginning of this conversation, right up until the point where you ducked the question by asking me to taboo it, rather than defend the imprecise definition you’ve been using, or clear up any of the other handwaving you’ve been using to beg the question. I base this prediction on the rapid increase in non-responsive replies that, instead of defending the weak points of your position, represent attempts to find new ways to attack me and/or my position. A rationalist should be able to attack their own weak points, let alone defend them, without resorting to distraction, subject-changing, and playing to the gallery.)
There are natural categories, like “tigers”, that don’t require much of a mind to define. It’s not mind projection fallacy to say that something is a tiger.
P.S. I’m correcting my self-censoring threshold, so expect silence where before I’d say something for the fifth time.
Is that actually an argument? ’cause it sounds like a random sentence injected into the conversation, perhaps as an invitation for me to waste time tearing “natural categories” to shreds, while leaving you still able to deny that your statement actually relates in any substantial way to your point… thereby once again relieving you of any need to actually defend your position.
That is, are you actually claiming aboutness to be a natural category? Or just trying to get me to treat your argument as if you were doing so?
I already did and do expect it; see my “prediction” in the parent to your comment. I predicted that you would remain silent on any substantive issues, and avoid admitting anywhere where you were mistaken or incorrect. (I notice, for example, that you went back and deleted the comment where you said I was using “mind projection fallacy” incorrectly, rather than admit your attack was in error.)
And, as predicted, you avoided directly addressing the actual point of contention, instead choosing to enter a new piece of handwaving to imply that I am doing something else wrong.
That is, you appear to now be implying that I am using an overbroad definition of the MPF, without actually saying that I am doing it, or that your statement is in any way connected to your own position. This is a nice double bind, since either way I interpret the statement, you can retreat… and throw in more irrelevancies.
I don’t know if “troll” is a natural category, but you’re sure getting close to where I’d mind-project your behavior as matching that of one. ;-)
For the record, I thought it obvious that my argument above implied that I claim aboutness to be a natural category (although I’m not perfectly sure it’s a sound argument). I deleted my comment because I deemed it low-quality, before knowing you responded to it.
It’s not.
First, the only way it can be one is if “natural category” has the reductionist meaning of “a category based on distinctions that humans are biased towards using as discriminators”, rather than “a category that ‘naturally’ exists in the territory”. (Categories are abstractions, not physical entities, after all.)
And second, even if you do use the reductionist meaning of “natural category”, then this does not in any way undermine the conclusion that “aboutness” is mind projection when you omit the entity mapping that aboutness from the description.
In other words, this argument appears to result in only one of two possibilities: either “aboutness” is not a natural category per the reductionist definition, and thus inherently a mind projection when the attribution source is omitted, or “aboutness” is a natural category per the reductionist definition… in which case the attribution source has to be a human brain (i.e., in another map).
Finally, if we entirely reject the reductionist definition of “natural category”, then “natural category” is itself an instance of the mind projection fallacy, since the description omits any definition of for whom the category is “natural”.
In short, QED: the argument is not sound. (I just didn’t want to bother typing all this if you were going to retreat to a claim this was never your argument.)
Indeed. If this didn’t work then there wouldn’t be any practical point in modeling physics!
To the (unknowable*) extent that the portion of my map labelled “territory” is an accurate reflection of the relevant portion of the territory, do I get to say that my preferences are “about” the territory (implicitly including disclaimers like “as mediated by the map”)?
* due at the very least to Matrix/simulation scenarios
You can say it all you want, it just won’t make it true. ;-) Your preference is “about” your experience, just as the thermostat’s heating and cooling preferences are “about” the temperature of its sensor, relative to its setting.
For there to be an “about”, there has to be another observer, projecting a relationship of intention onto the two things. It’s a self-applied mind projection—a “strange loop” in your model—to assert that you can make such statements about your own preferences, like a drawing of Escher wherein Escher is pictured, making the drawing. The whole thing only makes sense within the surface of the paper.
(Heck, it’s probably a similar strange loop to make statements about one’s self in general, but this probably doesn’t lead to the same kind of confusion and behavioral problems that result from making assertions about one’s preferences… No, wait, actually, yes it does! Self-applied nominalizations, like “I’m bad at math”, are an excellent example. Huh. I keep learning interesting new things in this discussion.)
That’s one way of writing. Another is to edit what you intend to post before you click ‘comment’.
I feel your frustration, but throwing the word “magical” in there is just picking a fight, IMO. Anyway, I too would like to see P.J. Eby summarize his position in this format.
I have a certain technical notion of magic in mind. This particular comment wasn’t about frustration (some of the others were), I’m trying out something different of which I might write a post later.