Examples of the Mind Projection Fallacy?
I suspect that achieving a clear mental picture of the sheer depth and breadth of the mind projection fallacy is a powerful mental tool. It’s hard for me to state this in clearer terms, though, because I don’t have a wide collection of good examples of the mind projection fallacy.
In a discussion yesterday, we all had trouble finding actual examples of the mind projection fallacy. In the end, we came up with essentially two:
Taste. People frequently confuse “I like this” and “this is good.” (This really subsumes the attractiveness example.)
Probability. Treating probability as a property of the world rather than of one's state of information; this seems like a pretty good just-so story for where frequentist probability comes from, as opposed to Bayesian probability.
Searching for “mind projection fallacy” on Less Wrong, I also see:
Thinking that purpose is an inherent property of something, instead of it having been placed there by someone for some reason. (here)
Mulling or arguing over definitions to solve object-level problems. (actually, most of the ways words can be wrong sequence)
Young children commit the mind projection fallacy. If you introduce a puppet figure, Mary, who sees that a sweet is put under Box A, then take Mary away, move the sweet to Box B, and then ask the children where Mary will look for the sweet, very young children tend to say Box B. Only older children realize Mary would look under Box A, because she doesn't know the sweet has been moved.
This seems like a fertile source of examples—anywhere that ‘theory of mind’ is lacking. (I think that’s the right keyword to search for papers with.)
I wonder if the fact that it’s a puppet confounds this at all. If the child doesn’t realize that the puppet is meant to be a separate entity from the puppeteer who moved the sweet, their answer is correct. That said, I expect the experiment has been done with actual people, not just puppets—but if not, it’s something to look at.
Pointer to literature-n-keywords: Sally-Anne Test :-)
Thanks! I just did the experiment with my three-year-old. She didn’t pass, and she was quite confident in her wrong answer.
She interrupted the experiment twice. First at the very beginning, when she realized that poor Anne has no marbles, and went and brought her another one. We explained to her that in this story there is only one marble. Later she interrupted the play to give the marble back to its rightful owner. Right now, she is in the process of giving one marble (actually, Lego brick) to each of her dozens of plush toys.
Awwwwww!
Thanks for the link! I knew about the experiment but had forgotten its name. My attempts at search failed me.
Good good—that was my extremely vague recollection from having previously heard about such experiments, but I wasn’t the least bit confident in it.
A friend of mine once said that people are often astonished at how rich he must be. What actually happens is that he values things differently (e.g., he doesn't own a car, which is quite unusual in Stuttgart). So when people see the high-tech gadgets he buys, they think of all the other things they would buy before those, and then sum over that completely imagined spending.
Odd, human-centric example:
I used to think that everyone had the same favorite internal color-experience and we all just grew up calling the colors different names, blissfully unaware that your “red” is in fact my yellow, or your cousin’s green. After all, how could someone NOT like my favorite color as much as I did? Clearly, they all liked purple and just grew up calling it a different color...
It’s weird how I managed to both avert and run smack right into the mind projection fallacy in the same thought. I realized that everyone could, in theory, have a different internal experience and attach it to the same outer word or thing, and yet I still insisted that the “favorite” attribute was universal.
I don’t believe it anymore, but I still think about the mind projection fallacy in terms of it. There really are attributes for colors that are near-universal, for humans. Red has very good reasons for being associated with passion and aggressiveness, being the color of blood. But think if my pet theory had been true, and someone else experienced it as a calm sky blue? It wouldn’t BE calm for them—they’d have the same ingrained emotional reaction for it that I have for my version of red. So however much it feels like red is a passionate and aggressive color in and of itself, the passion and aggressiveness really only comes from me.
Incidentally, we can prove to some extent that different people do perceive colours differently. If you get a lamp producing a single-wavelength red light, another lamp producing a single-wavelength blue light, and a third lamp producing a single-wavelength violet light, then you can point the red and blue lamps at the same piece of white paper, and adjust their brightnesses until the combination looks just like the pure purple light. But then there will be people who disagree with you! They’ll think that you need more blue, or more red!
EDIT: The technical details above are wrong, it’s not possible to mix two pure wavelengths to match the colour of another pure wavelength. However there are multi-wavelength mixtures that look the same to one person but not to another.
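The corrected claim can be illustrated with a toy model. Everything below is made up for illustration: the Gaussian cone sensitivity curves and the observer peak wavelengths are not real physiological data. The point is only that a two-wavelength mixture can stimulate one observer's cones exactly like a single-wavelength target while a second observer, with slightly different cones, sees a mismatch.

```python
import math

def cone_response(peak_nm, wavelength_nm):
    # Toy Gaussian sensitivity curve (illustrative only, not real cone data).
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * 30 ** 2))

def perceived(cone_peaks, spectrum):
    # spectrum: list of (wavelength, intensity) pairs.
    # Returns the total response of each cone type to the combined light.
    return [
        sum(i * cone_response(peak, wl) for wl, i in spectrum)
        for peak in cone_peaks
    ]

# Two observers whose long-wavelength cone peaks differ slightly.
observer_a = (440, 540, 565)
observer_b = (440, 540, 575)

# A single-wavelength target, and a two-wavelength mixture whose
# intensities were solved by hand to match observer A's cone responses.
target = [(580, 1.0)]
mixture = [(560, 0.326), (600, 1.108)]

diff_a = max(abs(t - m) for t, m in zip(perceived(observer_a, target),
                                        perceived(observer_a, mixture)))
diff_b = max(abs(t - m) for t, m in zip(perceived(observer_b, target),
                                        perceived(observer_b, mixture)))
# diff_a is tiny (the lights are metamers for A);
# diff_b is not (B can tell them apart).
```

The "color" of the mixture, in other words, is a fact about the observer's cones, not the light.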
Neat! Link?
References are irritatingly difficult to find. Many papers mention this result or seek to explain it, but I just can’t find a reference to an actual experiment. This paper is close though: “Factors underlying individual differences in the color matches of normal observers”
Colors-as-near-universal-attributes is really a false claim. Consider examples of the varieties of color blindness, tetrachromacy, and cultures in which certain colors go by names that other cultures distinguish as being different. Your last paragraph seems to indicate that you still hold to the Mind Projection Fallacy which you had assumed to have overcome by realizing your favorite isn’t everyone’s favorite. Well, even their “blue” might be your “green”. Generally, this goes unnoticed because we tend to acculturate and inhabit more or less similar linguistic spaces.
When did I say that color was a near-universal attribute? I said that there were near-universal attributes associated with certain parts of the visible light spectrum, not that colors themselves were universal. You are right, though—for that claim to make sense, colors also have to be assumed to be near-universal. And near-universal is probably too strong a term to describe the kind of weak color associations I’m thinking of. Any studies showing such effects (like red and yellow being associated with hunger) were probably Western-culture-based and should be taken with a grain of salt and a Big Mac.
I do know about the examples to the contrary that you mentioned. Color perception can vary from person to person, and naming conventions for colors are REALLY not universal. However, notice how color blindness and tetrachromacy are considered exceptions to the norm. These exceptions are largely the reason I specified near-universal for humans rather than simply universal for humans. And while different cultures divide their bleggs and rubes by different rules, it does not diminish their ability to perceive the variations of shades within the individual blegg and rube bins.
Unlike color blindness, which will diminish that ability.
Here’s what indicated as much:
An “attribute for color” is not much different from showing that a name is an attribute for a color. Again, you were making the same mistake by thinking that a name for a color is an absolute. Definitely not the case, which you recognize:
To continue –
– I further pointed out that humans do not live in a mono-culture with a universal language that predetermines the arrangement of linguistic space in connection to perceived colors. That is the norm, such that the claim of near-universality does not apply. (And were such a mono-culture present, all it would take is a small deviation to accumulate to undermine it. Think of the Tower of Babel.)
The objection I posited covers all cases, even the exceptions. It’s really the mind-projection fallacy, such that one human regards their “normal” experience as the “normal” experience of “normal” humans, more or less.
Here’s one that comes to mind:
I really don’t know anything about baseball, so if I’m going to bet on either the Red Sox or the Yankees, I’d have to go fifty-fifty on it. Therefore, the chance that either will win is fifty percent.
(Right at the “therefore” is the fallacy put forward as a veritable property of either of the teams winning, when in fact it is merely indicative of the ignorance of the gambler. The actual probability is most likely not 50-50.)
EDIT: Others might enjoy reading this PDF (“Probability Theory as Logic”) for additional background and ideas. There you’ll also see a bon mot by Montaigne: “Man is surely mad. He cannot make a worm; yet he makes Gods by the dozen.”
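The point in the parenthetical can be made concrete: the 50-50 describes the ignorant bettor's information, and a bettor with more information assigns a different probability to the very same game. A minimal sketch, with an entirely made-up head-to-head record:

```python
# The ignorant bettor, knowing nothing about baseball, spreads belief evenly:
ignorant = {"Red Sox": 0.5, "Yankees": 0.5}

# A fan updates on a (hypothetical) record: Red Sox won 12 of the last
# 16 meetings. Laplace's rule of succession gives a simple Bayesian
# point estimate, (wins + 1) / (games + 2):
wins, games = 12, 16
p_red_sox = (wins + 1) / (games + 2)
informed = {"Red Sox": p_red_sox, "Yankees": 1 - p_red_sox}

# Same game, two different probabilities -- because each probability
# describes a bettor's state of knowledge, not a property of the teams.
```

Neither bettor is "wrong" given their information; the mistake is only in projecting the 50-50 onto the teams themselves.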
It’s surely a fallacy, but I’m not sure it’s the typical mind one.
“It’s either the typical mind fallacy, or it’s not. 50-50!”
EDIT: Somewhere between reading the post and clicking comment I seem to have switched from “mind projection” to “typical mind”. Darn: that makes it 33-33-33 instead.
Funny. I thought of pointing that out as well, but I thought it probably wasn’t worth mentioning.
As I’ve imagined it being said before: “I’m either a genius or I’m not. That’s a 50% chance of my being a genius. Just pray luck isn’t on my side!” :)
Perception, in particular of ambiguity. The Spinning Dancer is now my favorite visual example of Mind Projection: we attribute to the world “out there” properties that are in fact only “in there”.
The idea that morality is objective if it comes from a deity is a mind projection fallacy. It can take two forms:
1) For the average person: assuming that if you think following God is a good idea, and God says something is a good idea, then it must objectively be a good idea. When prodded for details of how “God says it” turns into “it is objectively right,” you will find these people often have only vague and plainly incorrect ideas (“If you don’t you’ll go to hell,” as if might makes right; “He created us,” as if creators have complete moral authority over their creations 100% of the time; etc.)
2) God is an all-powerful and omnipresent being, therefore, by magic, God’s subjective desires become objective desires.
Either God’s laws are good ideas in people’s heads or they’re good ideas in God’s head; neither makes them “objective,” and philosophers now accept that Divine Morality is a subjectivist theory of ethics.
Relatedly, the idea that something can be meaningful without a mind for it to be meaningful to. See William Lane Craig’s writings on Ultimate Meaning. Nothing can be meaningful to a rock. What many theists mean when they say God gives your life ultimate meaning is that it gives meaning subjective to God. Again, if you don’t care what God thinks—say you’re a paperclip maximizer and the only thing you care about is making paperclips—the heaven/hell endgame won’t be particularly meaningful to you.
Even in Goedel, Escher, Bach, when Hofstadter concludes that intrinsic meaning is possible in encoded messages, he downplays a bit the necessary caveat that he means it can only exist for minds like ours. If the universe had no minds, there would be no meaning.
The Labor Theory of Value is a mind projection fallacy. Value is something that can only exist in minds, and if everyone were to suddenly change their minds about that which is valuable, all the labor in the world used to produce a previously desired product wouldn’t mean a thing. Marxists recognize this, which is why they talk about “socially necessary labor-time,” a self-defeating addendum if there ever was one.
I know this isn’t typically a theology forum, but since we’re here.....
The counter-argument to this is that if there is an objective morality, then you could reasonably expect that an all-knowing God would know what it was. So when God (you believe) gives laws and tells you they apply universally, you might reasonably think they were objective, without necessarily knowing why.
Having said THAT, I’ve seen some theology textbooks that state that God has absolute freedom to make morality whatever he says it is, and if that’s not subjective I don’t know what is.
There is of course the argument that deities are mind projection fallacies in their entirety....
I’m also not sure about the idea that you need a mind in order to have meaning. If you make a robot that prefers to crawl towards lights to recharge itself through its solar panels, you’re making something on a continuum of more and more sophisticated feeding creatures, topped (arguably) by ourselves, who think that food is good and starvation, bad. Where does meaning begin? Arguably when you begin processing information—something sophisticated enough to be called a mind is not necessary to get started.
Fair enough; I was intending exactly a broad and unsophisticated definition of mind. An information-processing unit should be all that’s required. It does still put a damper on “universal meaning” or, in an argument I had with a theist a long time back, the idea that the rocks and the trees and “creation” in general “groaned” when Adam and Eve sinned—as if these objects could care about such a thing were they not possessed by pixies.
Well, yes.....
That having been said, the passage your friend was referring to (Romans 8 22) is basically saying that the difference between good and evil is a matter of life and death, not just for us, but for everything. And singularitarians around here tend to think something quite similar. One group think there is a good God, and the others are trying to make one....
Presumably, if God is omnipotent he has the power to transform something from being subjective to being objective. I mean, once you’re already breaking the rules of physics, the rules of logic aren’t too far away.
Huge pet peeve: you’re eliding the distinction between separate concepts in Marxian political economy. “Use value” refers to what you’re calling “value,” while “value” simpliciter refers to SNLT. Of course, if prior demand is a sufficiently poor estimator of future demand, then LTV ceases to be a useful simplification of reality—but that’s an empirical question.
Hmm.
Oops!
This is mostly an argument about definitions. If everyone’s minds were modified so that people started valuing the eating of babies, there would still be a clear sense in which it wouldn’t become a right thing. If you are talking about whatever most people value at a given time, then certainly that depends on what most people value at that time, and people’s minds are part of the definition that controls its meaning. If instead you form a rigid designator for whatever people currently value, it will still point to the same thing even if people in the future start valuing different things, and the minds of future people won’t be involved in the definition and won’t control its meaning.
The whole 2012 mythos is an example: the idea that a dead pre-technological civilization had a unique ability to forecast the coming apocalypse strikes me as an obvious case of the...
(puts on sunglasses)
...Mayan Projection Fallacy.
Arguments in philosophy along the lines of “so human language has existential commitment to {possible worlds, universals, types...}, hence they exist.”
Ontological Argument:
{X} is conceived of as perfectly {Y}.
To be perfectly {Y}, {X} must exist.
Therefore, {X} exists.
This is also reminiscent of Descartes’ cogito:
X cannot occur without Y. X occurs. Therefore, Y exists.
(X=thought; Y=a thinking thing)
Except that’s actually valid logic.
For all X, X implies Y; X, therefore Y.
As opposed to
For all X, Y is required of X; Y or not Y, therefore X.
<-- invalid logic is invalid.
Anytime you see a face (or any meaning) in a natural form: animals in clouds, faces in mountains, Jesus on toast. (8 years later, I know, but I was here reading the responses...)
The hope function is an interesting example where it’s very clear that many people fail to solve the problem correctly because they think the probabilities can’t change.
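For readers who haven't seen it, the structure can be sketched as a simple Bayesian search problem; the prior and the number of drawers below are assumed for illustration.

```python
def hope(prior, n_drawers, searched_empty):
    # P(object is in the desk at all | the first `searched_empty` of
    # `n_drawers` drawers were searched and found empty).
    # If the object is in the desk, each drawer is equally likely.
    p_evidence_given_in = (n_drawers - searched_empty) / n_drawers
    joint_in = prior * p_evidence_given_in
    return joint_in / (joint_in + (1 - prior))

# With a prior of 0.8 that a lost letter is somewhere in an 8-drawer
# desk, hope drops with each empty drawer -- the probability changes
# with every observation, even though nothing in the desk has changed.
hopes = [hope(0.8, 8, k) for k in range(8)]
```

The mind-projection mistake is insisting the probability "is" 0.8 throughout the search: the number was never in the desk, only in the searcher's head.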