Exploiting the Typical Mind Fallacy for more accurate questioning?
I was reading Yvain’s Generalizing from One Example, which talks about the typical mind fallacy. Basically, it describes how humans assume that all other humans are like them. If a person doesn’t cheat on tests, they are more likely to assume others won’t cheat on tests either. If a person sees mental images, they’ll be more likely to assume that everyone else sees mental images.
As I’m wont to do, I was thinking about how to make that theory pay rent. It occurred to me that this could definitely be exploitable. If the typical mind fallacy is correct, we should be able to have it go the other way; we can derive information about a person’s proclivities based on what they think about other people.
E.g., most employers ask “have you ever stolen from a job before,” and have to deal with misreporting because nobody in their right mind will say yes. However, imagine if the typical mind fallacy were correct. Employers could instead ask “what do you think the percentage of employees who have stolen from their job is?” and know that the applicants who responded higher than average were correspondingly more likely to steal, and the applicants who responded lower than average were less likely to steal. It could cut through all sorts of social-desirability distortion effects. You couldn’t get the exact likelihood, but it would give more useful information than a direct question would.
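The screening idea above can be sketched in a few lines. This is a hypothetical illustration only: the function name, and the choice of the group median as the reference point, are my assumptions, not an established instrument.

```python
from statistics import median

def risk_ranking(estimates):
    """Rank applicants by how far their estimate of theft prevalence
    sits above the group median. Under the typical-mind assumption,
    a larger positive deviation is treated as a weak signal of risk.

    estimates: dict mapping applicant -> estimated % of employees
    who have stolen from a job.
    """
    m = median(estimates.values())
    # Sort by deviation from the median, highest (riskiest) first.
    return sorted(estimates.items(), key=lambda kv: kv[1] - m, reverse=True)
```

Note that this only orders applicants relative to one another; as the post says, it yields no absolute likelihood of stealing.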
In hindsight, which is always 20/20, it seems incredibly obvious. I’d be surprised if professional personality tests and sociologists aren’t using these types of questions. My Google-fu shows no hits, but it’s possible I’m just not using the correct term that sociologists use. I was wondering if anyone had heard of this questioning method before, and if there’s any good research data out there showing just how much you can infer from someone’s deviance from the median response.
But you would also punish accurate guessers if the average guess is too low.
Indeed. As a result it might be rational to signal unfamiliarity with crime by pretending to believe it is less common than it actually is.
It happens all the time. Whenever you hear “I can’t even imagine why someone would do that!”, stop and ask whether it’s really that hard to imagine. Usually it isn’t.
You know, I used to get bothered when people would say that. You’ve helped me recognize that it’s just signaling that was going over my head.
But are you sure that you are not now falling for typical mind fallacy?
The very premise of your original post is that it is not all signaling; that there is a substantial number of the honest-and-naive folk who not only don’t steal but assume that others don’t.
Um… yes? The typical mind fallacy refers to thinking that other people act similarly to me, and I just mentioned I haven’t used that particular type of signaling nor noticed the signaling until jimmy pointed it out. In fact, I can’t even imagine why someone would use that form of signaling.
The second part of your statement seems to betray a misunderstanding of signaling. Honest folks signal. If you don’t steal, you also need to successfully signal that you don’t steal, otherwise nobody will know you don’t steal. Signaling isn’t just a special word for lying.
Commit the typical mind fallacy? I can’t imagine how anyone would do that...
To be honest, this is a perfect example of what is so off-putting about this community. This method is simply socially wrong: it works against both the people who stole and the people who have had something stolen from them, who get penalized for an honest answer and, if such methods become more widely employed, are now inclined to second-guess and say “no, I don’t think anyone steals” (and yes, this method is already employed to some extent, subconsciously at least). The idea parasitizes the social contract that is human language, with the honest naivete of the asocial. It’s as if a Dutch town were building a dike and someone suggested that anyone who needs materials for repairing their house should just take them from that weird pile in the sea. The only reason such a method can work is that others have been losing a little here and there to maintain the trust necessary for effective communication.
Real-life employment personality questionnaires are more subtle than this. They might ask things like, “Nobody could resist buying a stolen item they wanted if the price was low enough: Agree/Disagree”, or “Wanting to steal something is a natural human reaction if a person is treated unfairly: Agree/Disagree”.
That is, they test for thieves’ typical rationalizations, rather than asking straight-up factual questions. Xachariah’s example question isn’t a good use of typical-mind fallacy, because it doesn’t ask a purely theory-of-mind question.
Real personality tests don’t ask how likely someone is to steal, they ask (in effect), how justified they think someone else would be in stealing. The more things you consider justifiable reasons for stealing, the greater the odds you’ll personally find one. ;-)
In any event, they do exploit the typical mind fallacy, they just do so directly, by asking about what people think other people think, from the perspective of a potential thief. If a person is honest, then they must disagree that “nobody could resist”, because they are a strong example of somebody who could, whereas the thief thinks that everyone else is just like them, and has motivated-cognition reasons for wanting to agree.
Good point. More intricate questions like this, with the ‘nobody could resist’ wording, are also much fairer. Questions about what the person believes to be the natural human state are more dubious.
Can you clarify how “Wanting to steal something is a natural human reaction if a person is treated unfairly” is not a factual question? I mean, I suppose I can see quibbling over what “natural” means here, but I would likely unpack it as meaning, roughly, that the reaction is common among humans in typical scenarios involving typical unfairness. And, well, either wanting to steal something is a common reaction to such a scenario, or it isn’t… just like stealing is (or isn’t).
It seems all the same concerns arise about whether I should give the answer I consider most likely to be true, or the answer I consider most normative, or something else.
First, working against people who stole is an unalloyed good. It eliminates the deadweight loss of the thieves not valuing the goods as much as the customers would have, the reduced investment & profit from theft (which directly affects honest employees’ employment as it turns hiring into more of a lemon market), and redistributes money towards the honest employees who do not help themselves to a five-finger discount. Reducing theft is ceteris paribus a good thing.
This is nowhere close to being an argument that this is a bad thing because it hurts the honest people, because you have not shown that the harm is disproportionate: the honest people are already being harmed. Even if you had any sort of evidence for that, which of course you don’t, this would hardly be a ‘perfect’ example.
Second, these methods—particularly the Bayesian truth serum—work only over groups, so no individual can be reliably identified as a liar or truth-teller in real-world contexts since one expects (in the limit) all answers to be represented at some frequency. This leads to some inherent limits to how much truth can be extracted. (Example: how do you know if someone is a habitual thief, or just watches a lot of cynical shows like The Wire?)
I’m not quite sure what the essence of your disagreement is, or what bearing the fact that honest people are already being harmed has on the argument I made.
I’m not sure what you think my disagreement should have focused on. The technique outlined in the article can be effective and is used in practice, and there is no point in arguing that it is bad for the people employing it; I cannot make an argument that would convince an antisocial person not to use this technique. However, this technique depletes the common good that is normal human communication; it is an example of the tragedy of the commons: as long as most of us refrain from using this sort of approach, it can work for the few who do not understand the common good or do not care. Hence the Dutch dike example. This method is a social wrong, not necessarily a personal wrong; it may work well in one’s individual circumstances. Furthermore it is, in a certain sense that is normally quite obvious, unfair. (edit: It is nonetheless widely used, of course. Even with all the social mechanisms in the human psyche, the brain is a predictive system that will use any correlations it encounters, and people do often signal their pretend non-understanding. I suspect this silly game significantly discriminates against people with Asperger’s-spectrum disorders.)
edit: For a more conspicuous example of how predictive methods are not in general socially acceptable, consider that if I were to train a predictor of criminality on profile data complete with a photograph, the skin-albedo estimate from the photograph would be a significant part of the predictor, assuming the data originates in North America. As a matter of fact, I have some experience with predictive software of the kind that processes interview answers. Let me assure you, my best guess from briefly reading your profile is that you are not in the category of people who benefit from this software, which simply uses all the correlations it finds; pretty much all people who have non-standard interests and do not answer questions in the standard ways are penalized, and I do not think it would be easy to fake the answers beneficially without access to the model being used.
I think the heart of the disagreement is that your beliefs about communication are wildly divergent from my beliefs about communication. You mention communication as some sort of innately good thing that can be corrupted by speakers.
I’m of the school of thought that all language is about deception and signaling. Our current language is part of a never ending arms race between lies, status relationships, and the need to get some useful information across. The whole idea that you could have a tragedy of the commons seems odd to me. A new lying or lie detection method is invented, we use it until people learn the signal or how to signal around it, then we adopt some other clever technique. It’s been this way since Ogg first learned he could trick his opponent by saying “Ogg no smash!”
If you hold that language is sacred, then this technique is bad. If you hold that language is a perpetual arms race of deception and counter-deception, this is just another technique that will be used until it’s no longer useful.
I do not see why you are even interested in asking that sort of question if you have such a view: surely under such a view you will steal if you are sure you will get away with it, just as you would, e.g., try to manipulate and lie. edit: I.e., the premise seems incoherent to me. You need honesty to exist for your method to be of any value, and you need honesty not to exist for your method to be harmless and neutral. If language is all signaling, the most your method will do is weed you out at the selection process; you have yourself already sent the wrong signal that you believe language to be all deception and signaling.
Well, there are two different questions here… the first is what is in fact true about human communication, and the second is what’s right and wrong.
I might believe, as Xachariah does, that language is fundamentally a mechanism for manipulating the behavior of others rather than for making true statements about the world… and still endorse using language to make true statements about the world, and reject using language to manipulate the behavior of others.
Indeed, if I were in that state, I might even find it useful to assert that language is fundamentally a mechanism for making true statements about the world, if I believed that doing so would cause others to use it that way, even though such an assertion would (by my hypothetical view) be a violation of moral principles I endorse (since it would be using language to manipulate the behavior of others).
Arms races waste utility. If you defect in the Prisoner’s Dilemma, then no matter what your opponent does, the sum of your and your opponent’s utilities will be lower than if you’d cooperated. (For example, if the payoffs are (1,1) (3,0) (0,3) (2,2), then the sum goes either from 4 to 3, or from 3 to 2.) You can view cooperators as those who create value, though not necessarily for themselves, and defectors as those who destroy value, though not for themselves. So it might make sense to consider the commons sacred, and scold those who abuse it.
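The arithmetic in the payoff example above can be checked mechanically. A toy sketch using exactly the payoffs given (the dictionary layout and function name are my own):

```python
# Payoff matrix from the example above:
# (my move, opponent's move) -> (my payoff, opponent's payoff)
# "C" = cooperate, "D" = defect
PAYOFFS = {
    ("C", "C"): (2, 2),  # mutual cooperation
    ("C", "D"): (0, 3),  # I cooperate, opponent defects
    ("D", "C"): (3, 0),  # I defect, opponent cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def total_utility(my_move, opp_move):
    """Sum of both players' payoffs for one round."""
    return sum(PAYOFFS[(my_move, opp_move)])
```

Against a cooperator, defecting drops the combined total from 4 to 3; against a defector, from 3 to 2, matching the sums in the comment.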
Promoting defection also makes sense in situations where being seen to promote defection rather than cooperation earns me status within the community (e.g., it seems cool, or seems clever, or seems contrarian, or what-have-you), and I believe that promoting defection does not significantly affect utility otherwise (e.g., I don’t believe that anyone I care about might ever be in a prisoner’s dilemma where the results actually depend in any way on the stuff I promote now).
A fundamental technique in this arms race is sincerely believing your own lies, so that the lying is no longer conscious. So given the OP’s proposal, which incentivizes conscious lying, people feel uncomfortable.
The thing that can be corrupted is the general level of trust in conversation—if you’re not lying and you’re reasonably certain the other person also isn’t lying (consciously), you can relax (consciously), and that feels good. So we don’t like proposals for new social techniques or conventions where to get ahead (be hired) we would have to lie more than we think we do now.
Thinking about this, I’m curious about the last part.
Naively, it seems to me that if I’m being evaluated by a system and I know that the system penalizes respondents who have non-standard interests and do not answer questions in the standard ways, but I don’t have access to the model being used, then if I want to improve my score what I ought to do is pretend to have standard interests and to answer questions in standard ways (even when such standard answers don’t reflect my actual thoughts about the question).
I might or might not be able to do that, depending on how good my own model of standard behavior is, but it doesn’t seem that I would need to know about the model being used by the evaluators.
What am I missing?
Reflexive knowledge of the standard answers themselves.
Do you think the majority of this community approves of the described method? Or are you put off just by having a discussion about the method, even if people mostly rejected it in the end?
It too closely approximates the way the herein proposed unfriendly AI would reason—get itself a goal (predict stealing on job for example) and then proceed to solving it, oblivious to the notion of fairness. I’ve seen several other posts over the time that rub in exactly same wrong way, but I do not remember exact titles. By the way, what if I am to use the same sort of reasoning as in this post on the people who have a rather odd mental model of artificial minds (or mind uploaded fellow humans) ?
Can you clarify? I am not sure what odd mental models and, more generally, situations you have in mind.
The basic point is still worthwhile: predictions about other people also reveal something about yourself.
The naive implementation could end up punishing honest participants, but slightly more sophisticated methods promote honesty. Bayesian truth serum (mentioned a couple times already) does this through payments to and from survey participants. Payments can also be socially inappropriate, so finding more practical and acceptable methods is an active area of research.
Can you expand on what you consider the terms of the social contract that is human language?
People lie. Getting them to reveal their lies is wrong? It seems like the only bad outcome is people just lying in a different way.
Her point was that honest people who know that many people do steal would be penalized.
I guess something similar is already used by people (consciously or unconsciously), and that honest people with exceptional knowledge/experience already are penalized. Reading this article may help them recognize why, and reduce the penalty.
Perhaps it is those who set such tests who should learn
Actually, that’s Yvain’s post, not mine...
Yep! This is actually a standard method: ask people to estimate what they think other people do. A version of this is the ‘Bayesian truth serum’ trick.
The ‘truth serum’ property of the method is only proved for infinite populations. Intuitively it seems quite clear to me that for small populations, the method can be gamed easily. Do you know of any results on the robustness of the method regarding population size when there is incentive to mislead?
Prelec’s formal results hold for large populations, but the method held up well experimentally with 30–50 participants.
Witkowski and Parkes develop a truth serum for binary questions with as few as 3 participants. Their mechanism also avoids the potentially unbounded payments required by Prelec’s BTS. Unfortunately, the WP truth serum seems very sensitive to the common-prior assumption.
Wait, wait, let me understand this. It’s the robust knowledge aggregation part that held up experimentally, not the truth serum part, right? In this experiment the participants had very few incentives to game the system, and they didn’t even have a full understanding of the system’s internals. In contrast, prediction markets are supposed to work even if everybody tries to game them constantly.
Manipulability is addressed experimentally in a different working paper. The participants weren’t told the internals and the manipulations were mostly hypothetical, but honesty was the highest scoring strategy in what they considered.
In some sense, it’s easy to manipulate BTS to give a particular answer. The only problem is you might end up owing the operator incredibly large sums of money. If payments to and from the mechanism aren’t being made, BTS is worthless if people try to game it. I should have a post up shortly about a better mechanism.
No. In one of the posts or papers, I know I saw some comments or discussion that it can be deceived (and so you wouldn’t necessarily want to explain the procedure), but the obvious way doesn’t work.
Whoops, edited.
And that’s exactly the search term I was missing. Good to see it is a real thing.
Great insight! Unsurprisingly, you’re not the first. To my knowledge though, this method doesn’t have a standard name and isn’t prevalent. Predictions about others might give more information, but are still manipulable and hard to interpret when comparing respondents to each other. Did this person say lots of others cheat because they cheat or because they are bad with probabilities?
Alternatively, if you have a question with a single underlying answer, predictions about opinions are potentially useful for filtering out bias. This is the idea behind Prelec’s Bayesian truth serum. Respondents maximize their payments from the system by being honest, and the group with the highest average scores tends be correct.
Or because they’d spent time around cheaters who talked about it?
I wonder what sort of answer a competent forensic accountant would give.
See http://measureofdoubt.com/2011/06/16/bayesian-truth-serum/
Also, a “rent-paying” belief in EY’s sense does not necessarily give rise to useful applications, it just has observational consequences, i.e. predicts something different than its negation.
Thank you for the link.
Rent-paying is more that I try to feel how the world would be if it were true, then feel how the world would be if it were false and see which one feels closer to reality.
Useful applications aren’t necessary, but they’re nice bonuses when they exist. Whether the application works is a quick check on the theory. It’s a free, ready-made test.
This wouldn’t reward people who don’t steal because they don’t want to, but only those so naive that the thought of stealing doesn’t occur to them.
I’m not too comfortable with the idea of rewarding ignorance and punishing cynicism.
Worse than that, if most people happened to believe fewer others steal than is true, it would punish people for knowing the truth.
An example of this used as a textbook signal for abusive relationships is that people who frequently accuse partners of being unfaithful without evidence are generally those who are cheating themselves or have cheated in the past.
I would guess that people who were cheated on by their former partners also assign an increased prior probability to someone cheating on them.
That’s generally the problem with this type of reasoning. A strategy used to find a perpetrator by exploring familiarity with the crime, may also find a victim. If you overestimate the probability of people stealing, you may be a thief… or a victim of theft.
Is it true that people who frequently accuse partners of infidelity have been significantly less faithful than people who don’t?
Talmud
This seems like a ridiculously transparent move. If I knew no psychological theory whatsoever, it would still be obvious that “What do you think the percentage of employees who have stolen from their job is?” is a question primarily designed to test how likely I am to steal (and also possibly testing how likely I am to trust other employees at my future job).
I would try to figure out the lowest number I can write down without it being obvious that it’s the lowest number I can write down. Of course I am still going off of my experience here to estimate what sort of numbers are low. But at that point you are rewarding people for being good at lying.
I think negative 12% of all employees steal.
I remember reading a livejournal post by a guy (diagnosed with Asperger’s Syndrome) about coming up against these kind of questions in job interviews. The explanation on that post is that people who have imaginary peer pressure against stealing are less likely to steal.
http://bradhicks.livejournal.com/144425.html
An employer asking either of those questions of me would drastically increase the chance that I would steal from them. Priming, conveying that the norm for people in my situation is some amount of stealing, etc. At very least it would prompt me to speculate on how best to steal if I happened to want to do so.
I believe it was mentioned in one of my psychology courses’ modules on questionnaire design, but I can’t remember what, if any, technical name it had.
I remember reading that this effect is indeed used on real employment tests, but I don’t remember where or if it was an authoritative source.
I’ve had questions like this: “Do you agree or disagree with the following statement: Most people have stolen something during their adult lives”.
I wonder how (if at all) answers to that correlate with liability to steal things from one’s employer. We’ve already got a mechanism by which people who do it would be more likely to say “agree”. But it could just as well go the other way: a very scrupulous person who would never bring home a ballpoint pen that might belong to her employer because she’d consider that theft is probably more likely to agree, but less likely to steal.
Me too—my father thought it was a trap to detect dishonest answers, but from what I’ve read, giving the more absolute answers (for example, saying that procedures should always be followed) tends to get you a “better” score. If you want to game the test, pretend you’re a naive robot who never, ever does anything even remotely bad, even if nobody actually acts like that.
See also.
I was given a test like this once in a somewhat similar context, and I asked the woman giving the test if they were going to reward the answers that most closely matched accurate descriptive statements about the community under discussion, or reward the answers that most closely matched conventional prescriptive statements about it. She won points with me by thinking about that for a moment, unsuccessfully suppressing a grin, and asking “Does the fact that I’m not answering your question answer your question?”
I took the test on a computer, so I think the actual employees in the room (mostly cashiers and such) wouldn’t have known how the test was graded either.
(nods) I really just asked the question to be snarky; it was pretty clear from context they wanted the latter.
Did you win?
There wasn’t much at stake that I cared about. But I entertained myself, which is always a win in my book.
Should I read your link or will I just be exposing myself to made-up unresearched advice?
Probably the latter.
I might be remembering this wrong, but I read somewhere that if you want to get someone’s opinion about something, you should ask them what their friends think about the topic. The reasoning was that people will quite obviously try to guess your password if you ask them a question directly, but their close friends are much more likely to be closer to their true opinion than they let on, so you should ask them what their friends think about the topic. I can’t find where I read this at so take it with a grain of salt (anyone with better Google-fu able to find what I’m talking about?).
If true, this would seem to be a not-as-fallacious application of the typical mind fallacy.
It should work even better than that. I also seem to recall reading that people consistently overestimate how much they have in common with their friends (which is a useful cognitive bias for social bonding).
The method will work until it becomes broadly applied and thus well known. After that the correlation would disappear. (Related to Goodhart’s law.)
In the meantime, the group who profits are not the non-thieves in general, but the non-thieves who are most biased in the typical mind fallacy direction. I disapprove of methods which reward bias.
The method itself is dishonest communication. Universally applied it would be a Nash equilibrium (at least until Goodhart’s law strikes back) since each employer individually is better off applying the method, but dishonest communication is less effective and thus the situation would be Pareto suboptimal.
Obviously, none of the above means that your suggested method doesn’t work, but nevertheless I wish people don’t use it.
What do you mean by “the exact likelihood”?
I would suggest that an answer to “what do you think the percentage of employees who have stolen..” is a proxy question for, “just exactly how socially unacceptable to you is stealing from your employer”? It relates to, basically, your own levels of altruism, and what you perceive the local altruism levels to be. If you see that everyone around you is being altruistic, you feel a basic urge to keep the clean environment up, while if everyone around you is cheating, then you are less likely to keep up your own altruism.
I’ve had my bike stolen a few times. After getting a particularly nice bike stolen, I now am always on the lookout for unlocked bikes, and when I see one, the urge is most definitely there to grab it - ‘retribution’, if you will, for my own stolen bike. I don’t go through with it, but the possibility is a lot more present in my mind than if I had never had a bike stolen from me.
It’s not retribution if its not the person who stole your bike.
I imagine that’s why brilee puts it in scare quotes, and also why s/he doesn’t actually steal bikes.
It might get a bit suspicious if you ask people entirely about what other people think. You’d have to mix it with more conventional “dummy” questions.
Assuming, of course, that this hypothesis is true. The great thing is that it’s easily testable.
“Anything is easy if you’re not the one that has to do it.” Claiming something is easy, without giving an actual means of doing it, is a cheap rhetorical trick, one of the “dark arts”.
Give a number of test subjects a questionnaire with this question and a number of distraction questions. Then ask them to wait a while before the next stage of the experiment. While they are waiting give them an opportunity to steal something. Compare answers to behavior.