I’m trying to track down a fallacy or effect that was once explained to me and which I found plausible: the idea that whoever has the more complex and detailed mental model of the topic under discussion wins the argument, independent of the actual truth of the matter (and assuming no malicious intent).
The example cited, as I remember it, was about visual (microscope) inspection of blood samples for some boolean factor (present or not). Two people got the same samples and were trained to recognize the factor: one was always told the truth, and the other was lied to a certain fraction of the time. After the learning period both had to decide on the factor of some samples together. The result: even though the person who had been lied to had the less accurate model, he almost always dominated the decision.
The offered explanation was that the lied-to candidate had the more complex model (it somehow had to incorporate factors accounting for the lies), and that this gave him a larger supply of arguments (criteria to look for that supposedly explained the difference) which could be used to convince the other person, despite the falsity of those arguments.
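To make the “more complex model” point concrete, here is a minimal sketch of my own (an analogy, not the original study): if we model each trainee as a decision-tree learner, the one trained on partially flipped (“lied-to”) labels typically ends up with a larger tree and yet lower accuracy on a clean test set. The synthetic dataset, the 20% lie rate, and the choice of learner are all assumptions made purely for illustration.

```python
# Analogy sketch (assumed setup, not the original experiment):
# trainee A learns from correct labels, trainee B from labels that
# were flipped 20% of the time ("lied to").
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trainee A: always told the truth.
truthful = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Trainee B: lied to on roughly 20% of the training samples.
y_lied = y_train.copy()
flip = rng.random(len(y_lied)) < 0.2
y_lied[flip] = 1 - y_lied[flip]
lied_to = DecisionTreeClassifier(random_state=0).fit(X_train, y_lied)

# The lied-to tree typically needs more leaves (a "more complex model")
# to accommodate the false labels, while scoring worse on clean data.
for name, model in [("truthful", truthful), ("lied-to", lied_to)]:
    print(name,
          "leaves:", model.get_n_leaves(),
          "test accuracy:", round(model.score(X_test, y_test), 3))
```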
Problem is: I can’t find any studies or the like supporting this. Do you know of such a model-strength effect? I think it is quite relevant, as it seems to be behind the ability of liars or rhetoricians to convince an audience by making up complex and impressive structures independent of their truth (the truth just has to be unavailable enough).
Anecdotally, I have observed this. Whoever is more invested in a more-or-less casual argument (say, on the internet) can muster more facts and win. Even when the evidence is radically one-sided, it doesn’t matter if the person on the right side of the facts doesn’t know the evidence well. Any of us could lose an argument over whether the Earth is flat, if the difference in preparedness were great enough.
Perhaps something like the representativeness heuristic? While more details make something sound more believable, each detail is another thing that could be incorrect.
Looks to be a subtype of the general observation that whoever can establish her authority in an argument wins.
Yes. Call it authority or dominance or whatever.
In all cases where significant loss can be avoided by backing down early, this is again exploitable by, e.g., boasting, aggression, rhetoric, or intimidation.
The interesting sub-case here is that this can have side effects even when it is not actively exploited but happens accidentally, since the net effect is still that the team reaches a sub-optimal joint result.
Kind of a cognitive bias, more like over-confidence, where the result is a lack of communication of confidence.
I don’t know. In more general terms, Alice spent more resources (time, effort) analyzing the problem and so feels more qualified than Bob, who spent fewer. In this particular artificial setup this leads to suboptimal results, but I suspect that in most real-life situations Alice would have better opinions/solutions/forecasts than Bob and so should have an advantage in a disagreement.
So I find there’s one place this frequently comes up detrimentally in real life: the advocate of something has invariably spent more time studying it than the opponent. This creates (to my mind) an unhealthy bias in the advocate’s favor in some situations.