Arguing logically works on a much smaller proportion of the populace than generally believed. My experience is that people bow to authority and status and only strive to make it look like they were convinced logically.
That’s basically right, but I’d like to expand a bit.
Most people are fairly easily convinced “by argument” unless they have a status incentive not to agree. The problems here are that 1) people very often have status reasons to disagree, and 2) people are usually so bad at reasoning that, in the absence of the first problem, you can find an argument to convince them of anything. It’s not quite that they don’t “care” about logical inconsistencies; rather, they are bad at finding them because they don’t build concrete models, and it’s easy enough to find a path along which they raise no objection. (Note that when you point inconsistencies out, they have status incentives not to listen, and it will come across as though they don’t care; really, they just care less than they care about the perceived status loss.)
The people I have the most productive conversations with are good at reasoning, but more importantly, when faced with a choice between interpreting something as a status attack or as a helpful correction, they treat keeping the peace and learning as the status-raising move whenever possible. They also try to frame their own arguments in ways that minimize the perceived status threat, enough that their conversation partner will interpret them as helpful. This way, productive conversation can be a stable equilibrium in the presence of status drives.
However, unilaterally adopting this strategy doesn’t always work. If you are on the blunt side of the spectrum, the other party can feel threatened enough to make discussion impossible, even after backing up n meta levels. If you’re on the walking-on-eggshells side, the other party can interpret it as license to take the status high ground, give bad arguments, and dismiss yours. Going to more extreme efforts not to project status threats only makes the problem worse, since (in combination with not taking offense) it is interpreted as submission. It’s like unconditional cooperation. (This appears to be exactly what is happening with the Muehlhauser-Goertzel dialog, by the way, though the bitterness hints that he still perceives SIAI as a threat; just a threat he is winning a battle with.)
I have a few thoughts on potential solutions (and have had some apparent success), but they aren’t well developed enough to be worth sharing yet.
So yes, all the real work is done in manipulating perceptions of status, but it’s more complicated than “be high status”: it’s getting them to buy into a frame where they are higher status when they agree, or at least don’t lose status.
I fully agree in the context of longer interactions or multiple interactions.
Roughly speaking, from what I can tell, it is generally believed that it works on 10% of the populace but really it works on less than 1%.
and 99% believe they are in the 1%.
I’m in the 1% that don’t think they are in the 1%. (It seems we have no choice but to be in one of the arrogant categories there!)
I usually get persuaded not so much by logic (because logical stuff I can think of already, and quite frankly I’m probably better at it than the arguer) but by being given information I didn’t previously have.
It is still flawed logic if the new data requires you to go substantially outside your estimated range rather than narrow down your uncertainty. (edit: and especially so if you don’t even have a range of some kind).
E.g., we had an unproductive argument about whether a random-ish AGI ‘almost certainly’ just eats everyone. It’s not that I have some data showing it is almost certain not to eat everyone; it’s that you shouldn’t have this sort of certainty about such a topic. It’s fine if your estimated probability distribution centres there; it’s not fine if it is ultra-narrow.
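A minimal sketch of the narrow-versus-wide point, assuming a Beta-Bernoulli framing and entirely made-up parameter values (neither appears in the thread): both priors below share the same mean, but only the weakly held one can actually be moved by a new observation.

```python
# Two illustrative Beta priors over p = "fraction of random-ish AGIs that just eat
# everyone". Both are centred at the same mean (0.999); one is weakly held, the
# other ultra-narrow. The parameter values are made up for this sketch.

def beta_mean(a, b):
    return a / (a + b)

priors = {
    "wide":   (9.99, 0.01),    # roughly 10 pseudo-observations of evidence
    "narrow": (9990.0, 10.0),  # roughly 10,000 pseudo-observations of evidence
}

# Treat one new counter-argument/datum as a single Bernoulli observation against
# "eats everyone" (conjugate update: a failure adds 1 to b).
for name, (a, b) in priors.items():
    print(f"{name:6s} prior mean {beta_mean(a, b):.4f} -> posterior mean {beta_mean(a, b + 1):.4f}")

# wide   prior mean 0.9990 -> posterior mean 0.9082   (the estimate can actually move)
# narrow prior mean 0.9990 -> posterior mean 0.9989   (essentially immune to evidence)
```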
I gave nothing to indicate that this was the case. While the grandparent is more self-deprecating, on behalf of both myself and the species, than it is boastful, your additional criticism doesn’t build upon it. The flaw you perceive in me (along the lines of disagreeing with you) is a different issue.
I have a probability distribution, not a range. Usually not a terribly well-specified probability distribution, but that is the ideal to be approximated.
No. Our disagreement was not one of me assigning too much certainty. The ‘almost certainly’ was introduced by you, applied to something that I state has well under an even chance of happening. (Specifically, regarding the probability of humans developing a worse-than-just-killing-us uFAI in the near vicinity of an FAI.)
You should also note that there is a world of difference between near-certainty about what kind of AI will be selected and an unspecified level of certainty that the overwhelming majority of AGI goal systems would result in them killing us. The difference is akin to having 80% confidence that 99.9999% of the balls in the jar are red. Don’t conflate that with 99.9999% confidence. They are indicators of entirely different assigned probability distributions.
In general, meta-criticisms of my reasoning that are founded on a specific personal disagreement with me should not be expected to be persuasive. Given that you already know I reject the premise (that you were right and I was wrong in some past dispute), why would you expect me to be persuaded by conclusions that rely on that premise?
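To make the jar analogy concrete, here is a minimal sketch in Python; the 50% red figure for the alternative composition is purely a hypothetical placeholder, not a number proposed anywhere in the thread.

```python
# Illustration of "80% confident the jar is 99.9999% red" vs "99.9999% confident of
# drawing red". The alternative composition (50% red) is a made-up placeholder.

p_mostly_red        = 0.80    # credence that the jar really is 99.9999% red
nonred_if_mostly    = 1e-6    # non-red fraction under that hypothesis
nonred_if_alternate = 0.5     # hypothetical non-red fraction otherwise

p_draw_nonred = p_mostly_red * nonred_if_mostly + (1 - p_mostly_red) * nonred_if_alternate
print(p_draw_nonred)          # ~0.1: about a 10% chance of non-red, not 0.0001%
```

Under these assumptions the marginal chance of a non-red draw is dominated by where the remaining 20% of credence sits, not by the 99.9999% figure.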
Nah, my argument was “Well, the crux of the issue is that the random AIs may be more likely to leave us alone than near-misses at FAI.” By the ‘may’, I meant there is a notable probability that far less than 99.9999% of the balls in the jar are red, and consequently a far greater than 0.0001% probability of drawing a non-red ball.
edit: Furthermore, suppose we have a jar with 100 balls in which we know there is at least one blue ball (near-FAI space), and a huge jar with 100,000 balls about which we don’t know very much, and which has a substantial probability of having a larger fraction of non-red balls than the former jar.
edit: also, re probability distributions, that’s why I said a “range of some sort”. Humans don’t seem to actually do the convolutions and the like on probability distributions when thinking.
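Putting rough numbers on the two-jar comparison above, and reading the known blue ball simply as a guaranteed non-red ball; the credences assigned to the unknown jar below are invented purely for illustration.

```python
# Toy version of the two-jar comparison. The small jar's bound follows from the stated
# "at least one blue ball in 100"; the credences for the huge jar are made up.

small_jar_nonred_floor = 1 / 100          # at least one non-red ball out of 100

# Hypothetical credences over the huge jar's composition:
scenarios = [
    (0.30, 0.10),   # 30% credence: 10% of its balls are non-red
    (0.70, 0.00),   # 70% credence: essentially none are
]
huge_jar_nonred_expected = sum(p * frac for p, frac in scenarios)

print(small_jar_nonred_floor)      # 0.01
print(huge_jar_nonred_expected)    # 0.03 -- larger than the small jar's floor,
                                   # under these made-up numbers
```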
Robin just had an interesting post on this: http://www.overcomingbias.com/2012/03/disagreement-experiment.html