[...] you could prove that (A ⇒ B) and (B ⇒ C) and (C ⇒ D) and (D ⇒ F) Justice would nod its head and agree, but then, when you turned to claim your coup de grâce, A ⇒ F irrevocably, Justice would demur and revoke the axiom of transitivity, for Justice will not be told when F stands for freedom.
I think Justice really, really should let emself be told when F stands for freedom.
It seems to me Assange is more or less saying that he will follow logical steps only as far as they lead to a conclusion he likes. Am I the only one reading him this way?
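The transitivity the quoted passage appeals to is uncontroversial for material implication. As a sketch, a brute-force check over every truth assignment (the `implies` helper is my own illustration, not anything from the essay):

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q fails only when p holds and q does not
    return (not p) or q

# ((a -> b) and (b -> c)) -> (a -> c) holds under every truth assignment,
# which is what licenses chaining A => B => ... => F into A => F.
assert all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product([False, True], repeat=3)
)
```

So within classical logic there is no assignment on which the chain holds but the chained conclusion fails; refusing the conclusion means rejecting a premise, not the inference rule.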
Transitivity is evoked when Justice imagines F and finding the dream a pleasurable one sets about gathering cushions to prop up their slumber.
This sounds like searching for arguments to a foregone conclusion.
Here then is the truth about the Truth; the Truth is not bridge, sturdy to every step, a marvel of bound planks and supports from the known into the unknown, but a surging sea of smashed wood, flotsam and drowning sailors.
This reminds me of a guy who, having decisively lost an argument to me, resorted to saying, “consistency is overrated”. He’d rather hold two mutually exclusive ideas, and acknowledge that as fact, than change his mind.
I think Justice really, really should let emself be told when F stands for freedom. It seems to me Assange is more or less saying that he will follow logical steps only as far as they lead to a conclusion he likes. Am I the only one reading him this way?
It took me a while to figure this out, but Assange isn’t talking about improving his own model of reality; by my reading he’s more or less given up on that. He’s talking about ways of convincing people that he’s right, and accepts logic only in the service of that goal.
Specifically, he’s saying that reductionist arguments are unconvincing when trying to change minds, and that it works better to raise such a pedestal under the ultimate aim of your argument that your audience will do the hard work of building an inductive chain for you.
From this I suspect that Assange hasn’t recently spent much time trying to prove things to people who don’t already think he’s a rock star. He describes a rather effective way of exploiting halo effects, but that only works when there’s a halo to exploit: either Assange’s personal halo (probably more likely), or one around a shared ideology or goal. Try that trick with someone who accepts neither, and they’re more likely to laugh you off as a deluded hippie than to blithely construct an argument for you.
The entire post leaves a bad taste in my mouth.
Try that trick with someone who accepts neither, and they’re more likely to laugh you off as a deluded hippie than to blithely construct an argument for you.
But is logical reasoning any more likely to work in this case (when arguing with a person who isn’t exceptionally rational)?
Usually. There are other exploits that would work better, though; the point I was trying to make is that Assange’s recommendation relies entirely on having a large pool of positive affect that you can entangle with whatever statement you’re trying to prove. There’s still a term for that kind of entanglement in the effectiveness function for reductionist arguments, but it’s considerably less important.
That makes more sense than my reading, and is more likely what he meant.
you could prove that (A ⇒ B) and (B ⇒ C) and (C ⇒ D) and (D ⇒ F) Justice would nod its head and agree, but then, when you turned to claim your coup de grâce, A ⇒ F irrevocably, Justice would demur and revoke the axiom of transitivity, for Justice will not be told when F stands for freedom.
I think Justice really, really should let emself be told when F stands for freedom.
Since we overestimate the strength of conjunctions, transitive chains may be weaker than they appear. So unless the issue is entirely clear-cut, it’s reasonable for people to fail to accept A ⇒ F. (Of course, it is true that ideally a rational person would at least consider A ⇒ F and adjust probabilities accordingly.)
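The weakness of long chains is easy to quantify. A minimal sketch, where the 0.9 per-link confidence and the independence assumption are my own illustration rather than anything the commenter specified:

```python
# Suppose each link in A => B => C => D => F holds with probability 0.9.
# Treating the links as independent (a simplifying assumption), the
# chained claim A => F inherits only the product of the link confidences.
link_confidence = 0.9
chain_length = 4  # the four links: A->B, B->C, C->D, D->F
chain_confidence = link_confidence ** chain_length
print(round(chain_confidence, 3))  # 0.656 -- weaker than any single link
```

So even when every individual step looks solid, the conjunction of four of them can be barely better than a coin flip, which is some justification for discounting A ⇒ F more than any one link.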
Transitivity is evoked when Justice imagines F and finding the dream a pleasurable one sets about gathering cushions to prop up their slumber.
This sounds like searching for arguments to a foregone conclusion.
True. But it also sounds like the gathering of evidence using emotional tags. Direct evidence, in some areas, overwhelmingly beats a transitive chain. So although the evidence is not being gathered evenhandedly by Justice, there is a justification for this manner of thinking. I do think the “gathering [of] cushions to prop up a slumber” is adaptive and a fair representation of how people think.
What I found interesting about this blog post is that a successful person who has tried to persuade others of his political ideas has identified models and strategies for persuasion that strongly mirror the LW posts I have read.
I suppose there are far superior guides to persuasion with actual empirical evidence. Admittedly, those are more appropriate for Less Wrong. You people probably already find LW resonances in much of what you read anyway.