I think Justice really, really should let emself be told when F stands for freedom. It seems to me Assange is more or less saying that he will follow logical steps only as far as they lead to a conclusion he likes. Am I the only one reading him this way?
It took me a while to figure this out, but Assange isn’t talking about improving his own model of reality; by my reading he’s more or less given up on that. He’s talking about ways of convincing people that he’s right, and accepts logic only in the service of that goal.
Specifically, he’s saying that reductionist arguments are unconvincing when trying to change minds, and that it works better to raise such a pedestal under the ultimate aim of your argument that your audience will do the hard work of building an inductive chain for you.
From this I suspect that Assange hasn’t recently spent much time trying to prove things to people who don’t already think he’s a rockstar. He describes a rather effective way of exploiting halo effects, but that only works when there’s a halo to exploit: either Assange’s personal halo (probably more likely), or one around a shared ideology or goal. Try that trick with someone who accepts neither, and they’re more likely to laugh you off as a deluded hippie than to blithely construct an argument for you.
But is logical reasoning any more likely to work in this case (when arguing with a person who isn’t exceptionally rational)?
Usually. There are other exploits that would work better, though; the point I was trying to make is that Assange’s recommendation relies entirely on having a large pool of positive affect that you can entangle with whatever statement you’re trying to prove. There’s still a term for that kind of entanglement in the effectiveness function for reductionist arguments, but it’s considerably less important.
The entire post leaves a bad taste in my mouth.
That makes more sense than my reading, and is more likely what he meant.