I used to habitually leave debates more sure of my position than when I went in—yet this can’t possibly be right, unless my opposition were so inept as to argue against their own position.
This isn’t quite right—for example, the more I search and find only bad arguments against cryonics, the more evidence I have that the good arguments just aren’t out there.
If all you did was argue with stupid people, you would become erroneously self-confident. Also, two people who argued and didn’t convert each other would both walk away feeling better about their own positions. Something seems wrong here. What am I missing? Doesn’t this only make sense if some weight had been attached to your opponent’s argument beforehand and then got detached during the argument?
Apply Bayes’ theorem. P(you don’t find a good argument among stupid people | there is a good argument) is high. P(you don’t find a good argument when you’ve made a genuine effort to scour high and low | there is one) is lower. Obviously, the existence or otherwise of a good argument is only indirect information about the truth, but it still helps.
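To put rough numbers on this, here is a minimal sketch in Python; the figures are invented purely for illustration and aren’t from the discussion. H stands for “a good argument against my position exists” and E for “I searched and found none”; the only thing that differs between the two cases is P(E | H), the chance of missing a good argument given that one exists.

    # Minimal Bayes' theorem sketch with made-up numbers.
    # H: "a good argument against my position exists."
    # E: "I searched and found no good argument."

    def posterior(prior_h, p_e_given_h, p_e_given_not_h=1.0):
        """P(H | E) via Bayes' theorem."""
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        return p_e_given_h * prior_h / p_e

    prior = 0.5  # start undecided about whether a good counter-argument exists

    # Searching only among weak opponents: even if a good argument exists,
    # you would probably still miss it, so P(E | H) is high and E barely moves you.
    print(posterior(prior, p_e_given_h=0.9))  # ~0.47

    # Scouring high and low: if a good argument existed, you would probably
    # have found it, so P(E | H) is low and E moves you a lot.
    print(posterior(prior, p_e_given_h=0.2))  # ~0.17

The same evidence, “no good argument found”, shifts the posterior far more when the search was thorough, which is exactly the asymmetry described above.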
Of course, it seems that no two people can disagree without both coming away with the strong feeling that their own argument was clearly the stronger; that must be borne in mind when weighing this evidence, but it’s evidence all the same.
You’re right that it’s not as simple as that—if you set out to talk to idiots, you may well find that you can demolish all of them—but what if you search out the strongest arguments from the best-qualified and most intelligent proponents, and they’re rubbish? And they still persistently get cited as the best arguments, in the face of all criticism? That’s fairly strong evidence that the field might be bogus.
(Obvious example: intelligent design creationism. It’s weaksauce religion and incompetent science.)
But why does dealing with intelligent design increase your probability in the alternative? Why were you assigning weight to intelligent design?
This isn’t meant to be nitpicky. I suppose the question behind the question is this: When dividing up probability mass for X, how do you allot P(~X)? Do you try divvying it up amongst competing theories or do you simply assign it to ~X?
For some reason I thought that divvying it up amongst competing theories was Wrong. Was this foolish of me?
Not much weight—it increased slightly when I saw it proposed, and decreased precipitously when I saw it refuted.
Well, it has to be divvied up. It’s just that there are so many theories encompassed by ~X that it isn’t easy to calculate the contribution to any specific one of them, except when the network of hypotheses is already pretty clear.
Not to be a chore, but can you explain why?
Your probabilities must sum to 1. If you reduce the probability assigned to one theory, the freed probability mass must flow into the other theories to preserve that sum.
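Here is a minimal sketch of that in Python, with invented numbers; the theory names are just placeholders borrowed from the example above. The theories are treated as mutually exclusive and exhaustive, so when evidence pushes one down, renormalising spreads its probability mass over the rest and the total stays at 1.

    # Minimal sketch, with made-up numbers, of probability mass flowing
    # between mutually exclusive theories when one is down-weighted.

    def update(priors, likelihoods):
        """Posterior over mutually exclusive, exhaustive theories."""
        joint = {t: priors[t] * likelihoods[t] for t in priors}
        total = sum(joint.values())
        return {t: joint[t] / total for t in joint}

    priors = {"evolution": 0.70, "intelligent design": 0.10, "other": 0.20}

    # Seeing intelligent design thoroughly refuted: a low likelihood for it,
    # and (for simplicity) the same likelihood for everything else.
    likelihoods = {"evolution": 1.0, "intelligent design": 0.01, "other": 1.0}

    print(update(priors, likelihoods))
    # {'evolution': ~0.78, 'intelligent design': ~0.001, 'other': ~0.22}
    # The mass that left "intelligent design" did not vanish; it was spread
    # over the remaining theories so the total still sums to 1.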
But why are we assigning probability across a spectrum of competing theories? I thought we were supposed to be assigning probability to the theories themselves.
In other words, P(X) is my best guess at X being true. P(Y) is my best guess at Y being true. In the case of two complex theories trying to explain a particular phenomenon, why does P(X) + P(Y) + P(other theories) need to equal 1?
Or am I thinking of theories that are too complex? Are you thinking of X and Y as irreducible and mutually exclusive objects?
...yes? It’s not a matter of complexity, though; the problem you might be alluding to is that the groups of theories we describe when we enunciate our thoughts can overlap.
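A minimal sketch of that last point, with invented names and numbers: if the fine-grained possibilities underneath are mutually exclusive and exhaustive, they must sum to 1, but two overlapping groups of theories built out of them need not, because the theories they share get counted twice.

    # Minimal sketch: only a mutually exclusive, exhaustive partition must sum to 1.
    # Overlapping groups of theories need not, since shared members are double-counted.

    # Four fine-grained, mutually exclusive possibilities (names are placeholders).
    worlds = {"A": 0.40, "B": 0.30, "C": 0.20, "D": 0.10}
    assert abs(sum(worlds.values()) - 1.0) < 1e-9  # a proper partition sums to 1

    # Two broad, overlapping groups of theories, as described in ordinary language.
    X = {"A", "B", "C"}
    Y = {"B", "C", "D"}

    def p(group):
        return sum(worlds[w] for w in group)

    print(p(X), p(Y), p(X) + p(Y))  # ~0.9  ~0.6  ~1.5 -- more than 1, yet no contradiction
    print(p(X | Y) + p(X & Y))      # ~1.5 -- inclusion-exclusion accounts for the overlap

So P(X) + P(Y) + P(other theories) only has to equal 1 when X, Y, and “other” carve up the possibilities without overlap, which is the mutually exclusive reading above.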