There is an interesting variation of rule 1 (truth-seeking). According to common understanding, this rule seems to imply that if we argue for X, we should only do so if we believe X to a degree above 50%. Similarly for rule 3. But recently a number of philosophers have argued that (at least in academic contexts) you can argue for interesting hypotheses without believing them. This is sometimes called “championing”, and it is described as a form of epistemic group-rationality: the idea that individually irrational arguments can sometimes be group-rational.
The idea is that truth-seeking is viewed as a competitive-collaborative process, one that benefits when people specialize in certain outsider theories and champion them. In some contexts it is fairly likely that some outsider theory is true, even though each individual outsider theory has a much lower probability than the competing mainstream theory. If everyone argued for the most likely (mainstream) theory, there would be too little “intellectual division of labor”; hardly anyone would bother arguing for individually unlikely theories.
(This recent essay might be interpreted as an argument for championing.)
It might be objected that championers should be honest and report that they find the interesting theory they champion ultimately unlikely to be true. But this could have bad effects on the group’s truth-seeking process: why should anyone feel challenged by someone advocating a provocative hypothesis when the advocate themselves doesn’t believe it? The hypothesis would lose much of its provocativeness, those challenged wouldn’t really feel challenged, and it wouldn’t encourage fruitful debate.
(This can also be viewed as a solution to the disagreement paradox: Why could it ever be rational to disagree with our epistemic peers? Shouldn’t we average our opinions? Answer: Averaging might be individually rational, but not group-rational.)
Being an “argument for” is anti-inductive: an argument stops working, in either direction, once it’s understood. You believe what you believe, at whatever level of credence you happen to have. You can make arguments. Others can change either their understanding or their beliefs in response. These things don’t need to be related. And there is nothing special about 50%.
I don’t get what you mean. If you argue for X but don’t believe X, it would seem something is wrong, at least from the perspective of individual rationality. For example, you argue that it is raining outside without believing that it is raining outside. This could e.g. be classified as lying (deception) or bullshitting (you don’t care about the truth).
What does “arguing for” mean? There’s an expectation that a recipient will change their mind in some direction. This expectation goes away for a given argument once it’s been considered, whether or not it had that effect. Repeating the argument won’t create an expectation of changing the mind of a person who already knows it, in either direction, so the argument is no longer an “argument for”. This is what I mean by anti-inductive.
Suppose you don’t believe X, but someone doesn’t understand an aspect of X, such that you expect that understanding it would increase their belief in X. Is this an “argument for” X? Should it be withheld, leaving the other person’s understanding avoidably lacking?
Here is a proposal: A argues with Y for X iff A 1) claims that Y, and 2) claims that Y is evidence for X, in the sense that P(X|Y) > P(X|¬Y). The latter can be considered true even if you already believe Y.
I agree, that’s a good argument.
The best arguments confer no evidence; they guide you in putting together the pieces you already hold.
Yeah, aka Socratic dialogue.
Alice: I don’t believe X.
Bob: Don’t you believe Y? And don’t you believe If Y then X?
Alice: Okay I guess I do believe X.
The point is, conditional probability doesn’t capture the effect of arguments.
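To make the Socratic point concrete, here is a minimal sketch (my own illustration with made-up numbers, not something from the thread): Bob’s “argument” supplies no new evidence; it only exposes a coherence constraint among credences Alice already holds.

```python
# Toy illustration of an argument that adds no evidence but still moves a belief:
# it points out a coherence constraint among credences the listener already has.

def entailment_lower_bound(p_y: float, p_y_implies_x: float) -> float:
    """Lower bound on P(X) forced by P(Y) and P(Y -> X).

    Y together with (Y -> X) entails X, so P(X) >= P(Y and (Y -> X)),
    and by the Frechet inequality P(Y and (Y -> X)) >= P(Y) + P(Y -> X) - 1.
    """
    return max(0.0, p_y + p_y_implies_x - 1.0)

# Alice's pre-argument credences (hypothetical numbers):
p_y = 0.9             # "I believe Y"
p_y_implies_x = 0.95  # "I believe: if Y then X"
p_x_reported = 0.3    # what she says about X before putting the pieces together

bound = entailment_lower_bound(p_y, p_y_implies_x)
print(f"Coherence forces P(X) >= {bound:.2f}; Alice reported {p_x_reported}")
# Bob's argument conveys no new data; it shows that 0.3 is incoherent with
# credences Alice already accepts, which is why she revises her belief in X.
```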
It seems that arguments provide evidence, and Y is evidence for X if and only if P(X|Y) > P(X|¬Y). That is, when X and Y are positively probabilistically dependent. If I think that they are positively dependent, and you think that they are not, then this won’t convince you of course.
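As a sanity check on that definition, here is a small sketch (my own, using an illustrative joint distribution rather than real data) that tests whether Y is evidence for X in exactly this sense:

```python
# Check the proposed evidence relation: Y is evidence for X iff P(X|Y) > P(X|not-Y),
# i.e. X and Y are positively probabilistically dependent.

def is_evidence_for(joint: dict[tuple[bool, bool], float]) -> bool:
    """joint maps (x, y) -> probability; returns True iff P(X|Y) > P(X|not-Y)."""
    p_y = joint[(True, True)] + joint[(False, True)]
    p_not_y = joint[(True, False)] + joint[(False, False)]
    p_x_given_y = joint[(True, True)] / p_y
    p_x_given_not_y = joint[(True, False)] / p_not_y
    return p_x_given_y > p_x_given_not_y

# Illustrative numbers only: X = "it is raining", Y = "the street is wet".
joint = {
    (True, True): 0.28,   # raining, street wet
    (True, False): 0.02,  # raining, street dry
    (False, True): 0.10,  # not raining, street wet
    (False, False): 0.60, # not raining, street dry
}
print(is_evidence_for(joint))  # True: P(X|Y) ≈ 0.74 > P(X|¬Y) ≈ 0.03
```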
Belief is a matter of degree. If someone else thinks it’s 10% likely to be raining, and you believe it’s 40% likely to be raining, then we could summarize that as “both of you think it’s not raining”. And if you share some of your evidence and reasoning for thinking the probability is more like 40% than 10%, then we could maybe say that this isn’t really arguing for the proposition “it’s raining”, but rather the proposition “rain is likelier than you think” or “rain is 40% likely” or whatever.
But in both cases there’s something a bit odd about phrasing things this way, something that cuts a bit skew to reality. In reality there’s nothing special about the 50% point, and belief isn’t binary. So I think part of the objection here is: maybe what you’re saying about belief and argument is technically true, but it’s weird to think and speak that way, because in fact the cognitive act of assigning 40% probability to something is very similar to the act of assigning 60% probability to it, and the act of citing evidence for rain when you hold the former belief is often identical to the act of citing evidence for rain when you hold the latter.
The issue for discourse is that beliefs do come in degrees, but this feature is lost when expressing them. Declarative statements are mostly discrete. (Saying “It’s raining outside” doesn’t communicate how strongly you believe it, beyond that it’s more than 50% -- but again, the fan of championing will deny even that in certain discourse contexts.)
Talking explicitly about probabilities is a workaround, a hack where we still make binary statements, just about probabilities. But talking about probabilities is kind of unnatural, and people (even rationalists) rarely do it. Notice how both of us have made a lot of declarative statements without indicating our degrees of belief in them. The best we can do, without using explicit probabilities, is to use qualifiers like “I believe that”, “It might be that”, “It seems that”, “Probably”, “Possibly”, “Definitely”, “I’m pretty sure that”, etc. See https://raw.githubusercontent.com/zonination/perceptions/master/joy1.png
Examples of truth-seeking making you “give reasons for X” even though you don’t “believe X”:
- Everyone believes X is 2% but you think X is 15% because of reasons they aren’t considering, and you tell them why (see the toy calculation after this list)
- Everyone believes X is 2%. You do science and tell everyone all your findings, some of which support X (and some of which don’t support X).
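A toy arithmetic sketch of that first example (the numbers are mine, chosen only for illustration): evidence the group isn’t considering, worth a likelihood ratio of roughly 8.6, moves a 2% prior to about 15%, so you end up giving reasons for X while still thinking X is probably false.

```python
# Bayes' rule in odds form: private evidence with likelihood ratio ~8.6
# moves a 2% prior to roughly 15%.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability from a prior and a likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

print(posterior(0.02, 8.6))  # ≈ 0.149: you now argue X is ~15%, not that X is true
```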
We should bet on moonshots (low chance, high EV). This is what venture capitalists and startup founders do. I imagine this is what some artists, philosophers, comedians, and hipsters do as well, and I think it is truth-tending on net.
But I hate the norm that champions should lie. Instead, champions should only say untrue things if everyone knows the norms around that. Like lawyers in court, comedians on stage, or all fiction.
Yeah, championing seems to border on deception, bullshitting, or even lying. But the group-rationality argument says that it can be optimal when a few members of a group “over-focus” (from an individual perspective) on an issue. These considerations pull in different directions.
I think people can create an effectively unlimited number of “outsider theories” if they aren’t concerned with how likely they are. Do you think ALL of those should get their own champions? If not, what criteria do you propose for which ones get champions and which don’t?
Maybe it would be better to use a frame of “which arguments should we make?” rather than “which hypotheses should we argue for?” Can we just say that you should only make arguments that you think are true, without talking about which camp those arguments favor?
(Though I don’t want to ban discussions following the pattern “I can’t spot a flaw in this argument, but I predict that someone else can, can anyone help me out?” I guess I think you should be able to describe arguments you don’t believe if you do it in quotation marks.)