“Not having a side” doesn’t have to mean being unable to argue a side; it can instead mean being able to argue several different sides. If I can do that, then if someone insists that I argue a position I can pick one and argue it, even knowing perfectly well that I could just as readily argue a conflicting position.
In the real world, knowing that there are several different plausible positions is actually pretty useful. What I generally find professionally is that there’s rarely “the right answer” so much as there are lots of wrong answers. If I can agree with someone to avoid the wrong answers, I’m usually pretty happy to accept their preferred right answer even if it isn’t mine.
When you can equally well argue that X is true, and that X is false, it means that your arguing is quite entirely decoupled from truth, and as such, both of those arguments are piles of manure that shouldn’t affect anyone’s beliefs. It is only worth making one such argument to counter the other, for the sake of keeping the undecided audience undecided. Ideally, you should instead present a very strong meta-level argument that the argumentation you would be able to make for either side is complete nonsense.
(Unfortunately that gets both sides of the argument pissed off at you.)
When you can equally well argue that X is true, and that X is false, it means that your arguing is quite entirely decoupled from truth
That’s not actually true.
In most real-world cases, both true statements and false statements have evidence in favor of them, and the process of assembling and presenting that evidence can be a perfectly functional argument. And it absolutely should affect your belief: if I present you with novel evidence in favor of A, your confidence in A should increase.
Ideally, I should weigh my evidence in favor of A against my evidence in favor of not-A and come to a decision as to which one I believe. In cases where one side is clearly superior to the other, I do that. In cases where it’s not so clear, I generally don’t do that.
Depending on what’s going on, I will also often present all of the available evidence. This is otherwise known as “arguing both sides of the issue” and yes, as you say, it tends to piss everyone off.
Let me clarify with an example. I ran 100 tests of a new drug against placebo (in each test I had 2 volunteers), and I got very lucky and got an exactly neutral result: in 50 of them the drug performed better than placebo, while in the other 50 it performed worse.
I can construct an ‘argument’ that the drug is better than placebo by presenting the data from the 50 cases where it performed better, or construct an ‘argument’ that the drug is worse than placebo by presenting the data from the 50 cases where the placebo performed better. Neither ‘argument’ should sway anyone’s belief about the drug in any direction, given perfect knowledge of the process that led to the acquisition of that data, even if the data from the other 50 cases has been irreversibly destroyed and is not part of that knowledge (it is only known that there were 100 trials and that 50 outcomes were destroyed because they didn’t support the notion that the drug is good; the actual 50 outcomes are not available). That’s what I meant. Each 50-case data set tells absolutely nothing about the truth of drug > placebo by itself; with perfect knowledge of the extent of the cherry-picking it is weak evidence that the effects of the drug are small, and it only sways opinion on “is the drug better than placebo” if there is a false belief about the degree of cherry-picking.
Furthermore, unlike the mathematics of decision theory, qualitative comparison of two verbal arguments by their vaguely determined ‘strength’ yields complete junk unless the strengths differ very significantly (edit: which happens when one argument is actually good and the other is junk), due to cherry-picking as in the example above.
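To make the cherry-picking point concrete, here is a minimal sketch in Python (an editor’s illustration, not part of the original exchange). It assumes a uniform Beta(1, 1) prior on q, the probability that the drug beats placebo in a single paired trial, and contrasts a selection-aware update with the naive one.

```python
# Editor's sketch: a Bayesian reading of the drug example above.
# Assumption: q ~ Beta(1, 1) prior, q = P(drug beats placebo in one paired trial).
from scipy.stats import beta

# Full record: 100 paired trials, 50 favorable to the drug, 50 unfavorable.
full_posterior = beta(1 + 50, 1 + 50)

# Selection-aware reading of the cherry-picked presentation: we are shown only
# the 50 wins, but we also know the other 50 trials were discarded *because*
# they were not wins. That knowledge reconstructs the full tally, so the
# posterior is identical to the one from the complete data.
selection_aware_posterior = beta(1 + 50, 1 + 50)

# Naive reading: treating the presented 50 wins as if they were all the data,
# i.e. holding a false belief about the degree of cherry-picking.
naive_posterior = beta(1 + 50, 1 + 0)

print(full_posterior.mean(), full_posterior.interval(0.95))
# ~0.5 and roughly (0.40, 0.60): weak evidence that the effect is small, and
# no push toward either "drug is better" or "drug is worse".
print(naive_posterior.mean())
# ~0.98: the subset moves belief only if the selection process is misjudged.
```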
Sure. More generally, in cases where evaluating all the evidence I have leads me unambiguously to conclude A, and then I pick only that subset of the evidence that leads me to conclude NOT A, I’m unambiguously lying. But in cases like that, the original problem doesn’t arise… picking a side to argue is easy: I should argue A. That’s very different (at least to my mind) from the case where I’m genuinely ambivalent between A and NOT A but am expected to compellingly argue some position rather than asserting ignorance.
Well, in the drug example there is no unambiguous conclusion, and one can be ambivalent, yet it is a lie to ‘argue’ A or not-A, and confusing to argue both rather than to integrate it all into the conclusion that both the pro and the anti arguments are complete crap (it IS the case in the drug example that both the pro and the anti data, even taken alone in isolation, shouldn’t update the beliefs, i.e. shouldn’t be effective arguments). The latter, though, really pisses people off.
It is usually the case that while you can’t be sure of the truth value of the proposition, you can be pretty darn sure that the arguments presented aren’t linked to it in any way. But people don’t understand that distinction, and really don’t like it if you attack their argument rather than their position. In both sides’ eyes you’re just being a jerk who doesn’t even care who’s right.
All the while, the positions are wrong statistically half of the time (counting both sides of the argument once), while the arguments are flawed far more than half of the time. Even in math, if you just guess at the truth value of, say, Fermat’s Last Theorem using a coin flip, you have a 50% chance of being wrong about the truth value, but if you were to make a proof, you would have something like a 99% chance of entirely botching the proof, unless you are real damn good at it. And a botched proof is zero evidence. If you know the proof is botched (or if you have a proof of the opposite that also passes your verification, implying that your verification is botched), it’s not weak Bayesian evidence about any truth values; it’s just data about human minds, language, fallacies, etc.
Agreed that evaluating the relevance of an argument A to the truth-value T of a proposition doesn’t depend on knowing T.
Agreed that when people are invested in a particular value of T and present A to justify it, pointing out that A isn’t actually relevant to T generally pisses them off.
Agreed that if T is binary, there are more possible As unrelated to T than there are wrong possible values for T, which means my chances of randomly getting a right answer about T are higher than my chances of randomly constructing an argument that’s relevant to T. (But I’ll note that not all interesting T’s are binary.)
in the drug example there is no unambiguous conclusion, and one can be ambivalent,
This statement confuses me.
If I look at all my data in this example, I observe that the drug did better than placebo half the time, and worse than placebo half the time. This certainly seems to unambiguously indicate that the drug is no more effective than the placebo, on average.
Is that false for some reason I’m not getting? If so, then I’m confused.
If that’s true, though, then it seems my original formulation applies. That is, evaluating all the evidence in this case leads me unambiguously to conclude “the drug is no more effective than the placebo, on average”. I could pick subsets of that data to argue both “the drug is more effective than the placebo” and “the drug is less effective than the placebo” but doing so would be unambiguously lying.
Which seems like a fine example of “in cases where evaluating all the evidence I have leads me unambiguously to conclude A, and then I pick only that subset of the evidence that leads me to conclude NOT A, I’m unambiguously lying.” No? (In this case, the A to which my evidence unambiguously leads me is “the drug is no more effective than the placebo, on average”.)
Non-binary T: quite so, but it can be generalized.
If I look at all my data in this example, I observe that the drug did better than placebo half the time, and worse than placebo half the time. This certainly seems to unambiguously indicate that the drug is no more effective than the placebo, on average.
But would it seem that way if it had been 10 trials, 5 wins and 5 losses? It just gives some evidence that the effect is small. If the drug is not some homoeopathy that’s pure water, you shouldn’t privilege zero effect. Exercise for the reader: calculate the 95% CI for 100 placebo-controlled trials.
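As a worked answer to that exercise (an editor’s sketch; the Wilson score interval is my choice of method, not something specified in the comment), here is the 95% interval for 50 “drug beat placebo” outcomes out of 100 paired trials:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 gives ~95%)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# 50 "drug beat placebo" outcomes out of 100 paired trials.
low, high = wilson_ci(50, 100)
print(f"95% CI for P(drug beats placebo): [{low:.3f}, {high:.3f}]")
# -> roughly [0.404, 0.596]: consistent with no effect (0.5), but also with
#    modest effects in either direction.
```

This matches the point above: the 50/50 split is evidence that the effect is small, not proof that it is exactly zero.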
Ah, I misunderstood your point. Sure, agreed that if there’s a data set that doesn’t justify any particular conclusion, quoting a subset of it that appears to justify a conclusion is also lying.
Well, the same should apply to arguing a point when you could just as well have argued the opposite with the same ease.
Note, as you said:
In most real-world cases, both true statements and false statements have evidence in favor of them
and I made an example where both the true and the false statement got “evidence in favor of them”: 50 trials one way, 50 trials the other way. Each of those bodies of evidence is a subset of the evidence that appears to justify a conclusion, and presenting it is a lie.
You are absolutely correct. Point taken.
If I can do that, then if someone insists that I argue a position I can pick one and argue it, even knowing perfectly well that I could just as readily argue a conflicting position.
This is the part that I can’t do. It’s almost like I can’t argue for stuff I don’t believe because I feel like I’m lying. (I’m also terrible at actually lying.)
I figured out a long time ago that I don’t like lying. As a result, I constructed some personal policies to minimize the amount of lying I would need to do. In that, we most likely are the same. However, I also practiced the skill enough that when a necessity arose I would be able to do it right the first time.