At the risk of harping on what is, after all, a major theme of this site, we do in fact have one -- it’s called Bayesianism.
What should a debate look like? Well, here is how I think it should begin, at least. (Still waiting to see how this will work, if Rolf ever does decide to go through with it.)
In fact, let’s try to consider your example from a Bayesian perspective:
(A) Bush was really awful.
(B) You’re just saying that because you’re a liberal, and liberals hate Bush.
Now, of course, you’re right that (B) “doesn’t address” (A) -- in the sense that (A) and (B) could both be true. But suppose instead that the conversation proceeded in the following way:
(A) Bush was really awful.
(B’) No he wasn’t.
In this case (B’) directly contradicts (A), which is about the most extreme form of “addressing” there is. Yet this hardly seems like an improvement.
The reason is that, at least for Bayesians, the purpose of such a conversation is not to arrive at logical contradictions; it’s to arrive at accurate beliefs.
You’ll notice, in this example, that (A) itself isn’t much of an argument; it just consists of a statement of the speaker’s belief. The actual implied argument is something like this:
(A1) I say that Bush was really awful.
(A2) Things I say are likely to be true.
(A3) Therefore, it is likely that Bush was really awful.
The response,
(B) You’re just saying that because you’re a liberal, and liberals hate Bush.
should in turn be analyzed like this:
(B1) You belong to a set of people (“liberals”) whose emotions tend to get in the way of their forming accurate beliefs.
(B2) As a consequence, (A2) is likely to be false.
(B3) You have therefore failed to convince me of (A3).
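To make the effect of (B1)-(B3) concrete, here is a toy numerical version of the update. All of the numbers are invented purely for illustration; the point is only the shape of the calculation:

```python
# Toy Bayesian update: how much should hearing (A) move my belief?
prior = 0.5                      # P(Bush was really awful) before hearing (A)

# If (A2) holds, the speaker mostly asserts true things:
p_assert_if_true = 0.9           # P(speaker asserts (A) | (A) is true)
p_assert_if_false = 0.2          # P(speaker asserts (A) | (A) is false)

# If (B1) holds, the speaker would assert (A) almost regardless of the truth:
p_assert_if_true_biased = 0.9
p_assert_if_false_biased = 0.85

def posterior(prior, p_e_if_h, p_e_if_not_h):
    """Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)."""
    num = p_e_if_h * prior
    return num / (num + p_e_if_not_h * (1 - prior))

print(posterior(prior, p_assert_if_true, p_assert_if_false))                # ~0.82
print(posterior(prior, p_assert_if_true_biased, p_assert_if_false_biased))  # ~0.51
```

With a trustworthy speaker, the assertion is substantial evidence; if (B1) is right, the same assertion barely moves the needle -- which is exactly what (B3) is saying.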
So, why are political arguments dangerous? Basically, because people tend to say (A) and (B) (or (A) and (B’)) -- which are widely recognized tribal-affiliation signals -- rather than (A1)-(A3) and (B1)-(B3), at which point the exchange of words becomes merely a means of acting out standard patterns of hostile social interaction. It’s true that (A) and (B) have the Bayesian interpretations (A1)-(A3) and (B1)-(B3), but the habit of interpreting them that way is something that must be learned (indeed, here I am explaining the interpretation to you!).
...
I probably should have inserted the word “practical” in that sentence. Bayesianism would seem to be formalized, but how practical is it for daily use? Is it possible to meaningfully (and with a reasonable degree of observable objectivity) assign the values needed by the Bayesian algorithm(s)?
More importantly, perhaps, would it be at least theoretically possible to write software to mediate the process of Bayesian discussion and analysis? If so, then I’m interested in trying to figure out how that might work. (I got pretty hopelessly lost trying to do explicit Bayesian analysis on one of my own beliefs.)
The process I’m proposing is designed specifically to be manageable via software, with as few “special admin powers” as possible.
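For concreteness, here is a minimal sketch (in Python, with all names hypothetical) of the kind of record such software might keep for each claim and counterargument. It is only meant to show that the structure is simple enough to manage mechanically:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    """One node in the debate: a claim, or a counterargument to its parent."""
    author: str
    text: str
    ruled_invalid: bool = False               # set by community ruling, not by admins
    counters: List["Argument"] = field(default_factory=list)

    def add_counter(self, counter: "Argument") -> None:
        """Attach a counterargument aimed specifically at this node."""
        self.counters.append(counter)

# Example:
claim = Argument("woozle", "Bush was really awful.")
claim.add_counter(Argument("reader", "You provide no evidence for this claim."))
```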
...
“Bush was really awful” was intended more as an arbitrary “starter claim” for me to use in showing how rational debate on political topics becomes “politicized” than as an argument I would expect to be persuasive.
If a real debate had started that way, I would have expected the very first counterargument to be something like “you provide no evidence for this claim”, which would then defeat the claim until I provided some evidence -- which might itself then become the subject of further counterarguments, and so on.
In this structure, “No he wasn’t.” would not be a valid counterargument -- but it does highlight the fact that the system will need some way to distinguish valid counterarguments from invalid ones; otherwise it could degenerate into someone posting “fgfgfgfgf” as an argument, and the system wouldn’t know any better than to accept it.
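As a rough sketch of the “defeated until answered” rule, and of where the valid/invalid distinction would plug in, something like this is what I have in mind (again, an illustrative sketch rather than a spec):

```python
def is_defeated(arg: dict) -> bool:
    """An argument is defeated if at least one of its counterarguments is
    'live': not ruled invalid, and not itself defeated further down the tree."""
    return any(
        not counter.get("ruled_invalid", False) and not is_defeated(counter)
        for counter in arg.get("counters", [])
    )

# The starter claim, countered by "no evidence", which is in turn answered:
claim = {
    "text": "Bush was really awful.",
    "counters": [
        {
            "text": "You provide no evidence for this claim.",
            "counters": [{"text": "Here is some evidence: ...", "counters": []}],
        },
        {"text": "fgfgfgfgf", "ruled_invalid": True, "counters": []},
    ],
}

print(is_defeated(claim))  # False: the one live counterargument has itself been answered
```

The nonsense post only stops mattering once it has been ruled invalid, which is where the voting mechanism below comes in.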
I’m thinking that the solution might be some kind of voting system (like karma points, but more specific) in which a supermajority can rule that an argument is invalid, with some consequence for the arguer’s ability to participate further if they post too many arguments that are ruled invalid.
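Something along these lines, perhaps; the two-thirds threshold and the three-strikes limit below are arbitrary placeholders, not a worked-out proposal:

```python
SUPERMAJORITY = 2 / 3      # placeholder threshold for ruling an argument invalid
MAX_STRIKES = 3            # placeholder limit before participation is restricted

strikes = {}               # author -> number of their arguments ruled invalid

def ruled_invalid(votes_invalid: int, votes_valid: int) -> bool:
    """True if a supermajority of voters has ruled the argument invalid."""
    total = votes_invalid + votes_valid
    return total > 0 and votes_invalid / total >= SUPERMAJORITY

def may_still_participate(author: str, was_ruled_invalid: bool) -> bool:
    """Record the ruling and report whether the author keeps posting rights."""
    if was_ruled_invalid:
        strikes[author] = strikes.get(author, 0) + 1
    return strikes.get(author, 0) < MAX_STRIKES

# Example: the "fgfgfgfgf" post is voted down 9 to 1, and its author earns a strike.
verdict = ruled_invalid(votes_invalid=9, votes_valid=1)       # True
allowed = may_still_participate("troll_account", verdict)     # True (strike 1 of 3)
```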