The claim that “Republicans are slightly better” was considerably more defensible in 2000, especially since much of the relevant information only became widely available later. Still, I would be interested in hearing a defense of that statement, if Eliezer still believes it.
(Belated worry: what is the site policy on political discussion? Is it still discouraged?)
Eliezer wrote about his 2000 misjudgment a couple years ago, let’s see … here:
In 2000, the comic Melonpool showed a character pondering, “Bush or Gore… Bush or Gore… it’s like flipping a two-headed coin.” Well, how were they supposed to know? In 2000, based on history, it seemed to me that the Republicans were generally less interventionist and therefore less harmful than the Democrats, so I pondered whether to vote for Bush to prevent Gore from getting in. Yet it seemed to me that the barriers to keep out third parties were a raw power grab, and that I was therefore obliged to vote for third parties wherever possible, to penalize the Republicrats for getting grabby. And so I voted Libertarian, though I don’t consider myself one (at least not with a big “L”). I’m glad I didn’t do the “sensible” thing. Less blood on my hands.
It’s interesting to consider an alternative universe where Gore won that election and there was a win-win scenario: no invasion of Iraq and no extra publicity for global warming alarmists due to An Inconvenient Truth never being made.
It’s also quite possible, however, that Gore would have jumped on the bandwagon even if he’d been elected, in which case he might have done far more damage than Bush did by enacting some kind of cap-and-trade legislation.
Despite the many and varied catastrophic policy choices of the Bush administration, it’s still far from obvious that Gore would have been a better choice.
Gore has a long history of concern about global warming, and it’s pretty clear that he would’ve at least tried to enact restrictions on carbon emissions if he’d been President. But let’s not turn this thread into a debate over whether that would’ve been a good or bad policy, or over global warming or Al Gore more generally.
I would not be surprised if, in that universe, it became the first full-length movie made by a president in office.
I think we can safely assume that Eliezer disagrees with Eliezer_2000 with regards to that statement.
Political discussion was discouraged, but I think we’ve probably all practiced rationality enough to talk politics now without degenerating into shouting matches. Thanks for starting the discussion.
Maybe not shouting matches (it becomes too easy for opponents to get away with mass downvotes). But political discussions here degrade the quality of discussion and the quality of thinking drastically. This applies to some of the conversations on mainstream political issues. It was frustratingly obvious when it came to any conversations about Knox after the people who didn’t care finished using it as a case study. But where politics really becomes the mind-killer is in actual Less Wrong social politics, explicit and otherwise.
Mind killing isn’t about shouting matches. It’s about bullshit.
But political discussions here degrade the quality of discussion and the quality of thinking drastically… It was frustratingly obvious when it came to any conversations about Knox after the people who didn’t care finished using it as a case study.
Thinking about why I disagree with the latter sentence has led me to discover another reason why I agree with the former.
There is really nothing political about the Knox case; it’s simply a question of what did or did not happen at Via della Pergola 7 in Perugia between November 1 and 2 of 2007. And yet, almost everywhere it was discussed, people were unable to avoid turning it into a political issue: it was always about the Italian legal system, anti-Americanism, American arrogance, sexual mores, white-middle-class privilege, or what have you. (When Senator Maria Cantwell reacted to the verdict, did she dare express outrage that the life of one of her constituents had been ruined by the failure of eight people to understand probability theory? No, she spoke of “anti-Americanism”.)
Everywhere, that is, except on Less Wrong—where there was little or no discussion of these perhaps-interesting but strictly tangential matters. Here, it was pretty much exclusively about the facts of the case and the epistemic issues involved. (Contrast the discussion in the Richard Dawkins Forum, where people could not resist the temptation to lapse into ad-hominem attacks on the nationality—stated or supposed—of their opponents; there was nothing like that here at all.)
Now, I don’t know for sure that our informal policy of discouraging political discussions was causally decisive in keeping the quality high in this instance. But I can’t escape the conclusion that people have a natural tendency to see tribal politics in everything—so that Less Wrong’s “taboo” against politics not only prevents standard political flamewars but also, through learned cognitive habit, helps us avoid turning our ordinary discussions into political disputes.
What we want to avoid, in other words, is not just political talk but also the political mindset. Our unusually positive experience with the Knox case suggests to me that restricting the former may actually help fight the latter.
This is a good example of why we need a formalized process for debate—so that irrelevant politicizations can be easily spotted before they grow into partisan rhetoric.
Part of the problem also may be that people often seem to have a hard time recognizing and responding to the actual content of an argument, rather than [what they perceive as] its implications.
For example (loosely based on the types of arguments you mention regarding Knox, but using a topic I’m more familiar with):
[me] Bush was really awful.
[fictional commenter] You’re just saying that because you’re a liberal, and liberals hate Bush.
The reply might be true, but it doesn’t address the claim that “Bush was awful”; it is an ad hominem based on an assumption about me (that I am a liberal), an assumption about my intellectual honesty (that I would make an assertion just to agree with a group to which I belong), and the further presumption that there aren’t any good reasons for loathing Bush.
As a rational argument, it is plainly terrible—it doesn’t address the content to which it is responding. I suspect this was also the problem with the politicization that happened regarding the Knox issue—if respondents had addressed the actual content of the arguments, rather than what they perceived as their implications, much of that derailment might have been avoided.
It should be easier to identify arguments of that nature, and “take them down” before they spawn the kind of discussion we all want to avoid.
“Vote down” is presumably one way to do that—if enough people vote down such comments, then they get automatically “folded” by the comment system, and are more likely to be ignored (hopefully preventing further politicization) -- but apparently that mechanism hasn’t been having the desired effect.
Another problem with “Vote down” is that many people seem to be using it as a way of indicating their disagreement with a comment, rather than to indicate that the comment was inappropriate or invalid.
Are there any ongoing discussions about improving/redesigning/altering the comment-voting/karma system here at LessWrong?
(I was going to type more, but there were interruptions and I’ve lost the thread… will come back later if there’s more.)
I’ve always felt that a valid use of the karma system is to vote up things that you believe are less wrong and vote down things that you believe to be more wrong.
I have voted this comment up because I think this idea should be discussed.
Agreed—I often downvote because I believe a comment contains wrong data, such that believing the comment would be harmful to the reader.
This seems a valid interpretation to me—but is “wrongness” a one-dimensional concept?
A comment can be wrong in the sense of containing incorrect information (as RobinZ points out) but right in the sense of arriving at correct conclusions based on that data—in which case I would still count it as a valuable contribution, since it offers the chance to correct that data, and by extension to correct anyone who arrived at the same conclusion by believing the same incorrect data.
By the same token, a comment might include only true factual statements but arrive at a wrong conclusion by faulty logic.
I think I would be inclined, in any ambiguous case such as that (or its opposite), to base an up-or-down vote on the question of whether I thought the commenter was honestly trying to seek truth, however poorly s/he might be doing so.
Should commenters be afraid to repeat false information which they currently believe to be true, for fear of being voted down? (This may sound like a rhetorical question, but it isn’t.)
I think I would be inclined, in any ambiguous case such as that (or its opposite), to base an up-or-down vote on the question of whether I thought the commenter was honestly trying to seek truth, however poorly s/he might be doing so.
I don’t think that is in keeping with the overall goals of this site. You should get points for winning (making true statements), not for effort. “If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.”
This doesn’t necessarily mean instantly downvoting anyone who is confused, but it does mean that I’m not inclined to award upvotes for well-meaning but wrong comments.
Should commenters be afraid to repeat false information which they currently believe to be true, for fear of being voted down? (This may sound like a rhetorical question, but it isn’t.)
Yes. Commenters should assume their comments will be read by multiple people, and so should make a reasonable effort to check their facts before posting. A few minutes spent fact-checking any uncertain claims to avoid wasted time on the part of readers is something I expect of commenters here, and punishing factual inaccuracies with a downvote signals that expectation.
‘Reasonable effort’ is obviously somewhat open to interpretation, but if one’s readers can find evidence of factual inaccuracy in a minute or two of googling, then one has failed to clear the bar.
I would suggest that it makes no sense to reward getting the right answer without documenting the process you used, because then nobody benefits from your discovery that this process leads (in at least that one case) to the right answer.
Similarly, I don’t see the benefit of punishing someone for getting the wrong answer while sincerely trying to follow the right process. Perhaps a neutral response is appropriate, but we are still seeing a benefit from such failed attempts: we learn how the process can be misunderstood (because if the process is right, and followed correctly, then by definition it will arrive at the right answer), and thus how we need to refine the process (e.g. by re-wording its instructions) to prevent such errors.
Perhaps “Rationality is the art of winning the truth.”?
Actually, I really don’t like the connotations of the word “winning” (it reminds me too much of “arguments are soldiers”); I’d much rather say something like “Rationality is the art of gradually teasing the truth from the jaws of chaos.” Karma points should reflect whether the commenter has pulled out more truth—including truth about flaws in our teasing-process—or (the opposite) has helped feed the chaos-beast.
At the risk of harping on what is after all a major theme of this site, we do in fact have one—it’s called Bayesianism.
How should a debate look? Well, here is how I think it should begin, at least. (Still waiting to see how this will work, if Rolf ever does decide to go through with it.)
In fact, let’s try to consider your example from a Bayesian perspective:
(A) Bush was really awful.
(B) You’re just saying that because you’re a liberal, and liberals hate Bush.
Now, of course, you’re right that (B) “doesn’t address” (A) -- in the sense that (A) and (B) could both be true. But suppose instead that the conversation proceeded in the following way:
(A) Bush was really awful.
(B’) No he wasn’t.
In this case (B’) directly contradicts (A), which is about the most extreme form of “addressing” there is. Yet this hardly seems an improvement.
The reason is that, at least for Bayesians, the purpose of such a conversation is not to arrive at logical contradictions; it’s to arrive at accurate beliefs.
You’ll notice, in this example, that (A) itself isn’t much of an argument; it just consists of a statement of the speaker’s belief. The actual implied argument is something like this:
(A1) I say that Bush was really awful.
(A2) Something I say is likely to be true.
(A3) Therefore, it is likely that Bush was really awful.
The response,
(B) You’re just saying that because you’re a liberal, and liberals hate Bush.
should in turn be analyzed like this:
(B1) You belong to a set of people (“liberals”) whose emotions tend to get in the way of their forming accurate beliefs.
(B2) As a consequence, (A2) is likely to be false.
(B3) You have therefore failed to convince me of (A3).
So, why are political arguments dangerous? Basically, because people tend to say (A) and (B) (or (A) and (B’)) -- which are widely recognized tribal-affiliation signals—rather than (A1)-(A3) and (B1)-(B3), at which point the exchange of words becomes merely a means of acting out standard patterns of hostile social interaction. It’s true that (A) and (B) have the Bayesian interpretations (A1)-(A3) and (B1)-(B3), but the habit of interpreting them that way is something that must be learned (indeed, here I am explaining the interpretation to you!).
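To make the (A1)-(A3) and (B1)-(B3) reading concrete, here is a toy calculation in Python. All of the numbers are invented purely for illustration; the point is only the direction of the update, not the particular values.

```python
# A toy Bayesian reading of the exchange above. All probabilities are
# invented for illustration only.

def posterior_given_assertion(prior, p_assert_if_true, p_assert_if_false):
    """P(claim is true | speaker asserts it), by Bayes' theorem."""
    numerator = p_assert_if_true * prior
    evidence = numerator + p_assert_if_false * (1 - prior)
    return numerator / evidence

prior = 0.5  # listener's prior that "Bush was really awful"

# (A2): the speaker is treated as a generally reliable reporter, so the
# assertion is much more likely if the claim is true than if it is false.
reliable = posterior_given_assertion(prior, p_assert_if_true=0.8, p_assert_if_false=0.2)

# (B1)-(B2): "you're a liberal, and liberals hate Bush" -- the speaker is
# modeled as likely to assert the claim whether or not it is true, so the
# assertion carries almost no evidence.
partisan = posterior_given_assertion(prior, p_assert_if_true=0.9, p_assert_if_false=0.8)

print(f"posterior, speaker treated as reliable: {reliable:.2f}")  # ~0.80
print(f"posterior, speaker treated as partisan: {partisan:.2f}")  # ~0.53
```

On this reading, (B) is not a rebuttal of (A) at all; it is an attack on the likelihood ratio of the assertion itself, which is exactly the content of (B2)-(B3).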
I probably should have inserted the word “practical” in that sentence. Bayesianism would seem to be formalized, but how practical is it for daily use? Is it possible to meaningfully (and with reasonable levels of observable objectivity) assign the values needed by the Bayesian algorithm(s)?
More importantly, perhaps, would it be at least theoretically possible to write software to mediate the process of Bayesian discussion and analysis? If so, then I’m interested in trying to figure out how that might work. (I got pretty hopelessly lost trying to do explicit Bayesian analysis on one of my own beliefs.)
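To give a sense of what I have in mind, here is a very rough sketch of the kind of bookkeeping such software might do: participants state a prior and contribute evidence as explicit likelihood ratios, and the tool tracks the running total in log-odds. Everything here (the class name, the numbers) is hypothetical; it is not a description of any existing tool, and the hard part raised above (where those likelihood ratios come from) is simply taken as input.

```python
import math

# Hypothetical sketch of software-mediated Bayesian discussion: each piece
# of evidence is entered as an explicit likelihood ratio, and the tool keeps
# the running belief in log-odds. Not an existing Less Wrong feature.

def to_log_odds(p):
    return math.log(p / (1 - p))

def to_probability(log_odds):
    return 1 / (1 + math.exp(-log_odds))

class MediatedClaim:
    def __init__(self, statement, prior):
        self.statement = statement
        self.log_odds = to_log_odds(prior)
        self.evidence_log = []

    def add_evidence(self, description, likelihood_ratio):
        """likelihood_ratio = P(evidence | claim true) / P(evidence | claim false)."""
        self.log_odds += math.log(likelihood_ratio)
        self.evidence_log.append((description, likelihood_ratio))

    def current_belief(self):
        return to_probability(self.log_odds)

claim = MediatedClaim("Bush was really awful", prior=0.5)
claim.add_evidence("specific policy failure cited by proponent", likelihood_ratio=3.0)
claim.add_evidence("counter-evidence cited by opponent", likelihood_ratio=0.5)
print(f"{claim.statement!r}: current belief {claim.current_belief():.2f}")  # ~0.60
```

The disputed work then moves to where it belongs: arguing over whether a given likelihood ratio is anywhere near right.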
The process I’m proposing is one that is designed specifically to be manageable via software, with as few “special admin powers” as possible.
...
“Bush was really awful” was intended more as an arbitrary “starter claim” for showing how rational debate on political topics becomes “politicized” than as an argument I would expect to be persuasive.
If a real debate had started that way, I would have expected the very first counterargument to be something like “you provide no evidence for this claim”, which would then defeat it until I provided some evidence… which itself might then become the subject of further counterarguments, and so on.
In this structure, “No he wasn’t.” would not be a valid counterargument—but it does highlight the fact that the system will need some way to distinguish valid counterarguments from invalid ones; otherwise it has the potential to degenerate into posting “fgfgfgfgf” as an argument, and the system wouldn’t know any better than to accept it.
I’m thinking that the solution might be some kind of voting system (like karma points, but more specific) where a supermajority can rule that an argument is invalid, with some sort of consequence to the arguer’s ability to participate further if they post too many arguments ruled as invalid.
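Very roughly, the rule I am imagining might look something like the sketch below. The thresholds, names, and strike mechanics are placeholders for illustration, not a worked-out design.

```python
# Rough sketch of the proposed rule: an argument is struck once a quorum of
# voters rules it invalid by supermajority, and an arguer who accumulates too
# many struck arguments loses the ability to post further arguments.
# All thresholds and names are invented placeholders.

SUPERMAJORITY = 2 / 3   # fraction of votes needed to rule an argument invalid
MIN_VOTES = 3           # quorum before an argument can be struck
MAX_STRIKES = 3         # struck arguments allowed before participation is suspended

class Arguer:
    def __init__(self, name):
        self.name = name
        self.strikes = 0

    @property
    def may_post(self):
        return self.strikes < MAX_STRIKES

class Argument:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.valid_votes = 0
        self.invalid_votes = 0
        self.struck = False

    def vote(self, ruled_invalid):
        if ruled_invalid:
            self.invalid_votes += 1
        else:
            self.valid_votes += 1
        total = self.valid_votes + self.invalid_votes
        if (not self.struck and total >= MIN_VOTES
                and self.invalid_votes / total >= SUPERMAJORITY):
            self.struck = True
            self.author.strikes += 1

poster = Arguer("example_user")
argument = Argument(poster, "fgfgfgfgf")
for ruling in (True, True, True, False):
    argument.vote(ruling)
print(argument.struck, poster.strikes, poster.may_post)  # True 1 True
```

The open question is the same one as with karma: whether voters would use “invalid” to mean invalid, rather than “I disagree”.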
There’s a dynamic in conversations I’m noticing here, which is probably obvious to everyone else. I think for any given conversation, there are some “attractors”—directions the conversation could go which would be easy for many of the participants, but which would ultimately end all the interesting and useful parts of the conversation. And good moderation/guidance/curation involves steering the conversation away from those attractors.
For example, the talking heads shows I saw when the NYT ran the big story about massive, warrantless wiretapping by the NSA tended to quickly go from a potentially informative discussion about the specifics of the case, to a much easier-to-have discussion[1] about whether the NYT should have published the story, perhaps even about whether publishing it amounted to treason or should have gotten someone arrested.
Compare how Google translation works: “In practice, languages are used to say the same things over and over again.” That is how potentially informative conversations go redundant. These attractors happen both because they’re easy conversation and because they’re useful for propagandists to set up.
I’m not sure that the karma system needs to be redesigned—there’s a limit to how much you can say with a number. It might help to have a “that was fun” category, but I think part of the point of karma is that it’s easy to do, and having a bunch of karma categories might mean that people won’t use it at all or will spend a lot of time fiddling with the categories.
We may have reached the point in this group where enough of us can recognize and defuse those conversations which merely wander around the usual flowchart and encourage people to add information.
Ahem.
A fair example.
I may have overestimated the skill level of the group. Or maybe bringing up redundancy as a problem is the first move in developing that skill.
One method for dealing with them would be to have designated posts for threads on observed attractors, indexed on the wiki, and to fork tangents into those threads.
In keeping with General Order Six: other methods include, as suggested, downvoting any derail into a recognized attractor, with explanation; adding known attractors to a list of banned subjects … it might be best to combine some of these, actually.
Before we start planning solutions, should we perhaps establish whether there is a consensus that we even have a problem? One vote for ‘no problem’ from me.
Good question—let’s watch for attractors for a month, and pay attention to how many turn up.
Attractors aren’t just subjects; they’re subjects which are commonly discussed in a way that couldn’t pass a Turing test.
If we can manage to bring out new material on one of those subjects, so much the better for us.
That’s a good reason to continue permitting such discussions, but given the continuing influx of new posters, I suspect there will still be repetition.
The existence of conversational attractors is why I think any discussion tool needs to be hierarchical—so any new topic can instantly be “quarantined” in its own space.
The LW comment system does this in theory—every new comment can be the root of a new discussion—but apparently in practice some of the same “problem behaviors” (as we say here in the High Energy Children Research Laboratory) still take place.
Moreover, I don’t understand why it still happens. If you see the conversation going off in directions that aren’t interesting (however popular they may be), can’t you just press the little [-] icon to make that subthread disappear? I haven’t encountered this problem here myself, so I don’t know if there might be some reason that this doesn’t work for that purpose.
Just now I tried using that icon—not because I didn’t like the thread, but just to see what happened—and it very nicely collapsed the whole thing into a single line showing the commenter’s name, timestamp, karma points, and how many “children” the comment has. What would be nice, perhaps, is if it showed the first line of content—or even a summary which I could add to remind myself why I closed the branch. That doesn’t seem crucial, however.
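For what it’s worth, the collapsing behavior described above seems simple enough to model. Here is a bare-bones sketch of a comment tree whose collapsed nodes render as the kind of one-line summary I just saw; it is purely illustrative and is not how the actual LessWrong code works.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of a collapsible comment tree: a collapsed node renders
# as a single summary line (author, timestamp, karma, child count) and hides
# its subthread. Not Less Wrong's actual implementation.

@dataclass
class Comment:
    author: str
    timestamp: str
    karma: int
    text: str
    children: List["Comment"] = field(default_factory=list)
    collapsed: bool = False

    def count_descendants(self) -> int:
        return sum(1 + child.count_descendants() for child in self.children)

    def render(self, depth: int = 0) -> str:
        indent = "  " * depth
        if self.collapsed:
            # The whole subthread folds into one summary line.
            return (f"{indent}[+] {self.author} | {self.timestamp} | "
                    f"{self.karma} points | {self.count_descendants()} children\n")
        out = f"{indent}{self.author} ({self.karma}): {self.text}\n"
        for child in self.children:
            out += child.render(depth + 1)
        return out

root = Comment("alice", "2010-02-11 10:00", 5, "Conversational attractors are a problem.",
               children=[Comment("bob", "2010-02-11 10:05", 2, "Tangent about wiretapping...")])
root.collapsed = True
print(root.render(), end="")
```

Adding a first-line preview or a user-supplied summary to the collapsed view would just be another field on this structure.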
I disagree.
Politics is something we’re wired to care about waaay too much, and “talking politics” is just not a good idea.
ETA: I avoid political discussions for my own sanity.
By talking politics, I meant talking meta-politics.
Oh, I think that’s probably fine. If by meta-politics you mean something like political philosophy. Like, “Fascists believe X” “No they don’t, they clearly believe Y, which is inconsistent with X”
Then… so? They are ists. ists believe inconsistent things all the time! That’s how they signal how truly ist they are. @#%$ing ists.
I attempted to type up a humorous response and found it to be not very humorous, and then my touchpad ate it. Feel free to imagine a funnier response.
For the record, I started sporadically checking this site around mid January.
I meant that the community as a whole has practiced enough and reached a certain standard of discourse.