So why is the advice “behave as if your interlocutors are also aiming for convergence on truth”, rather than “seek out conversations where you don’t think your interlocutors are aiming to converge on truth, because those are exactly the conversations where you have something substantive to say instead of already having converged”?
[...] To see why, substitute “making money on prediction markets” for “moving closer to truth”, “betting” for “updating”, and “trying to make money on prediction markets” for “seeking truth”
The one should not be substituted for the other, because there are important differences in the goals.
On a betting market, if you have a knowledge edge, it’s in your interest to keep it that way, to the extent possible. Obviously, the fact of your making bets leaks information, but you don’t want information to leak via any other means. If you have a brilliant weather model that’s 10x more accurate than everyone else’s, you definitely don’t want to publish it on your website; you want to keep winning bets against people with worse models. In fact, if you have the opportunity to verbally praise and promote the wrong models, it’s in your interest to do so; and if, for some reason, you have to publish the details of your weather model, it’s in your interest to make your writeup as confusing, inscrutable, and hard to implement as possible.
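(To make the incentive point concrete: below is a minimal toy simulation, my own illustration rather than anything from the original exchange. It assumes a made-up binary prediction market in which your model pins down the true probability while the crowd’s price is noisy; the per-bet profit comes entirely from that gap, and once the crowd adopts your model and the price matches the truth, it drops to zero.)

```python
# Toy sketch with made-up numbers (illustration only): on a binary prediction
# market, a bettor who knows the true probability profits from the gap between
# that probability and a noisy market price. If the model is published and the
# market price becomes accurate, the gap (and the profit) disappears.
import random

random.seed(0)

def avg_profit(market_noise, n=200_000):
    """Average profit per trading opportunity when the market price equals the
    true probability plus Gaussian noise of scale `market_noise`."""
    total = 0.0
    for _ in range(n):
        p_true = random.uniform(0.05, 0.95)          # true chance the event happens
        price = min(max(p_true + random.gauss(0.0, market_noise), 0.01), 0.99)
        outcome = random.random() < p_true
        if p_true > price:       # event underpriced: buy a "yes" share at `price`
            total += (1.0 - price) if outcome else -price
        elif p_true < price:     # event overpriced: sell a "yes" share at `price`
            total += price if not outcome else -(1.0 - price)
    return total / n

print("edge kept private (noisy market price): %+.3f per opportunity" % avg_profit(0.15))
print("model published (accurate market price): %+.3f per opportunity" % avg_profit(0.0))
```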
If, on a forum, you think you “win points” solely by writing correct arguments when others are wrong, then it’s in your interest to make sure no one else learns from the things you write, so you can keep winning. If you have an opportunity to phrase something more offensively, take it, so your opponents are more likely to get angry, reject your correct arguments, and stay wrong. And, for that matter, why explain your reasoning? Why not just say “You’re wrong, you stupid f***; X is the truth”?
I don’t think you actually believe that you “win points” solely by writing correct arguments when others are wrong. I suspect you have a notion of what “making proper arguments” is—and it involves clearly explaining your reasoning and such—and view participation in the forum as a game in which participants are trying to be the best at “making proper (and novel) arguments”. Well, it seems like we could choose whatever notion of a “proper argument” we liked, and upvote arguments to the extent that they match the ideal, and at least in theory we’d end up with posts of the type we’re rewarding. So we need to decide what we want to reward, and presumably “clearly stated arguments that aren’t deliberately trying to enrage people” are part of what we’d like to end up with.
So, exactly what type of posts do we want people to be trying to write? One strategic decision, which I think Duncan makes and I’m not sure of your opinion on, is to try to get lots of value from participants who are fairly good but imperfect—specifically, are at least somewhat prone to turn arguments into slap fights if they feel like they’ve been slapped (and evolutionary processes have created memes that encourage people to view lots of things as slaps)—and therefore to have the “ideal posting goals” call for error-correcting mechanisms and stuff that make this less likely.
(An alternate strategy would be “Assume that all participants we care about are the platonic ideal, who won’t take any bait and never let anger or any other emotion bring them to any wrong decisions; rely on downvotes to purge any bad behavior.” This could be a good approach, especially if you think this platonic ideal is easy to achieve. However, if there are actually quite a lot of imperfect participants, this could go badly. I will merely say that this would be more appropriate for a website called Never Wrong.)
[Why not] “seek out conversations where you don’t think your interlocutors are aiming to converge on truth, because those are exactly the conversations where you have something substantive to say instead of already having converged”?
It depends somewhat on one’s model here. Ideally, interlocutors who aren’t aiming to converge on the truth, and write bad posts as a result, will get downvoted, and then we don’t need to care about them. Or maybe the socially enforced rules will end up pushing them into writing posts that are actually good even if they didn’t mean them to be; that’s a fine outcome. Also, given that somewhat bad posts exist, another strategy is to find them and write a really good reply that enlightens the readers and may even push the authors of the bad posts to write better replies themselves; that also seems like a good outcome, and therefore one we’d want to reward. (A possible downside: replying at all attracts more eyes to the conversation—e.g. the frontpage does show recent comments—and if the conversation leading up to your post is bad enough, the net effect on readers may be negative unless your reply is good enough to outweigh it.)
So, yes, it may in fact make sense to seek out conversational cesspools and write comments to improve them. The difference with betting markets is: with a betting market, you hope this keeps happening so you can keep profiting off others’ ignorance; but on a forum where the goals are what I think they are, you hope that the participants and observers learn to stop creating cesspools—or, well, you hope for whatever you want[1], but you act as though that’s your goal, and do your best in your comment to encourage future good behavior, because that’s what the forum ideally rewards.
There is a potential issue, pointed out in some fictional stories and sometimes in real life: if someone’s identity / fulfillment / most profitable career path is “swooping in to save everyone from instances of problem X”, then they may have a perverse incentive to discourage anyone else from solving X in general. Luckily, the tragedy of the commons can help us here: though it might, e.g., benefit cardiologists collectively if everyone had horrible nutrition, it’s unlikely to be worthwhile for any individual cardiologist to spend the effort lobbying for that.
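(A quick back-of-the-envelope version of the commons point, with numbers I am making up purely for illustration: even if worse nutrition were collectively lucrative for cardiologists, the gain accruing to any one of them is far smaller than the cost of lobbying for it alone.)

```python
# Made-up numbers, purely to illustrate the tragedy-of-the-commons logic above.
n_cardiologists = 25_000
collective_gain = 50_000_000   # hypothetical extra revenue to the whole profession, $/year
lobbying_cost = 200_000        # hypothetical cost borne by the one cardiologist who lobbies, $

gain_per_doctor = collective_gain / n_cardiologists
print(f"gain per individual cardiologist: ${gain_per_doctor:,.0f}")   # $2,000
print(f"cost to the lone lobbyist:        ${lobbying_cost:,.0f}")     # $200,000
print("worth doing alone?", gain_per_doctor > lobbying_cost)          # False
```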
I suspect you have a notion of what “making proper arguments” is—and it involves clearly explaining your reasoning and such—and view participation in the forum as a game in which participants are trying to be the best at “making proper (and novel) arguments”.
Right!
specifically, are at least somewhat prone to turn arguments into slap fights if they feel like they’ve been slapped
I guess I’m OK with a little bit of slap-fighting when I don’t think it’s interfering too much with the “make proper and novel arguments” game, and I’m worried that on the current margin (in the spaces where people are reading this post and the one it’s responding to), the cure is worse than the disease (even though this is a weird problem to have relative to the rest of the internet)?
The standard failure mode where fighting and insults get in the way of making-proper-and-novel-arguments is definitely bad. But in the spaces I inhabit, I’m much more worried about the failure mode where people form a hugbox/echo-chamber where they/we congratulate them/ourselves on being such good “collaborative truth-seekers”, while implicitly colluding to shut out proper and novel arguments on the pretext that the speaker is being insufficiently “collaborative”, “charitable”, &c.
In particular, if I make a criticism that is itself wrong, I think it’s great and fine for people to just criticize my criticism right back, even if the process of litigating that superficially looks like a slap-fight. I think that’s more intellectually productive than (for example) expecting critics to pre-emptively pass someone’s Intellectual Turing Test.
therefore to have the “ideal posting goals” call for error-correcting mechanisms and stuff that make this less likely.
I’m in favor of ideal posting goals and error-correcting mechanisms, but I think that “rationalist” goals need to justify themselves in terms of correctness and only correctness, and I’m extremely wary of norm-enforcement attempts that I see as compromising correctness in favor of politeness (even when the people making such an attempt don’t think of themselves as compromising correctness in favor of politeness).
If someone thinks I’m mistaken in my claim that a particular norm-enforcement attempt is sacrificing correctness in favor of politeness, I’m happy to argue the details and explain why I think that, but it’s frustrating when attempts to explain problems with proposed norms are themselves subjected to attempts to enforce the norms that are being objected to!
The difference with betting markets is: with a betting market, you hope this keeps happening so you can keep profiting off others’ ignorance; but on a forum where the goals are what I think they are, you hope that the participants and observers learn to stop creating cesspools
Thanks, this is an important disanalogy that my post as originally written does not adequately address!