Agree, Retort, or Ignore? A Post From the Future
My friend Sasha, the software archaeology major, informed me the other day that there was once a widely used operating system, which, when it encountered an error, would often get stuck in a loop and repeatedly present to its user the options Abort, Retry, and Ignore. I thought this was probably another one of her often incomprehensible jokes, and gave a nervous laugh. After all, what interface designer would present “Ignore” as a possible user response to a potentially catastrophic system error without any further explanation?
Sasha quickly assured me that she wasn't joking. She told me that early 21st century humans were quite different from us. Not only did they routinely create software like that, they could even ignore arguments that contradicted their positions or pointed out flaws in their ideas, and did so publicly without risking any negative social consequences. Discussions even among self-proclaimed truth-seekers would often conclude, not by reaching a rational consensus or an agreement to mutually reassess positions and approaches, or even by a unilateral claim that further debate would be unproductive, but when one party simply failed to respond to the arguments or questions of another, without giving any indication of the status of their disagreement.
At this point I was certain that she was just yanking my chain. Why didn’t the injured party invoke rationality arbitration and get a judgment on the offender for failing to respond to a disagreement in a timely fashion, I asked? Or publicize the affair and cause the ignorer to become a social outcast? Or, if neither of these mechanisms existed or provided sufficient reparation, challenge the ignorer to a duel to the death? For that matter, how could those humans, only a few generations removed from us, not feel an intense moral revulsion at the very idea of ignoring an argument?
At that, she launched into a long and convoluted explanation. I recognized some of the phrases she used, like “status signaling”, “multiple equilibria”, and “rationality-enhancing norms and institutions”, from the Theory of Rationality class that I took a couple of quarters ago, but couldn’t follow most of it. (I have to admit I didn’t pay much attention in that class. I mean, we’ve had the “how” of rationality drummed into us since kindergarten, so what’s the point of spending so much time on the “what” and “why” of it now?) I told her to stop showing off, and just give me some evidence that this actually happened, because my readers and I will want to see it for ourselves.
She said that there are plenty of examples in the back archives of Google Scholar, but most of them are probably still quarantined for me. As it happens, one of her class projects is to reverse engineer a recently discovered "blogging" site called "Less Wrong", and to build a proper search index for it. She promised that once she was done with that, she would run some queries against the index and show me the uncensored historical data.
I still think this is just an elaborate joke, but I’m not so sure now. We’re all familiar with the vastness of mindspace and have been warned against anthropomorphism and the mind projection fallacy, so I have no doubt that minds this alien could exist, in theory. But our own ancestors, as recently as the 21st century? My dear readers, what do you think? She’s just kidding… right?
[Editor’s note: I found this “blog” post sitting in my drafts folder today, perhaps the result of a temporal distortion caused by one of Sasha’s reverse engineering tools. I have only replaced some of the hypertext links, which failed to resolve, for obvious reasons.]
I think you need a proposal that is a lot clearer before it stands any substantial chance of becoming a social norm. Surely we can't expect every comment here (or anywhere) to end with a full explicit probability distribution over all related issues. Nor can we demand that everyone reply to any reply to them. So you need some easily implementable and verifiable standards on who is expected to reply to what, and who is supposed to summarize their opinions, when, and on what. I'm not that optimistic about this approach, but I'll keep my mind open.
I forgot to ask, why are you not optimistic about this approach?
The ambiguities seem difficult to surmount. Conversation is a highly evolved system and random changes are usually for the worse.
What ambiguities are you referring to? My proposed norm seems quite clear, compared to other standard norms such as “don’t flame or troll”.
Online conversation, being a recent invention, probably has lots of room for improvement.
Agreed, which is why I try to suggest non-random changes. :)
As a starting point, I’d be satisfied with getting this feature proposal implemented, and just having a norm that says that every comment that presents a contrary argument should have either an explicit reply or a disagreement status indicator set by the author of the parent of that comment, with common sense exceptions.
I realize that it might not work as well as I hope, or might even make things worse, but the cost seems low enough (compared to the potential benefits, especially if the idea catches on elsewhere) to be worth a try. It would be easy to turn the feature off if it turns out not to help.
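To make the proposal concrete, here is a minimal sketch of how such a per-comment status might be represented. Everything here (the names, the particular status values, the extra bookkeeping field) is a hypothetical illustration, not Less Wrong's actual data model or a committed design:

```python
# Hypothetical sketch of the proposed disagreement-status feature.
# All names and status values are invented for illustration.

from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional


class DisagreementStatus(Enum):
    UNSET = auto()
    AGREE = auto()                      # "you've changed my mind"
    DISAGREE_WILL_REPLY = auto()
    DISAGREE_NO_FURTHER_REPLY = auto()  # "I disagree, but won't continue this conversation"
    ANSWERED_BY_SOMEONE_ELSE = auto()


@dataclass
class Comment:
    author: str
    body: str
    parent: Optional["Comment"] = None
    # Per the proposed norm, set by the author of the parent comment.
    status: DisagreementStatus = DisagreementStatus.UNSET
    replied_to_by_parent_author: bool = False


def open_disagreements(comments: List[Comment]) -> List[Comment]:
    """Contrary arguments whose parent's author has neither replied nor set a status."""
    return [c for c in comments
            if c.parent is not None
            and not c.replied_to_by_parent_author
            and c.status is DisagreementStatus.UNSET]
```

Turning the feature off would then just mean hiding the status field, which fits the low-cost, easily reversible experiment described above.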
There’s the cost to implement it and the cost to use it, and neither is trivial. A fine idea for fans to try, but not ready as a norm for non-fans.
The cost to use it is, on the face of it, at most a couple of mouse clicks. How could that be higher than the benefit of letting every reader know why the conversation ended? Perhaps I’m leaving out some hidden costs here, in which case, what do you think they are?
As for the cost to implement, I volunteer to code the feature myself, if I can get a commitment that it will be accepted (and if someone more qualified/familiar with the codebase doesn’t volunteer).
When I read a comment, I may have a vague sense of not-worth-more-time-ness. So I don't respond.
I expect actually resolving that sense into a concrete reason to be effortful. It seems like it’d be worth it to do in many cases, but not always.
A version of this feature that seems more likely to succeed, to me, would be one where it takes a mouse-click to request a reason for the end of an argument. I'd expect that to dramatically cut down on the number of times I'd have to resolve a vague sense into a concrete reason.
How do you propose to evaluate whether this feature, if and when implemented, has achieved the desired aim (or made things worse) ?
I don’t have any good ideas here, and guess that it will have to be a judgment call. We agree that the karma system has made things better, right? I hope this change will have an effect that’s similarly obvious.
Plausible deniability all over again. If you don't reply, it can always be seen as if you've forgotten/left on vacation/got eaten by a giant squid. If you do reply, you signal with the quality of your reply, and so if you don't do your best in the context of the conversation at that point, it's a negative signal based on what's expected. A sloppy reply, or a "this is what I believe if not why, and I won't continue this conversation", will even signal that you don't respect your interlocutor enough to give a proper response and explain your disagreement, an inference worse than if you hadn't replied at all. Deciding to reply carries a sentence of having to reply well, and probably of having to reply to the follow-ups too. Not replying at all is the only way out.
We need a norm not for stating your last position, which is a burden on the person who has to reply, but for accepting irresponsible declarations as last words in a conversation (and some way of signaling that a comment is in this "last word" mode). This is somewhat in conflict with mental hygiene: you shouldn't normally expose yourself to statements you can't sufficiently verify for yourself, but for this particular problem the balance seems to be in the other direction.
I tend to non-reply for this reason often.
So I appreciate this insight.
I agree completely with your first paragraph. This is what I meant by the ease of ignoring an argument being a distorting effect of online or academic (i.e., paper publishing) conversation. In a real-time conversation or debate, this kind of plausible deniability doesn't exist as an option. I think my suggestion would help remove this plausible deniability and move the online conversation format closer to real-time conversation.
I think this is a good idea. Perhaps it can be implemented by allowing each commenter to set a status with the following options:
- I request a reply.
- No reply is necessary.
And having a norm that encourages selecting the second option when appropriate.
I can’t make much sense out of this sentence. I expose myself to statements that I can’t verify all the time, just by browsing the web for news and ideas. Do you want to try to restate this… (or not, it doesn’t seem central to the issue at hand).
ETA2: Found a link for http://wiki.lesswrong.com/wiki/Epistemic_hygiene
It does: you can often change the subject, give a non-answer or even be silent for a few moments and then try to continue the conversation without giving the answer.
Not just a reply, but a bare position statement (that’s the right term, should also work as a signal for this mode: “My position [statement]: …”), possibly without explanation.
Does the link to epistemic hygiene (sorry for using a nonstandard term) resolve this misunderstanding? The point is that knowledge about assertions leaks through, biasing intuition about facts (blog:”do we believe everything we’re told?”, wiki:”Dangerous knowledge”), so it’s a bad idea to fill your mind with hypotheses that you have no reason to believe—it’s knowably miscalibrated availability. As a result, observing unexplained assertions is a pointless or sometimes even harmful activity, but in this case it’s exactly what is asked of the last-worder.
No, the link does not help at all. The second quoted sentence is clear, but it doesn’t seem remotely like the wiki. If that is what you (and others) mean by the phrase, then you should change the wiki. One difference is that the wiki is written as if it is about specific procedures (hand-washing), while the point here is the problem (hygiene).
Yes, you're right. My statement was too strong. It still seems to me that it's easier to ignore arguments online. In a real-time conversation you can remind someone that he hasn't responded to your argument, in which case he loses much of his plausible deniability. Online, such reminders seem to work very poorly, in my experience, to the extent that almost nobody even bothers to try them.
I’m not sure what exactly you are proposing here. Can you describe how you think the feature should work?
Actually, no. Thanks for asking.
Isn't the fact that someone else believes in it strongly enough to have stated it in public sufficient reason for me to put some weight on that hypothesis?
I can understand this if you mean random assertions, but I think that observing unexplained assertions made by others in good faith would be beneficial on average, even if sometimes harmful. Do you disagree?
I don't think "I request a reply" and "No reply is necessary" are enough statuses. A lot of comment replies invite perfectly reasonable interjection from third parties—does requesting a reply mean that you want the parent to answer, or that you're trying to collect lots of data points, or that anybody who can answer the question is welcome to do so?
I think anybody who can answer the question or point out a flaw in the argument should always be welcome to do so, regardless, but “I request a reply” means the author of the parent should at least set a disagreement status indicator (perhaps with “someone already answered it for me” as an additional option).
“I’m trying to collect lots of data points” seems to be rare enough, that it doesn’t need to be an option. You can just say that in the comment.
In other words, you have a hypothesis that the purpose of a “LessWrong”-type site is to provide a space for debate, and the data disagrees with your hypothesis. Whereupon you’ve decided to challenge the data...
I’m a relative newcomer to LW, having stumbled across it a couple months ago. Take it from this newbie: it’s entirely unclear at the start what you are supposed to do in top-level posts and comments. This community operates on norms that (as far as I could tell) are tacit rather than explicit, and the post above suggests that these norms are unclear to the community itself.
For instance, at one point I thought that comments provided an opportunity for readers to offer constructive feedback to authors. In this view, a comment isn’t intended to signal agreement or disagreement; rather its job is to help the author improve his or her exposition of the point they wanted to make, to be as clear, informative and convincing as possible.
My advice: authors of top-level posts could clarify what kind of response they expect (constructive feedback, agreement/disagreement, having their mistakes pointed out and corrected, elaboration of their theses by people who agree, etc.). Authors would then take responsibility for declaring cloture of this process, if appropriate.
That doesn't seem like a valid interpretation of my post. Less Wrong is obviously a place for debate, as well as other things. My point is that the form of the debate isn't as good as I think it could be.
Why shouldn’t comments do all of those things? Can you give an example of a situation where an author might want to rule out one of those categories of comments?
Perhaps you misinterpreted my suggestion as trying to turn Less Wrong into a place only for debate. That’s certainly not my intention.
Yes, observations will confirm that debate happens here.
However, to say that it is a forum intended to support debate is a different matter, and there's nothing "obvious" about that. Where is the forum's charter? The most available place to look for it is the "About" page.
What content guidelines does that page offer? "We suggest submitting links with a short description." That is hardly an encouragement to debate.
More generally, I respectfully submit that the “About” page, as written, is more misleading than helpful as to the type of top-level posts that will be appreciated by the LW audience, and that newbies could be given more appropriate guidance. If you (I mean you-the-LW-community, not you-Wei) wanted posters and commenters to abide by certain norms, you could probably improve your results by setting out those norms explicitly in the “About” page.
Yes. If I submit a "fiction" type post, then the feedback I want is going to be one particular kind (in my case, I would want to know how readers have interpreted my writing, so that I can improve the writing to convey a more appropriate message). If I submit a post to further my own learning, I'm not interested in agreement or disagreement, I'm interested in new ideas and pointers to information. If I submit a post to put together a virtual study group to study Jaynes' book, the response I want is yet again different.
Generally, the author knows better than commenters what kind of feedback is appropriate, because the author is best judge of their own intent. So it seems more effective to leave it up to the author to encourage a certain type of comment, seek cloture or not, and recognize cloture when it has occurred.
Well, the motto of this blog (right in the header graphic) is “refining the art of human rationality”. How could we possibly do that without debate and argument?
I think the intended meaning of this sentence is “If you submit a link, then please include a short description.” and not “We most welcome submissions that consist of a link with a short description.” This should probably be clarified.
I disagree, because the comments are visible not just to the author, but to other readers as well, and are often intended more for the benefit of other readers than for the author. Debates often occur in the comments between persons who are not the post author, and I think those debates should be supported as well.
I hope I’m not crossing some LW norm by posting too many comments (I remember some guideline to this effect from the period when these discussions happened on OB), but anyway, here goes.
I can think of at least two ways to achieve this, other than debate and argument. I will list them below, rot13'd. What I would like you to do is to think of two by yourself before reading mine.
My intent in setting you this challenge is to show that “How could we possibly...” questions are a habit best repressed, as they tend to be rhetorical questions. I strongly suspect you can think of other ways yourself, and if you went to that effort you would agree with me that “obvious” doesn’t apply.
Additionally, this may show that debate could be structured in ways different (and possibly more effective) than “I (dis)agree because X” statements.
Bar jnl gb ersvar gur neg bs engvbanyvgl vf gb fghql eryrinag grkgf. Guvf erdhverf ab qrongr be nethzrag, whfg chggvat gbtrgure n tebhc bs yrnearef, vqrnyyl jvgu fbzrbar (n grnpure be zber nqinaprq fghqrag) jub pna uryc gurz bire qvssvphyg gbcvpf be rkrepvfrf.
Nabgure jnl gb ersvar gur neg bs engvbanyvgl vf gb qrivfr tnzrf be rkcrevragvny fvzhyngvbaf juvpu punyyratr gur cynlref’ fxvyyf va engvbanyvgl. Ab qrongr be nethzrag vf gura erdhverq, va snpg nethzragf jvyy or frggyrq abg ireonyyl ohg ol ybbxvat ng zrnfhenoyr bhgpbzrf.
I agree that “debate” can be useful. However, the form it typically takes in online fora can be quite distorted. To take one example, we are both using selective quoting of each other’s comments. This selection is an obvious source of bias if we aren’t careful.
So, I agree with your points (and the original post) in some respects: a) that norms and standards of behaviour can make debate more useful than it usually is, and b) that the LW “welcome” pages could be clarified to convey existing norms better.
My point about authors should also be clarified: “the author knows better than commenters what kind of feedback the author wants”. That is why I think authors are responsible for framing and moderating discussion arising from their post.
This generalizes to commenters: if you write a comment saying “I disagree, because X” you know better than anyone else what your aim is in doing that: helping the author improve their post, or starting a debate aiming at some kind of resolution, or perhaps signaling the extent of your knowledge, or signaling your feelings about the author or about some other commenter. If those aims are unstated, you’re confusing the authors and possibly other commenters.
(One possible reason people may have for choosing the “Ignore” option, by the way, is that a discussion without explicit framing and moderation becomes overly confusing, and the best rational use of their time is to simply walk away from it.)
Therefore, it seems to me, both authors and commenters are better served by the author’s framing the discussion. If the commenters want to have a debate and that isn’t what the original author wanted, all they have to do is write a new top-level post.
It should be clear that I don’t think debate is the only way to refine human rationality (or I wouldn’t have posted a piece of fiction), but I can’t think of anything that can replace the essential function that debate serves, in clarifying people’s positions and arguments, getting them to reassess in light of new information, and possibly reaching a consensus that updates on all relevant information. We’re certainly not at a point where we already know what rationality is, and the only remaining task is for everyone to learn how to practice it.
I agree with you that online debates can easily become distorted due to the nature of the medium, and I think we should try to find ways to reduce that effect. I submit that one of these distorting effects is how easy it is to ignore a contrary argument. (I think it’s probably even easier to ignore contrary arguments in academia, and we should try to fix that as well, although that’s a much more difficult job.)
I still don’t like this idea (of having the author moderate the discussion), but others might, and it seems largely independent of my suggestion of a disagreement status indicator. You should probably propose this in a more prominent place and get other people’s feedback.
As I still think of myself as a newbie here, I’ll wait and observe a little while first, but thanks for the encouragement.
If posting on a blog required committing to spend time in the future answering replies, then I wouldn’t post there. I treat blogging as a leisure activity, which means that I should be able to stop doing it at any time and for any duration without consequences. I think most non-prominent posters feel the same way.
I’m actually divided about this. On the one hand, it’s a really good point—this isn’t something which should be a commitment. On the other hand, it makes sense as a norm—it’s worth a lot to see an argument end in a satisfactory fashion, even if that fashion is, “I don’t see any value in continuing this debate”.
On the gripping hand, though, people will drop out of arguments without conclusion—through burning out, getting slammed with other commitments, or even through the simple decision that the community is not worth their time. And all these are legitimate reasons, even if “I can’t win this argument otherwise” is not.
“Encourage it” is about where I’m at, now.
Your request seems reasonable, and I think it can be accommodated without too much difficulty. For example, each user can set a "vacation" option which, if turned on, will cause every unanswered comment directed at that user to receive an "I'm away from this blog" status response.
I do not like autoresponders.
Something that could be done unintrusively and without manual intervention from users is some indicator that the author has not logged in recently, e.g. a change in color somewhere, an asterisk, etc.
I like this idea, whether or not any other suggestions are put into practice, though I suggest that there be a way to opt out of using it (with a different/complementary signal that you're doing so, rather than showing as always or never active). There are periods when I only have the time or energy to skim the blog, but not comment, and I'm sure I'm not the only one who finds themselves in that position or simply prefers privacy.
I agree. You shouldn’t be expected to manually delete your cookies in order to manipulate activity indicators.
I feel that something like a bug-tracking / trouble-ticket software format could be appropriate for this purpose: unresolved arguments would be marked as such, people could view unresolved "arguments" assigned to them, and it would be easy to judge the level of commitment of participants by their number of unresolved arguments.
Obviously, participating in such a system would require a much higher level of commitment than the current blog format does.
I agree about the issue of unresolved arguments. Was agreement reached and that's why the debate stopped? No way to tell.
In particular, the epic AI-foom debate between Robin and Eliezer on OB, over whether AI or brain simulations were more likely to dominate the next century, was never clearly resolved with updated probability estimates from the two participants. In fact, probability estimates were rare in general. Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.
BTW sorry to see that linkrot continues to be a problem in the future.
I took the liberty of creating a wiki page about the AI-foom debate, with links to all of the posts collected in one place, in case anyone wants to refer to it in the future.
I find myself reluctant to support this idea. I think the main reason is that it seems very hard to translate my degrees of belief into probability numbers. So I’m afraid that I’ll update my beliefs correctly in response to other people’s arguments, but state the wrong numbers. Is this a skill that we can learn to perform better?
Right now I just try to indicate my degrees of belief using English words, like “I’m sure”, “I think it’s likely”, “perhaps”, etc., which has the disadvantage of not being very precise, but the advantage of requiring little mental effort (which I can redirect into for example thinking about whether an argument is correct or not).
ETA: It does seem that there are situations where the extra mental effort required to state probability estimates would be useful, like in the AI-Foom debate, where there is persistent disagreement after an extensive discussion. The disputants can perhaps use probability estimates to track down which individual beliefs (e.g., conditional probabilities) are causing their overall disagreement.
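As a hypothetical illustration of that last point (the decomposition and numbers below are invented, not anyone's actual estimates), two disputants could agree on an expansion by the law of total probability and then compare terms:

```latex
P(\text{foom}) \;=\; P(\text{foom} \mid \text{insight})\,P(\text{insight})
           \;+\; P(\text{foom} \mid \neg\,\text{insight})\,P(\neg\,\text{insight})
```

If both sides assign similar conditional probabilities but one puts P(insight) at 0.8 and the other at 0.2, the overall disagreement has been localized to that single belief, which is the kind of tracking-down the ETA suggests.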
Would that be desirable? I know, for example, that when reading Robin's posts on that topic I often updated away from Robin's position (weak arguments from a strong debater are evidence that there are no stronger arguments). Given this possibility, having public numbers diverging in such a way would be rather dramatic and decidedly favour dishonesty.
In general there are just far too many signalling reasons to avoid having ‘probability estimates’ public. Very few discussions even here are sufficiently rational as to make those numbers beneficial.
When your estimates are tracked (which was the purpose of predictionbook.com [disclaimer: financial interest]) it becomes much harder to signal with them without blowing your publicly visible calibration.
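For readers unfamiliar with how such tracking constrains signaling, one standard calibration measure (not necessarily the one that site uses) is the Brier score over all of a user's recorded predictions:

```latex
\text{Brier} \;=\; \frac{1}{N}\sum_{i=1}^{N} (f_i - o_i)^2,
\qquad f_i \in [0,1],\quad o_i \in \{0,1\}
```

Here f_i is the stated probability and o_i the eventual outcome; systematically exaggerated or deflated estimates raise the score, so using the numbers to posture eventually shows up in the public record.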
It does. Of course, given that I was primed with the ‘AI-foom’ debate I found the thought of worrying what people will think of your calibration a little amusing. :)
One of Robin’s comments made me think about what some of the possible hidden costs of my proposal are. Some have already been mentioned by others, so I’ll just collect them here:
- drive potential users away from the site, because they're not comfortable with the new norm
- cause people to waste time inventing excuses so they can "legitimately" ignore an argument
- cause people to prolong debates beyond what is productive, in order to not be seen as ignoring arguments
Are there any others that I’ve missed?
So, some ideas to minimize the costs:
- make the norm "opt-in", and allow a user to indicate a commitment to it by setting a user preference
- make "I disagree, but no longer wish to continue this conversation" a socially acceptable response. As Nesov pointed out, this unavoidably sends the signal that you don't respect your interlocutor enough to give a proper response, but it also signals that you respect the audience enough to not play the "plausible deniability" game. We as audience should try to make it worthwhile to send this signal.
Again, any other costs, and/or ideas to minimize them?
Leaving off ‘but no longer wish to continue this conversation’ sounds less disrespectful without losing any information content.
One suggestion: A lot of the downsides would be avoided if this norm were reserved for issues that come up a lot with one or more people. Since it gets repeated, it’s not a time drain to settle it once and for all.
What proposal?
I think that this is a great idea. I often find myself ending a debate with someone important and rational without the sense that our disagreement has been made explicit, and without a good reason for why we still disagree.
I suspect that if we imposed a norm on LW that said: every time two people disagree, they have to write down, at the end, why they disagree, we would do better.
Unfortunately that is usually ‘I said it all already and they just don’t get it. They think all this crazy stuff instead.’
Just letting things go allows both to save face. This can increase the quality of discussion because it reduces the need to advocate strongly so you are the clear winner once both sides make their closing statements.
Imposing a norm would add a lot to the effort involved in conversation. Every time you thought about engaging, you’d know you’d risk having to figure out a conclusion. This might or might not be a net win for signal to noise.
Sometimes it takes quite a while to figure out what the actual issues are when new ideas are being explored.
Instead of a norm requiring explicit conclusions, I recommend giving significant credit when they’re achieved.
I have been trying to figure out how to improve online conversations ever since I started (in 1992) to see a lot of potential in online conversations. In parallel, through the writing of Nick Szabo I came to understand a little bit how common law (the system used in the courts in the English-speaking countries) helped our civilization become as wealthy as it has. Well, about 5 or 6 years ago, I decided that judicial systems are the best metaphor or model for what is needed to realize the full potential of online conversations. In particular, I came to believe that a fully satisfactory system to support online conversation would require a complex system of procedural rules and would have paid professional roles similar to the roles of lawyer, judge, court reporter, bailiff and bail bondsman. Some of those occupations would be trained in the application of the complex system of procedural rules and some would provide services to support those who are so trained. In the first sentence of the following quote, Wei Dai is heading in a similar direction:
Somewhat tangential, but do they really? The heyday of common law and privatized law which Nick spends so much time on was well before the 1500s and 1600s, when the Industrial Revolution really took off; property rights and institutions in various eras of China were often just as good, or as poor. This would suggest that they have small effects and aren't all that important in the long run.
(This isn’t my own thought; I’m parroting Gregory Clark’s A Farewell to Alms, where he covers the ‘institutions & laws’ explanation for the Industrial Revolution and rejects it.)
By “privatized law”, you probably mean jurisdiction as property, which of course is no more in England or in any other country. But on his blog, he spends quite a bit of time explaining other aspects of common law which continue to apply to this day.
This does not follow. Even if (as you suggest) property rights were “as good” in China as in England, there are many other features of a judicial system not captured by the phrase “country X had better property rights than country Y” that might have made a big difference. For example, starting around the time of the Glorious Revolution IIRC the English courts insisted that even the King of England had to pay his debts, as described in one of Nick’s blog posts, and I severely doubt that any court in China ever dared to rule against the Emperor of China in that way.
I have not read A Farewell to Alms. Is not his thesis that the unparalleled economic success of England (and certain spinoffs from England, like the American colonies) in the 18th and 19th Centuries came from the fact that the English people had been subjected for a longer time than any other nation's people to selection pressures for capital accumulation and success in capitalist enterprises in general? If not institutional differences, what, if anything, does Clark say created the earlier and longer selection pressures? Is it just that England, being an island, was invaded less often than territories on the Continent?
Anyway, to this day, the English-speaking countries, which, with the exception of India (which you can almost regard as a medium-sized English-speaking country inside a much larger country) are the only countries using common-law judicial systems, are better at creating wealth than any other countries, even thoroughly Westernized ones like France. I am of course not claiming that India is better at creating wealth than non-English-speaking Western countries. Average IQ is drastically lower in India than in any European country, and no choice of institutions could have conceivably overcome that handicap to economic performance. (Moreover, India’s judicial system departs from the English model in some ways, e.g., they have done away with juries.)
Correlation does not always entail causation, but when a correlation between economic performance and whether the country used the English model for its judicial system persists for centuries, well, that's evidence for causation in my book. Note that the scientific knowledge created by Newton, Hooke, Darwin, etc., was readily adopted by other nations (as scientific knowledge created by other nations was of course readily adopted by England), and the rest of the world was eager to adopt the methods of the industrial revolution (Germany being only 30 years behind England in industrialization, IIRC), but to this day, in France for example, what the French call "the Anglo-Saxon model" for governance and the economy is considered too harsh and inhumane for adoption in France. The only other feature of English society whose adoption is opposed strenuously in France is the English language, and I severely doubt that the English language is the cause of the superior economic performance of the English-speaking countries relative to France.
I readily concede that everything I wrote above is guesswork and could very easily be wrong. But given how important economic performance is (and given the potential for insights into the causes of superior economic performance to transfer to other domains of human effort, like creating better online conversations), guesswork is probably worth communicating, if the communication can be done in an honest, suitably hedged manner like I am trying to do here. It does not take much reading about the differences among judicial systems, and it does not take much experience with the operation of an actual judicial system (particularly if one manages or operates a business), to come to believe that the characteristics of the judicial system tend to have a large effect on the economic performance of the community it serves.
A major thesis of Clark is that China at one time or another apparently had every condition or feature advanced to explain the sui generis Industrial Revolution, or that the advanced explanation fails on other grounds.
While neither you nor I can really speak to how much the Emperors respected debts in general, respecting debts and obligations is definitely in keeping with Confucian thought, and it's not as if English kings were all that respectful of their debts, even in those early times when the king was weakest:
Considering China’s long history of commercial expansion and innovation and great merchant fortunes, with financial instruments and arrangements not exceeded until the 1500s in Europe, if even then, it’s hard to think that China really suffered much from arbitrary and unjust laws.
Even including the Confucian countries like China & South Korea & Japan? Keep in mind that before the Lost Decade, Japan had a higher per capita GDP than France (by almost $1000). And then there are other non-English-controlled countries like Argentina:
The classic work on how property rights lead to wealth is still Adam Smith's The Wealth of Nations, written at the start of the industrial revolution and in the middle of an agricultural revolution. England went from a workforce of over 80% agricultural workers to only 30% by 1800, freeing up labor for the industrial revolution; this occurred largely through capital investments in land improvements, for example draining, marl, and lime (http://www.bahs.org.uk/26n1a1.pdf). Smith captures well the legal changes going on that he saw as encouraging capital investment, such as the decline of the guilds, the replacement of primogeniture and the complex system of tenancies in land with alienable fee simple ownership, and the resultant enclosure movement, in which commons were replaced by single-proprietor control of land.
Roughly speaking, Japan and the rest of East Asia converted to Roman law (the law of Western Europe outside of England) between the mid 19th (Japan) and mid 20th centuries. The process Smith describes was also a partial Romanization of English law into what we now know as the modern common law. So all countries to successfully industrialize have done so under variations of the Roman or English common law, and the English common law itself borrowed quite a bit from the Roman substantive law. (Contrast to Roman procedural law, which is awful, but that’s another story).
The interaction of the decline of political property rights with the industrial revolution is complicated. On the one hand, political corporations such as the East India Company and the West Indies and American colonies were very important to the British economy at that time. Overseas trade provided timber, cotton, and many other industrial inputs. On the other hand, the decline of political property rights in land led to the alienable ownership and the decline of the guilds that Smith and the new capitalists championed.
I particularly commend Book 3 Ch. 3 and Book 4 Ch. 7 of Wealth of Nations which cover much of this, albeit from Smith’s Romanist view. Sadly, most people never get past the famous Book 1.
East Asian institutions are hard to compare, first because their population may have an IQ advantage that makes up for institutional handicaps, and second because we don’t really know what they were: at least here in the West our knowledge of their old legal systems is extremely poor, and they underwent radical Westernization in the late 19th and 20th centuries. Clark’s genetic theory can’t explain why Britain declined so rapidly after about 1870 from being the leader in industrialization and a globe-straddling empire to being just another of dozens of medium-sized industrialized countries today. Political and legal developments, in particular the Reform movement, can, but that is a topic for another day.
Apart from a few very useful graphs, Clark's work is rather poor and his theories are silly. He needs to learn far more about both evolution and law to form useful theories in those areas. See http://unenumerated.blogspot.com/2007/09/institutional-changes-precedent-to.html and http://unenumerated.blogspot.com/2007/08/why-industrial-revolution.html.
Those are interesting links & comments, thanks!
gwern, do you have reference(s) for Chinese “financial instruments and arrangements not exceeded until the 1500s in Europe”? Thanks in advance.
Not up to your scholarly level, I don’t think. I’m largely going on the reading/research I did using De Roover and others to write http://en.wikipedia.org/wiki/Medici_Bank , where I was struck by the wretched subterfuges that merchants had to resort to and the general lack of sophistication, which struck me as quite different from Chinese systems with genuine fiat currency, undisguised interest, and general sophistication (there may’ve been Chinese insurance in there too, but I’ve forgotten any details of that).
Well, I've read a paper that supports a different perspective: usury laws were historically circumvented in the West and Middle East through clever use of (what we now call) the Put-Call Parity Theorem: any bonds that were issued were converted into a combination of puts, calls, and possibly rental contracts. This retained the substance of an interest-bearing loan, but without any explicit "interest payment". While the law might have been sophisticated, the resulting use of derivatives contracts was not.
The paper discusses the origin of mortgages in medieval England and the Middle East. It’s been a while since I read it, so I can’t summarize it, but I was shocked by their rather early trading of derivatives and options.
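For readers unfamiliar with the parity argument, here is a minimal sketch in modern textbook notation (the medieval contracts the paper describes would not have been written this way):

```latex
C - P \;=\; S - K e^{-rT}
\qquad\Longrightarrow\qquad
S + P - C \;=\; K e^{-rT}
```

Buying the asset for S, buying a put struck at K, and writing a call struck at K yields a package that pays exactly K at expiry no matter where the asset ends up, so whoever buys the package has effectively lent money at interest: the difference between K and the package's price today is the disguised interest, hidden in the option premiums rather than stated explicitly.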
If it's any help, interest (or usury) was only legalised in England in 1571, up to a rate of 10%.
Citation: Praise and Paradox: Merchants and Craftsmen in Elizabethan Popular Literature, Laura Caroline Stevenson.
Have you had a look at email-mediated Nomic-type games, such as Agora Nomic? For your purposes they're a relevant and interesting blend of judicial system and online discussion.
What I have been after is a way to improve serious conversations, among, e.g., scientists or administrators. In contrast, unless I am very severely mistaken, Nomic players are mostly out to have fun, particularly, the same kind of fun that a person gets by watching a TV show, like Desperate Housewives or Seinfeld, where every character is always trying to one-up every other character.
Would you also advise legal scholars to learn about Nomic to improve real-world, high-stakes judicial systems? I would not. I think Nomic is almost certainly irrelevant to that purpose—unless perhaps your goal is to understand the natural human capacity to get off on status competitions and political and legalistic manipulations and trickery, in order to discourage lawyers or judges from getting off on it.
One potential response to what I just wrote is that if we insist that no lawyer or judge derive pleasure from status competitions or manipulations and trickery then our courts would not have enough lawyers and judges even though we pay them well. My response to that is that all I meant is that it would be counterproductive to introduce more of that natural human motivation into our courts, and the need to do so is the only circumstance I can imagine in which Nomic would actually contribute positively to a real-world, high-stakes judicial system.
Suppose someone has gone through the trouble and expense of getting a degree in law and passing the bar exam, but cannot stay motivated to practice law because he or she does not derive sufficient pleasure from it (and he or she cannot become a professor of law). Well, that is a circumstance in which Nomic might have something quite worthwhile to teach that individual lawyer (namely, how to take pleasure in legal maneuvering and legal competition). But that is different from Nomic’s having something to contribute to the judicial system as a whole; the judicial system as a whole is not experiencing any problems in filling its occupational roles with motivated workers.
Moreover, if you are at all interested in becoming more “epistemically” rational than the average person, it is a bad idea to learn how to take pleasure in status competitions or political or legal maneuverings.
Peter Suber, the inventor of Nomic, is a legal scholar.
...and Nomic itself was published as an appendix to his book on legal philosophy, The Paradox of Self-Amendment: A Study of Law, Logic, Omnipotence, and Change, although a preview of its rules appeared in one of Douglas Hofstadter’s Metamagical Themas columns.
That is not a refutation of anything I wrote. Lots of things invented by legal scholars turn out to have no positive effect on judicial systems.
I think about this kind of issue a lot myself. My conclusion is along the lines of Hanson’s X isn’t about X—debating isn’t really about discovering truth, for most people in most forums (LWers might be able to do better).
Indeed, it’s not even clear to me that debate ever works. In science, debate is useful mostly to clarify positions, the meaning of terms, and the points of disagreement. It is never relied upon to actually obtain truth—that’s what experiments are for.
One problem that debates inevitably encounter is the failure to distinguish questions of “is” from questions of “ought”. We can potentially come to an agreement about answers to is-questions. It will be harder to agree about ought-questions.
Almost all debates involve mixtures of is-questions and ought-questions. Ideally, we would lay out a system of terminal values (answers to basic ought-questions), then ask a bunch of is-questions, and figure out what policy leads to the best fulfillment of the values. Of course, people never do this, either because the answers to the is-questions can’t be reliably obtained, or because debate isn’t about finding truth.
To get better answers to policy questions, we should do something similar to what Wall St. types do when they need to evaluate a security. Build a big spreadsheet that expresses the relationship between the is-Q and ought-Q values, plug in values for the ought-Q numbers and estimates for the is-Q numbers, and see what comes out. The model should also be tested under various assumptions for the is-Q numbers.
Every rationalist should be willing to revise his support of any policy, if new information about the is-Q numbers appears. Furthermore, he should be able to express what kind of new is-Q information would lead him to revise his policy support. For example, if you support international treaties to limit CO2 emissions, you should be able to say under what conditions you would reverse your support (the same is true if you don't support such treaties, of course).
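A toy version of that "spreadsheet" approach, with factor names and numbers invented purely for illustration, might look like the sketch below; the sensitivity loop at the end is one way to express what new is-information would reverse one's support.

```python
# Toy sketch of the "policy spreadsheet" idea: combine is-estimates (facts)
# with ought-weights (values), then test sensitivity to the factual inputs.
# All factor names and numbers are invented for illustration.

def policy_score(is_estimates, ought_weights):
    """Weighted sum of factual estimates by value weights."""
    return sum(ought_weights[k] * is_estimates[k] for k in ought_weights)

is_estimates = {           # answers to is-questions (guesses, to be revised)
    "emissions_avoided": 0.6,
    "economic_cost": 0.8,
}
ought_weights = {          # answers to ought-questions (terminal values)
    "emissions_avoided": +1.0,
    "economic_cost": -0.5,
}

support = policy_score(is_estimates, ought_weights) > 0
print("support policy:", support)

# Sensitivity test: which is-estimate, if revised, would flip the conclusion?
for factor in is_estimates:
    for delta in (-0.3, +0.3):
        revised = dict(is_estimates, **{factor: is_estimates[factor] + delta})
        if (policy_score(revised, ought_weights) > 0) != support:
            print(f"revising {factor} by {delta:+.1f} would change my support")
```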
I find that people sometimes misread my intent (perhaps I am not clear enough) or use words in a different way to me. So continuing the discussion wouldn’t increase their knowledge of the world apart from the little bit that refers to me, which doesn’t seem worthwhile.
I feel a forum where no argument is left unresolved would work better if there were a way of splitting people into groups with different viewpoints. Then anyone from that group could make arguments on its behalf.
On the other hand, I often find that it is because people have assumed I am arguing on behalf of a specific group that they don’t understand me. I quickly lose patience in such situations.
I was thinking more for a way to self-identify as supporting certain statements.
Like with what wedrifid said, that sounds like a recipe for tribalism (“refute the Xers!”) and misclassification of others’ beliefs.
If someone else addresses an argument the same way you would have, then just say so if you want to make clear that that’s your position.
Doesn’t the group dynamics depend on how the people are split up? I’m thinking of something along the lines of agreeing with a comment and the list of people that agree with a comment being considered one team. In comments down that thread anyone on the top-level post gets associated with posts by that team, although they could disassociate themselves from certain messages. Groups would be tenuous and transient. You could even allow wiki like editing of comments by members of the same group.
I’m interested in forms of discussion that will scale up (with more than the tens of active participants we have at the moment), and also leave something useful for people to read later on.
I would like to pronounce to one and all that I am now and always will be my own team. Other people or teams thereof may be considered allies of my team and warrant reciprocation when they agree with me or, while disagreeing, do so respectfully.
My team hereby places itself in opposition to all attempts to formalize party systems or official groupings transient or otherwise. Human instincts provide this by default and such dynamics are evident and healthily so on LessWrong.com already.
The system I proposed wasn’t for lesswrong. It was for the hypothetical place where no argument can be left unanswered.
You don't like teams, I get it. Feel free to suggest a system that encourages people to counter all points they don't agree with, with the following properties:
1) Doesn’t create too much noise
2) Allows people to see all current open points that they may be interested in countering, i.e., serves the equivalent purpose of replies in lesswrong, but also allows people to see other people's relevant orphaned arguments.
3) Doesn’t require too much time.
I quite like your point, now that you've put it minus the rigidity. Your argument at first looked like what SilasBarta said, but I agree with you on trying to make this debating smarter.
There is certainly scope to improve the way comments are structured at lesswrong. Maybe showing who voted up a comment would be a good start. Then we can move to associating certain messages with a group of people who agree with that point. And yes, it is important to maintain flexibility while making these changes.
Given that we’re not supposed to be using voting to express agreement or disagreement, I propose that a second voting system should be put in place if we go this route.
While I have encouraged the use of voting mechanisms in a more nuanced manner than mere agreement, I am not willing to accept shame or guilt for doing so on occasion, mostly because to do so would be a recipe for bitterness and contempt. Human instincts for hypocrisy and self-deception being what they are, people will vote based on disagreement even if they happen to cry foul or sulk when others reciprocate.
For my part, a vote means "I would like to see more posts similar to this one". This is not quite the Nash equilibrium of "this vote best serves my social agenda", which does come close to being the most useful model of the dynamics at times. Nevertheless, "I want more of this" is a voting attitude which can be maintained with little frustration and serves to enable precisely the mechanism that karma systems are intended for.
My tangent aside, I totally agree with your conclusion and for more or less the same reason.
No disagreement: We’re strongly enough wired, I think, to use a simple voting system like this one in a particular way that a strong but unenforceable social norm against doing so won’t do anything but cause unnecessary emotional turmoil. On the other hand, the existing weak norm is useful and relevant, which is why I tried to evoke it—obviously not clearly enough, though.
You can just view a person's profile and see their past posts. That way you can quickly check which side of a previous argument they argued.
Given that people can change their minds over time, and rationalists especially are likely to do so as data and evidence mounts, having a readily visible viewpoint-signature strikes me as a bad idea.
Offline, of course, this happens all the time. For instance, if women are most likely to support X, then a woman entering a debate is probably supporting X.
I’m not sure what you are responding to here. I don’t want to give people a label of non-foom believer or whatever. That would be one way of splitting people into groups but not the way I was thinking of.
Brilliant post Wei.
Historical examination of scientific progress shows much less of a gradual ascent towards better understanding upon the presentation of a superior argument (Karl Popper's Logic of Scientific Discovery) and much more an irrational insistence on a set of assumptions as unquestionable dogma, until the dam finally bursts under the enormous pressures that keep building (Thomas Kuhn's Structure of Scientific Revolutions).
Really? I thought it consisted mostly of elites retorting straw men and ignoring any strong arguments of those lower in status until such time as they died or retired. The lower-status engage in sound arguments while biding their time until it is their chance to do the ignoring, and in so doing iterate the level of ignorance one generation forward.
You will find that this is pretty much what Kuhn says.
This is childish. You propose a norm, it doesn’t make much of a splash, and so you adopt an incredulous persona to try to make us invoke an absurdity heuristic on our lack of norm-having? I liked the original idea—it’d be interesting, at least—but this is silly.
It was meant to be a light-hearted attempt to fight the Status Quo Institution Bias. Since you liked the original idea but don’t like the way I went about trying to get it implemented, do you have any other suggestions?
You could have just started using it yourself and upvoted anyone who copied you. Or tried to draw out specific participants in specific threads—“I think an endcap to this conversation from you would add a lot of value, if you could just summarize...”
I did try to do it myself, for example here, here, here and here. But nobody seems to have noticed or imitated me, or if they did, I didn’t notice them.
Trying to practice this to a greater extent, without an established group norm, would have signaled things I prefer not to signal, I think.
I do this sort of thing all the time, so shouldn’t you open fire on me before targeting Dai?
To be fair, I think I experienced a similar irritation to Alicorn's over certain cutesy representations of our future. But I also feel that way every time Asimov makes his characters curse "Gee, Space!", so you're in good company.
My favorite part of the post is the hilarious suggestion that he should challenge the ignorer to a duel to the death…
I figured I’d already complained about you enough for a month, back when I was criticizing your fiction. (I also think that being the de facto guy in charge here gives you certain special leeway in the proposing norms department.)
Ignore.