Perhaps we have this backwards?
If there is something intrinsically valuable about controversy (and I’m not really sure that there is, but I’m willing to accept the premise for the sake of discussion), and we’re not getting the optimal level of controversy on the topics we normally discuss (again, not sure I agree, but stipulated), then perhaps what we should be doing is not looking for “more and better contrarians” who will disagree with us on the stuff we have consensus on, but rather starting to discuss more difficult topics where there is less consensus.
One problem is, of course, that some of us are already worried that LW is too weird-sounding and not sufficiently palatable to the mainstream, for example, and would probably be made uncomfortable if we explored more controversial stuff… it would feel too much like going to school in a clown suit. And moving from areas of strength to areas of weakness is always a little scary, and some of us will resist the transition simply for that reason. And there are many more obstacles besides.
Still, if you can make a case for the value of controversy, you might find enough of us convinced by that case to make that transition.
Here’s a case for the value of controversy.
LessWrong orthodoxy includes a large number of propositions (over a hundred posts in just the core sequences, at least one thesis per post)
The deductions that lead to each claim are largely independent (if post B were an obvious corollary of post A, it would have saved the writer’s and readers’ time not to write it)
Reasoning is error-prone, especially when not formalized (this is itself a point made in the sequences; if it’s wrong, then the sequences contain an error, q.e.d.)
Even if each deduction is overwhelmingly likely (let’s say 99%) to be correct, it would still be likely (63% in this case; see the quick check after this list) that at least one out of a hundred is incorrect
Because these are deductive chains of reasoning (they’re “the sequences”, not just “the set”), one false deduction can invalidate any number of conclusions which follow from it. The Principle of Explosion has been defeating brilliant people for millennia.
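As a quick check of that 63% figure, here is a back-of-the-envelope calculation, assuming the hundred theses are roughly independent and each is correct with probability 0.99:

P(at least one error) = 1 − 0.99^100 ≈ 1 − 0.366 ≈ 0.63

So even at 99% confidence per thesis, there is roughly a two-in-three chance that at least one thesis is wrong.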
In other words, even if you believe that each item of LessWrong consensus is almost certain to be correct, you should still be doubtful that every item of LessWrong consensus is likely to be correct. And if there are significant errors, then how else will they be found and publicized other than via a controversial discussion?
I agree that there are errors in the “LW consensus.”
I agree that a cost-effective mechanism for identifying those errors would be a valuable thing.
By your estimation, how many controversial discussions have occurred on LW in the last year?
How many of them have contributed to identifying any of those errors?
Those are both good questions (as is the implicit point about cost-effectiveness or lack thereof); I’m afraid I’m not a heavy enough reader here to quickly give accurate answers.
I’m not looking to you for accurate answers, I’m trying to understand the model you’re operating on.
If you tell me you think there have been a few controversial (in the sense you describe above) discussions and you think they’ve contributed to identifying errors, then it makes sense to me that you think having more such discussions is valuable. I may disagree, but it’s clear to me what we’re disagreeing about.
If you tell me you don’t think we’ve had any such discussions, I can sort of understand your believing that they would be valuable if we had them, but I would also conclude that I don’t quite know what sorts of discussions you’re talking about.
If you tell me you think we’ve had a few such discussions but they haven’t contributed anything, then I would be very confused and want to revisit my understanding of why you believe what you believe.
Etc.
Controversial doesn’t necessarily mean weird-sounding. For example, we could talk more about medicine, an area with a great deal of disagreement, without seeming like clown-suit-wearing crazies. Mainstream topics should be more than enough to fill the controversy quota.
(nods) Fair point.
This wouldn’t be an issue, except that it’s entirely unclear to me that LessWrong is making much in the way of progress of whatever sort. There are the meetup groups, which sometimes look good and sometimes sputter.
But perhaps I’m wrong and there’s a list of things that count as reasonable evidence of such progress.
See Wei Dai’s comment here—he doesn’t value controversy qua controversy.
Mm.
Fair enough.
As I’ve said elsewhere, I’m not convinced that the goal of having correct beliefs on the topics addressed in the Sequences will be cost-effectively approached by introducing new contrarians to LW.
It would likely be more cost-effective to identify some thinkers we collectively esteem and hire them to perform a “peer review” on those topics.
That said, I’m not sure I see what the point of that would be either, since it’s not as though EY is going to edit the Sequences, whatever the reviewers say.
It might be even more cost-effective to hire reviewers for his book before he publishes it.