So, sorry in advance if I’m reading way too much into a casual choice of words, but—this is an incredibly ominous metaphor, right? (I’m definitely not blaming you for anything, because I’ve also used it in just this context, and it took me a while to notice how incredibly ominous it is.)
Maybe my rationality realism is showing, but I thought the premise and promise of the website is that there are laws of systematically correct reasoning as objective as mathematics—different mathematicians from different cultures might have different interests (like analysis or algebra or combinatorics) or be accustomed to different notations, but ultimately, they’re all on the same cooperative quest for Truth—even if that cooperative process may occasionally involve some amount of yelling and crying.
(“And being universals,” said the Lady 3rd, “they bear no distinguishing evidence of their origin.”)
The Overton window concept describes a process of social-pressure mind control, not rational deliberation: an idea is said to be “outside the Overton window” not on account of its being wrong, but on account of its being unacceptably unpopular. If a mathematician were to describe a debate with their colleagues about mathematics (as opposed to some dumb non-math thing like tenure or teaching duties) as an “Overton-window fight”, I would be pretty worried about the culture of that mathematics department, wouldn’t you?!
concepts that multiple longterm community members actually want to make bids for community attention/endorsement of
Again, sorry in advance if I’m reading way too much into a casual choice of words, but—paying attention to what “longterm community members” want is an instance of Goodhart’s law, isn’t it? (Some would say of the regressional variety, but I think this case is actually the adversarial type.)
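(For concreteness, here’s a minimal, purely illustrative Python sketch of the distinction. In the regressional variety, the proxy is the target plus noise, so selecting the highest proxy score predictably lands on lucky noise, and the winner’s true value falls short. In the adversarial variety, the measured parties know the selection rule and inflate the proxy on purpose. The variances and the “cost of gaming” below are made-up assumptions, not a model of any actual community.)

```python
import random

random.seed(0)
N = 10_000

# Regressional Goodhart: proxy = true value + independent noise.
# The argmax of the proxy tends to land on items whose noise term
# happened to be large, so the winner's realized value regresses.
true_value = [random.gauss(0, 1) for _ in range(N)]
proxy = [v + random.gauss(0, 1) for v in true_value]

winner = max(range(N), key=lambda i: proxy[i])
print(f"winner's proxy score: {proxy[winner]:+.2f}")
# With equal signal and noise variances, the winner's expected
# true value is only about half of its proxy score:
print(f"winner's true value:  {true_value[winner]:+.2f}")

# Adversarial Goodhart differs in kind: participants optimize the
# proxy directly. Suppose (hypothetically) half the participants
# game the proxy, buying +1.0 of proxy at a cost of 0.2 of value:
games = [random.random() < 0.5 for _ in range(N)]
gamed_proxy = [p + (1.0 if g else 0.0) for p, g in zip(proxy, games)]
gamed_value = [v - (0.2 if g else 0.0) for v, g in zip(true_value, games)]

winner2 = max(range(N), key=lambda i: gamed_proxy[i])
print(f"did the gamed winner game? {games[winner2]}")  # almost surely True
print(f"gamed winner's true value: {gamed_value[winner2]:+.2f}")
```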
We want concepts that advance the art of human rationality. The hope is that longterm community members are performing the right kind of computation such that “concepts that multiple long-term community members want endorsed” ends up being the same thing as “concepts that advance the art of human rationality”, much as one would hope that “what this calculator outputs when you type in 2 + 3” ends up being the same thing as 2 + 3. If the calculator outputs something other than 5, then the machine really shouldn’t be called a “calculator”—in order to not confuse people, it needs to be either renamed or destroyed. Same thing with a “rationality community.”
The Overton window concept describes a process of social-pressure mind control, not rational deliberation: an idea is said to be “outside the Overton window” not on account of its being wrong, but on account of its being unacceptably unpopular. If a mathematician were to describe a debate with their colleagues about mathematics (as opposed to some dumb non-math thing like tenure or teaching duties) as an “Overton-window fight”, I would be pretty worried about the culture of that mathematics department, wouldn’t you?!
I think it’s ominous if Raemon used the word with that intended meaning, but I’m guessing he didn’t (and most people around here don’t?). When I think “Overton window”, I just think “what is considered reasonable to discuss without it being regarded as weird or extreme, or requiring extreme evidence to overcome a very low prior”, and I think of the term as agnostic about how that got decided. In this sense, our community has an Overton window that definitely includes physics and history, presently really excludes Reiki and astrology, and perhaps has meditation/IFS on the border. I think the process by which we’ve ended up with this window has overall been much better than the one most of broader society uses.
My understanding of Ray’s comments about “concentrating Overton window fights” was that just now is a period when we’d, more than usual, communally debate (using the correct and normative laws of reasoning) ideas that are as yet contentious within the community, and increase consensus about whether they’re good or not, based on their epistemic merits.
...
It’s a separate question what the best way to use the term “Overton window” is, and one on which I don’t have a strong opinion at present.
This is roughly how I intended it. But it’s not a coincidence that the word has the history it does, and that did seem worth reflecting on, at least briefly.
(note: I think this conversation is important, but part of the point of the review is to have a large number of similarly important conversations. I will probably reply a couple more times. My current guess is that my budget for such conversations this month is going to be better spent on the object-level review process, and/or building code that’s “meta-level” to support the object-level process)
My off-the-cuff thought is that I agree with you about the shape of how this is worrisome, but probably disagree about its magnitude.
(But I notice, as I say that, that my brain is compressing magnitude into a range it can easily compare. I.e., it seems quite plausible that the absolute magnitude of how “worrisome” this should be is 100, but my brain has 12 settings for importance, and I’ve already compressed things down in a way optimized for comparing relevant plans and actions. I.e., if the fire alarm is always ringing, there’s not much point in having a fire alarm.)
If the calculator outputs something other than 5, then the machine really shouldn’t be called a “calculator”—in order to not confuse people, it needs to be either renamed or destroyed. Same thing with a “rationality community.”
I think this depends on how the machine compares to other tools that calculate – if there are obviously better tools, you should probably use those. If those tools are strictly better, then the calculator should be abandoned. But if the calculator is currently the most accurate tool for calculating numbers, it probably makes more sense to continue using it (while looking for better tools). You can rename it to “aspiring calculator”, but in practice long names are clunky and hard to use on a day-to-day basis.
Sometimes you don’t actually have a better option than implementing FizzBuzz in TensorFlow, or implementing rationality on mental architecture that’s at least partially optimized for politics.
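(As an aside, to make the reference concrete: below is a minimal sketch in the spirit of Joel Grus’s “Fizz Buzz in Tensorflow” joke, assuming tf.keras is available; the layer size, epoch count, and train/test split are arbitrary illustrative choices. A dense network laboriously learns a function you could write in four lines, and, given enough training, it gets the answers mostly right anyway, which is roughly the situation of running rationality on hardware optimized for politics.)

```python
import numpy as np
import tensorflow as tf

def encode(i, bits=10):
    # Binary-encode the integer as a feature vector.
    return [(i >> d) & 1 for d in range(bits)]

def label(i):
    # 0: print the number, 1: "fizz", 2: "buzz", 3: "fizzbuzz"
    return (i % 3 == 0) + 2 * (i % 5 == 0)

# Train on 101..1023, test on the classic 1..100.
xs = np.array([encode(i) for i in range(101, 1024)])
ys = np.array([label(i) for i in range(101, 1024)])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(xs, ys, epochs=200, verbose=0)

preds = model.predict(np.array([encode(i) for i in range(1, 101)]))
for i, p in zip(range(1, 101), preds.argmax(axis=1)):
    print([str(i), "fizz", "buzz", "fizzbuzz"][p])
```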
There is a certain sense in which this should have you sitting bolt upright in alarm, but, again, a constant fire alarm isn’t very useful.
It’s definitely an instance of Goodhart’s law (which subtype(s) it is probably depends on the particular discussion). The question is: do we actually have better ideas that will more rapidly converge on the closest approximation of the most useful truths?
And in all seriousness: it seems quite likely that there are better versions of the review process out there. The current iteration is one that got roughly 3 weeks of discussion among the LessWrong team and a couple people I reached out to for feedback. I think it’s quite likely we can do better.
I think for this particular year it makes most sense to stick at least to the broad strokes of the plan (switching plans completely every time you have a plausibly better idea seems a recipe for not completing projects at all). But that leaves some wiggle room for implementation details. And if you do have suggestions for broad strokes of plans that are better than the status quo, I’m definitely interested in those for next year.
(This all brings me back to the original comment at the top of this thread: what process for the “Review Phase” of the review do you expect to yield the best results, in aggregate? I think it’s more useful to try to answer that question than to figure out exactly how freaked out to be about the architecture of the LessWrong community’s metacognition.)
(actually, the thing I’m worried about here is that I expect this subthread to be much more enticing than figuring out the best answers to “how should the Review (and Voting) Phases be structured?”, despite the latter being much more actionably useful. And this seems like a concrete instance of “human brains are architected around politics, finding it easier to fight than to build, with ‘Overton-window fight’ being an unfortunately accurate description of what’s going on a lot of the time”)
I agree that focusing on the object-level review process is a much better use of your time than reacting to my perma-panicked cultural commentary. Happy to end this subthread here.
FWIW, I would be quite excited for you to devote thought to the “how to do a good review process?” question, if that’s something you have in your motivation budget.