A norm either is unnecessary when there happens to be agreement, or exerts pressure to act against one’s better judgement.
I think you’re missing the value of having norms at the entry points to new subcultures.
LessWrong is not quite as clearly bounded as a martial arts academy; people do not agree to enter it knowing that there will be things they have to do (like wearing a uniform, bowing, etc.).
And yet it is a nonstandard subculture; its members genuinely want it to be different from being on the rest of the internet.
Norms smooth that transition—they help someone who’s using better-judgment-calibrated-to-the-broader-internet to learn that better-judgment-calibrated-to-here looks different.
Norms smooth that transition—they help [...] to learn
When you come to know something because there is a norm to, instead of by happening to be in the appropriate frame of mind to get convinced by arguments, you either broke your existing cognition, or learned to obscure it, perhaps even from yourself when the norm is powerful enough.
members genuinely want it to be different from being on the rest of the internet
I want people here to be allowed honesty and integrity, not get glared into cooperative whatever. This has costs of its own.

There needs to be some sort of selection effect that keeps things good; my point is that cultivation of norms is a step in the wrong direction. Especially norms about things more meaningful than a uniform, things that interact with how people think, with reasons for thinking one way or another.

It’s hard to avoid goodharting and deceptive alignment. Explicitly optimizing for obviously flawed proxies is inherently dangerous. Norms take on a life of their own; telling them to stop when appropriate doesn’t work very well. They only truly spare those strong enough to see their true nature, but even that is not a prerequisite for temporarily wielding them to good effect.
When you come to know something because there is a norm to, instead of by happening to be in the appropriate frame of mind to get convinced by arguments, you either broke your existing cognition, or learned to obscure it, perhaps even from yourself when the norm is powerful enough.
This is stated as an absolute, when it is not an absolute. You might want to take a glance at the precursor essay Sapir-Whorf for Rationalists, and separately consider that not everyone’s mind works the way you’re confidently implying All Minds Work.
I want people here to be allowed honesty and integrity, not get glared into cooperative whatever.
You’re strawmanning norms quite explicitly, here, as if “glared into cooperative whatever” is at all a reasonable description of what healthy norms look like. You seem to have an unstated premise of, like, “that whole section where Duncan talked about what a guideline looks like was a lie,” or something.
my point is that cultivation of norms is a step in the wrong direction
I hear that that’s your position, but so far I think you have failed to argue for that position except by strawmanning what I’m saying and rose-ifying what you’re saying.
It’s hard to avoid goodharting and deceptive alignment. Explicitly optimizing for obviously flawed proxies is inherently dangerous.
Agree; I literally created the CFAR class on Goodharting. Explicitly optimizing for obviously flawed proxies is specifically recommended against in this post.
Norms take on a life of their own; telling them to stop when appropriate doesn’t work very well.
This is the closest thing to a-point-I’d-like-to-roll-around-and-discuss-with-you-and-others in your comments above, but I’m going to be loath to enter such a discussion until I feel like my points are not going to be rounded off to the dumbest possible neighbor of what I’m actually trying to say.