the guidelines are descriptive of good discourse that already exists; here I am attempting to convert them into prescriptions
There is no value in framing good arguments as prescriptive, and there is delayed damage in cultivating prescriptive framings of arguments. A norm is either unnecessary when there happens to be agreement, or exerts pressure to act against one’s better judgement. The worst possible reason for that agreement to already be there is a norm that encourages it.
given the goals of clear thinking, clear communication, and collaborative truth-seeking, the burden of proof is on a given guideline violation to justify itself
When someone already generally believes something, changing their mind requires some sort of argument; it probably won’t happen for no reason at all. That is the only burden of proof there ever is.

Thus a believer in a guideline sometimes won’t be convinced by a guideline violation that occurs without explanation. But a person violating a guideline isn’t necessarily working on that argument, leaving the reasons for the guideline violation a mystery.
A norm is either unnecessary when there happens to be agreement, or exerts pressure to act against one’s better judgement.
I think you’re missing the value of having norms at the entry points to new subcultures.
LessWrong is not quite as clearly bounded as a martial arts academy; people do not agree to enter it knowing that there will be things they have to do (like wearing a uniform, bowing, etc.).
And yet it is a nonstandard subculture; its members genuinely want it to be different from being on the rest of the internet.
Norms smooth that transition—they help someone who’s using better-judgment-calibrated-to-the-broader-internet to learn that better-judgment-calibrated-to-here looks different.
Norms smooth that transition—they help [...] to learn
When you come to know something because there is a norm to, instead of by happening to be in the appropriate frame of mind to get convinced by arguments, you either broke your existing cognition, or learned to obscure it, perhaps even from yourself when the norm is powerful enough.
members genuinely want it to be different from being on the rest of the internet
I want people here to be allowed honesty and integrity, not get glared into cooperative whatever. This has costs of its own.
There needs to be some sort of selection effect that keeps things good; my point is that cultivation of norms is a step in the wrong direction. Especially norms about things more meaningful than a uniform, things that interact with how people think, with reasons for thinking one way or another.
It’s hard to avoid goodharting and deceptive alignment. Explicitly optimizing for obviously flawed proxies is inherently dangerous. Norms take on a life of their own; telling them to stop when appropriate doesn’t work very well. They only truly spare those strong enough to see their true nature, but even that is not a prerequisite for temporarily wielding them to good effect.
When you come to know something because there is a norm to, instead of by happening to be in the appropriate frame of mind to get convinced by arguments, you either broke your existing cognition, or learned to obscure it, perhaps even from yourself when the norm is powerful enough.
This is stated as an absolute, when it is not an absolute. You might want to take a glance at the precursor essay Sapir-Whorf for Rationalists, and separately consider that not everyone’s mind works the way you’re confidently implying All Minds Work.
I want people here to be allowed honesty and integrity, not get glared into cooperative whatever.
You’re strawmanning norms quite explicitly, here, as if “glared into cooperative whatever” is at all a reasonable description of what healthy norms look like. You seem to have an unstated premise of, like, “that whole section where Duncan talked about what a guideline looks like was a lie,” or something.
my point is that cultivation of norms is a step in the wrong direction
I hear that that’s your position, but so far I think you have failed to argue for that position except by strawmanning what I’m saying and rose-ifying what you’re saying.
It’s hard to avoid goodharting and deceptive alignment. Explicitly optimizing for obviously flawed proxies is inherently dangerous.
Agree; I literally created the CFAR class on Goodharting. Explicitly optimizing for obviously flawed proxies is specifically recommended against in this post.
Norms take on a life of their own; telling them to stop when appropriate doesn’t work very well.
This is the closest thing to a-point-I’d-like-to-roll-around-and-discuss-with-you-and-others in your comments above, but I’m going to be loath to enter such a discussion until I feel like my points are not going to be rounded off to the dumbest possible neighbor of what I’m actually trying to say.