Yeah. I’ve got a couple brilliant and highly capable friends/allies/advisors who also STRONGLY prefer opportunity framings over obligation framings. I think that’s one of the things where the pendulum has overcorrected, though—I think the rationality community as a whole is rather correctly allergic to obligation framings, because of bad experiences with badly made obligations in the past, but I think we’re missing out on an important piece of the puzzle. You can run a successful thing that’s, like, “we’ll do this every week for twelve weeks, show up as much as you like!” and you can run a successful thing that’s, like, “we’ll do this if we get enough people to commit for twelve weeks!” and I think the two styles overlap but there’s a LOT of non-overlap, and the Bay Area rationalists are missing half of that.
“we’ll do this if we get enough people to commit for twelve weeks!”
I actually totally buy this. There are some things where you just have to commit, and accept the obligations that come with that.
My hesitation primarily comes from the fact that the code of conduct seems intended to be pervasive. It even has requirements that happen entirely inside your own mind. These seem like bad features for an obligation-based system.
My model is that obligation-based systems work best when they’re concrete and specific, and limited to specific times and circumstances. “Commit to performing specified activities twice a week for twelve weeks” seems good, while “never have a mental lapse of type x” seems bad.
That makes sense, yeah. I’m hoping the cure comes both from the culture-of-gentleness we referenced above, and the above-board “Yep, we’re trying to restructure our thinking here” and people choosing intelligently whether to opt in or opt out.
Good place to keep an eye out for problems, though. Yellow flag.
Edit: also, it’s fair to note that the bits that go on inside someone’s head often aren’t so much “you have to think X” as they are “you can’t act on ~X if that’s what you’re thinking.” Like, the agreement that, however frustrated you might FEEL about the fact that people were keeping you up, you’re in a social contract not to VENT at them, if you didn’t first ask them to stop. Similarly, maybe you don’t have the emotional resources to take the outside view/calm down when triggered, but you’re aware that everyone else will act like you should, and that your socially-accepted options are somewhat constrained. You can still do what feels right in the moment, but it’s not endorsed on a broad scale, and may cost.
it’s fair to note that the bits that go on inside someone’s head often aren’t so much “you have to think X” as they are “you can’t act on ~X if that’s what you’re thinking.”
This framing does bother me less, so that is a fair clarification. However, I don’t think it applies to some of them, particularly:
will not form negative models of other Dragons without giving those Dragons a chance to hear about and interact with them
True. Updated the wording on that one to reflect the real causality (notice negative model --> share it); will look at the others with this lens again soon. Thanks.