This seems like a potentially downstream issue of rationalist/EA organizations ignoring a few Chesterton's Fences that are really important, and one of those fences is not having dating/romantic relationships in the employment context when there is any power asymmetry. These can easily lead to abuse or worse.
In general, one impression I get from a lot of rationalist/EA organizations is that there are very few boundaries between work, dating/romantic life, and (depending on the organization) living arrangements, and the boundaries they do have are either much too illegible and high-context, especially in terms of social context, and/or way too porous, in that they can be easily violated.
Yes, there are no preformed Cartesian boundaries that we can use, but that doesn’t stop us from at least forming approximate boundaries and enforcing them. Legible norms are never fun and have their costs, but I think the benefits of legible norms, especially epistemically legible norms around dating/romantic relationships in an employment context, are very, very high value, so much so that the downsides aren’t enough to make enforcing them bad overall. I’d say somewhat similar things about legible norms on living situations, pay, etc.
Seems like some rationalists have a standard solution to Chesterton’s Fence: “Yes, I absolutely understand why the fence is there. It was built for stupid people. Since I am smart, the same rules obviously do not apply to me.”
And when later something bad happens (quite predictably, the outside view would say), the lesson they take seems to be: “Well, apparently those people were not smart enough or didn’t do their research properly. Unlike me. So this piece of evidence does not apply to me.”
*
I actually often agree with the first part. It’s just that it is easy to overestimate one’s own smartness. Especially because smartness isn’t a single thing, and people can be e.g. very smart at math, and maybe average (i.e. not even stupid, just not exceptionally smart either) in human relations. Also, collective wisdom can be aware of rare but highly negative outcomes, which seem unlikely to you because they are, in fact, rare.
What makes my blood boil is the second part. If you can’t predict in advance who will turn out “apparently not that smart”, and you only say it in hindsight after the bad thing has already happened, it means you are just making excuses to ignore the evidence. Even if, hypothetically speaking, you are the smartest person and the rules truly do not apply to you, it is still highly irresponsible to promote this behavior among rationalists in general (because you know that a fraction of them will later turn out to be “not that smart” and will get hurt, even if that fraction may not include you).
promote this behavior among rationalists in general
What are you imagining when you say “promote this behavior”? Writing LessWrong posts in favor? Choosing to live that way yourself? Privately recommending that people do it? Not commenting when other people say they’re planning to do something that violates a Chesterton’s Fence?
The example I mostly had in mind was experimenting with drugs. I think there were no posts on LW in favor of this, but it gets a lot of defense in comments. For instance, when someone mentions in a debate that they know rationalists who have overdosed, or who went crazy after experimenting with drugs, someone else always publicly objects to collectively drawing the lesson.
If people do stupid things in private, that can’t (and arguably shouldn’t) be prevented.