As I understand it – with my only source being Ben’s post and a couple of comments that I’ve read – Drew is also a cofounder of Nonlinear. Also, this was reported:
Alice and Chloe reported a substantial conflict within the household between Kat and Alice. Alice was polyamorous, and she and Drew entered into a casual romantic relationship. Kat previously had a polyamorous marriage that ended in divorce, and is now monogamously partnered with Emerson. Kat reportedly told Alice that she didn’t mind polyamory “on the other side of the world”, but couldn’t stand it right next to her, and probably either Alice would need to become monogamous or Alice should leave the organization. Alice didn’t become monogamous. Alice reports that Kat became increasingly cold over multiple months, and was very hard to work with. [Footnote: After this, there were further reports of claims of Kat professing her romantic love for Alice, and also precisely opposite reports of Alice professing her romantic love for Kat. I am pretty confused about what happened.]
So, based on what we’re told, there was romantic entanglement between the employers – Drew included – and Alice, and such relationships, even in the best-case scenario, need to be handled with a lot of caution, and this situation seems to be significantly worse than a best-case scenario.
My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)
This seems like it may be a downstream consequence of rationalist/EA organizations ignoring a few really important Chesterton’s Fences. One of those fences is not having dating/romantic relationships in an employment context where there is any power asymmetry; these can easily lead to abuse or worse.
In general, one impression I get from a lot of rationalist/EA organizations is that there are very few boundaries between work, dating/romance, and (depending on the organization) living arrangements, and the boundaries that do exist are either far too illegible and high-context (especially social context) or far too porous, in that they can be easily violated.
Yes, there are no preformed Cartesian boundaries we can use, but that doesn’t stop us from at least forming approximate boundaries and enforcing them. Legible norms are never fun and have real costs, but I think the benefits of legible norms, especially epistemically legible norms around dating and romance in an employment context, are very high value: high enough that the downsides aren’t sufficient to make enforcing them bad overall. I’d say somewhat similar things about legible norms on living situations, pay, etc.
Seems like some rationalists have a standard solution to Chesterton’s Fence: “Yes, I absolutely understand why the fence is there. It was built for stupid people. Since I am smart, the same rules obviously do not apply to me.”
And when later something bad happens (quite predictably, the outside view would say), the lesson they take seems to be: “Well, apparently those people were not smart enough or didn’t do their research properly. Unlike me. So this piece of evidence does not apply to me.”
*
I actually often agree with the first part. It’s just that it is easy to overestimate one’s own smartness, especially because intelligence isn’t a single thing: people can be, e.g., very smart at math and merely average (i.e. not even stupid, just not exceptionally smart either) in human relations. Also, collective wisdom can be aware of rare but highly negative outcomes, which seem unlikely to you because they are, in fact, rare.
What makes my blood boil is the second part. If you can’t predict in advance who will turn out “apparently not that smart”, and you only say it in hindsight after the bad thing has already happened, then you are just making excuses to ignore the evidence. Even if, hypothetically speaking, you are the smartest person and the rules truly do not apply to you, it is still highly irresponsible to promote this behavior among rationalists in general (because you know that a fraction of them will later turn out to be “not that smart” and will get hurt, even if that fraction may not include you).
“promote this behavior among rationalists in general”
What are you imagining when you say “promote this behavior”? Writing LessWrong posts in favor? Choosing to live that way yourself? Privately recommending that people do it? Not commenting when other people say that they’re planning to do something that violates a Chesterton’s Fence?
The example I mostly had in mind was experimenting with drugs. I think there were no posts on LW in favor of this, but it gets a lot of defense in comments. For example, when someone mentions in a debate that they know rationalists who have overdosed, or who went crazy after experimenting with drugs, someone else always publicly objects to the community drawing a collective lesson from it.
If people do stupid things in private, that can’t (and arguably shouldn’t) be prevented.