On the positive side, I think an experiment in a more centrally managed model makes sense, and group activity that has become integrated into routine is an incredibly good commitment device for getting the activity done - the kind of social technology used in workplaces everywhere, which people struggle to apply to their other projects and self-improvement efforts. Collaborative self-improvement is good; it was a big part of what interested me about the Accelerator Project before it became defunct.
On the skulls side, though, the big risk factor that comes to mind for me for any authoritarian project wasn’t addressed directly. You’ve done a lot of review of failed projects and successful projects, but I don’t get the impression you’ve done much of a review of abusive projects. The big common element I’ve seen in abusive projects is that unreasonable demands were made that any sensible person should have ‘defected’ on - people were asked for things, or placed under demands, that from the outside and in retrospect were in no way worth meeting for the sake of staying in the group - and they didn’t defect. They stayed in the abusive situation.
A lot of abusive relationships involve people trading off their work performance and prospects, and their outside relationship prospects, in order to live up to commitments made within those relationships, when they should have walked. They concede arguments when they can’t find a reason the other person will accept - because the other person rejects everything they say - rather than deciding to defect on the personhood norm of giving reasons. I see people who have been in abusive relationships in the past anxiously worrying about how they will justify themselves in circumstances where I would have been willing to bite the bullet and say “No, I’m afraid not; I have reasons, but I can’t really talk about them,” because the option of simply putting their foot down without reasons - a costly last resort, but an option - is mentally unavailable to them.
What I draw from the case studies of abusive situations I’ve encountered is that humans have false negatives as well as false positives about ‘defection’; that is, people maintain commitments when they should have defected, as well as defecting when they should have maintained commitments. Some of us are more prone to the former, and others to the latter. The people prone to the former are often impressively bad at boundaries, at knowing when to say no, at making a continually updated cost/benefit analysis of their continued presence in an environment, at protecting themselves. Making self-protection a mantra indicates that you’ve seen part of this, but a model of “humans defect on commitments too much”, rather than “humans are lousy at knowing when to commit and when not to”, seems likely to often miss what various ideas will do to the people prone to false negatives.
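To make the terminology concrete, here is a minimal sketch of the framing above - purely illustrative, with ‘defection’ meaning nothing more than leaving or refusing a demand:

```python
# Illustrative sketch of the two error types described above; the labels are
# mine, and "defection" just means leaving or refusing a demand.

def classify(defected: bool, defection_was_warranted: bool) -> str:
    """Label an (action, ground-truth) pair with the terminology used above."""
    if defected and defection_was_warranted:
        return "correct defection"
    if defected and not defection_was_warranted:
        return "false positive: walked away when staying was worth it"
    if not defected and defection_was_warranted:
        return "false negative: stayed when they should have walked"
    return "correct commitment"

# The worry is that this project will select for people prone to the
# false-negative case, while the stated model guards mainly against false positives.
print(classify(defected=False, defection_was_warranted=True))
```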
The rationalist community as a whole is probably mostly people with relatively few false negatives and mostly false positives. Most of us know when to walk, are independent enough to keep an eye on the door when things get worrying, and have no trouble saying “you seem to be under the mistaken impression that I need to give you a reason” if people try to reject our reasons. So I can understand failures in the other direction not being the most salient thing. But the rationalist community as a whole is mostly people who won’t be part of this project.
When you select out the minority who are interested in this project, I think you will get a considerably higher rate of people who fail in the direction of backing down when they can’t find a reason that (they think) others will accept, in the direction of not having good boundaries, and more generally in the direction of not ‘defecting’ enough to protect themselves. And I’ve met enough of them in rationalist-adjacent spaces to know they’re nearby; they’re smart, they’re helpful, some are reliable, and they’re kind of vulnerable.
I think as leader you need to do more than say “protect yourself”. You need to expect that some of the people you are leading will /not/ say no when they should, and that you won’t filter all of them out before starting, any more than you’ll filter out everyone who will fail in any other way. And you need to take responsibility for protecting them, rather than delegating it exclusively to them. To be a bit rough, “protect yourself” seems like trying to avoid a part of the leadership role that isn’t actually optional: if you fail in the wrong way you will hurt people, you as leader are responsible for not failing in that way, and 95% isn’t good enough. The drill instructor persona - with its unidirectional emphasis on committing more - does not come off as the sort of person who would take that responsibility, and I think that is part of why people who don’t know you personally find it kind of alarming in this context.
(The military, of course, from which the stereotype originates, deals with this by simply not giving two shits about causing psychological harm: it is fine either severely hurting people in order to turn them into what it needs, or severely hurting and then spitting out the people who are harmed by what it does.)
On the somewhat more object level, the exit plan discussed seems wildly inadequate, and very likely to be a strong barrier against anyone who isn’t one of our exceptional libertines leaving when they should. This isn’t a normal house share, and it is significantly more important than in a regular house share that people not be prevented from leaving by financial constraints or by the inability to find an interested replacement. The harsh terms typical of an SF house share are not suitable, I think.
The requirement to find a replacement person seems especially impractical: most people trend towards an average of their friends, so if someone’s friends on one side are DA people and they themselves are unsuited to DA, their other friends are probably even more unsuited to DA on average. I would strongly suggest that someone who leaves without a replacement being secured owe only financial recompense, capped at a limited number of months of rent, and that this either be payable at a later date after an immediate departure or be collected as an upfront deposit, to guarantee safety of exit.
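To make the suggested terms concrete, here is a minimal sketch with made-up numbers - the rent figure and the three-month cap are purely illustrative assumptions, not terms from the post:

```python
# Illustrative sketch of the suggested exit terms - all numbers are made up.

MONTHLY_RENT = 1_000   # hypothetical rent, in dollars per month
CAP_MONTHS = 3         # hypothetical cap on a departing member's liability

def exit_liability(months_until_replacement: int) -> int:
    """What a departing member owes if no replacement has been secured,
    capped so the worst case is known in advance and can be planned for."""
    return MONTHLY_RENT * min(months_until_replacement, CAP_MONTHS)

print(exit_liability(1))   # 1000 - a replacement is found after one month
print(exit_liability(12))  # 3000 - capped, rather than a full year of rent

# The cap could either be collected as an upfront deposit or be allowed to be
# paid back after an immediate departure; either way, the door stays open.
```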
If there are financial costs involved in ensuring that exit is readily available, there are enough people who think this is valuable that it should be possible to secure capital for use in that scenario.
Strong approval of all of this. The short answer is, I’ve spent tens of hours working more closely with the people who will actually be involved, looking at all of the issues you raise here. We’re all aware of things like the potential for emotional abuse and financial entrapment, and are putting possible solutions into place; I simply didn’t feel the need to lengthen the post by another third to include material that’s only half-in-progress and also largely too detailed or irrelevant for outsiders.
(As a single bite-sized example: the “protect yourself” mantra is there to lay the baseline, but thus far we’re also including:
a) explicit “non-conformity” training in bowing out of activities, coupled with strong norms of socially supporting people who “rule #1” themselves out, and clear ways to resolve anxiety or embarrassment and save face;
b) weekly open-ended retrospectives that include room for anonymous as well as public feedback;
c) two one-on-ones per week with me in which the number one focus is “how are you, and can you be supported in any way”;
d) outside check-ins with someone completely unrelated to the house, to provide a fresh perspective and a safe outlet; and
e) regular Circling and pair debugging, so that everyone knows “where everyone is” and has a cheap Schelling point for “I need help with X.”)
This is tangentially related at best, but if you have some high quality non-conformity training I would love to borrow it for my local purposes. I’ve got some, but still feel like it’s the largest weakness in the rationality training I’ve been doing.
I would be much more inclined to believe you if you would actually discuss those solutions, instead of simply insisting we should “just trust you”.
How can you read the parenthetical above and dismiss it as “not discussion” and still claim to be anything other than deontologically hostile?
Because basically every cult has a 30 second boilerplate that looks exactly like that?
When I say “discuss safety”, I’m looking for a standard of discussion above the one provided by actual, known-dangerous cults. Cults routinely use exactly the “check-ins” you’re describing as a way to emotionally manipulate members, and the “group” check-ins turn into peer pressure. So the only actual safety valve ANYWHERE in there is (d).
You’re proposing starting something that looks like a cult. I’m asking you for evidence that you are not, in fact, a cult leader. Thus far, almost all evidence you’ve provided has been perfectly in line with “you are a cult leader”.
If you feel this is an unfair standard of discussion, then this is probably not the correct community for you.
Also, this is very important: You’re asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you’ve refused to address it.
I’m not interested in entering into a discussion where the standard is “Duncan must overcome an assumption that he’s a cult leader, and bears all the burden of proof.” That’s deeply fucked up, and inappropriate given that I willingly created a multi-thousand-word explanation for transparency and critique, and have positively engaged with all but the bottom 3% of commentary (of which I claim you are firmly a part).
I think you’re flat-out wrong in claiming that “almost all evidence you’ve provided has been perfectly in line with ‘you are a cult leader.’” The whole original post provided all kinds of models and caveats that distinguish it from the (correctly feared and fought-against) standard cult model. You are engaged in confirmation bias and motivated cognition and stereotyping and strawmanning, and you are the one who is failing to rise to the standard of discussion of this community, and I will not back off from saying it however much people might glare at me for it.
While I agree that a lot of the criticism towards you has been hostile or at least pretty uncharitable, I would only point out that I suspect the default tendency most people have is to automatically reject anything that shows even the most minor outward signs of cultishness, and that these heavy prior beliefs will be difficult to overcome. So, it seems more likely that the standard is “outward signs of cultishness indicate a cult, and cults are really bad” rather than “Duncan is a cult leader.” (This is sort of similar to the criticisms of the rationality community in general).
I think there are a lot of reasons why people have such heavy priors here, and that they aren’t completely unjustified. I myself have them, because I feel that in most cases where I have observed outward signs of cultishness, it turned out these signals were correct in indicating an unhealthy or dangerous situation. I don’t think it’s necessary to go into detail about them because it would take a huge amount of space and we could potentially get into an endless debate about whether these details bear any similarity to the set-up you are proposing.
So your responses to the people who have these very heavy priors against what you are doing generally seem to be along the lines of “You can’t just come in here with your heavy priors and expect that they alone constitute valid evidence that my proposal is a bad idea”, and in that regard your rebuttal is valid. However, I do personally feel that, when someone shows up in an argument with a very confident prior belief in something, the charitable principle is to assume, at least initially, that they have a possibly valid chain of evidence and reasoning that led them to that belief.
It could be that there is some collective social knowledge (like a history of shared experiences and reasoning) that led up to this belief, such that it is generally expected that we shouldn’t have to back-track through that reasoning chain (thus allowing us to make confident statements in arguments without producing the evidence). I think “cults” are a fairly good example of this kind of knowledge - things people almost universally consider bad, except for cult members themselves, so much so that saying otherwise could be considered taboo.
And this is definitely not to claim that every taboo is a justified taboo. It’s also not to say that you haven’t argued well or presented your arguments well. I’m only arguing that it’s going to be an uphill battle against the naysayers, and that to convince them they are wrong would probably require back-tracking through their chain of reasoning that led to their prior belief. In addition, if you find yourself becoming frustrated with them, just keep the above in mind.
For essentially the above reasons, my model predicts that most of the people who decide to participate in this endeavor will be those who trust you and know you very well, and possibly people who know and trust people who know and trust you very well. Secondly, my model also predicts that most of the participants will have done something similar to this already (the military, bootcamps, martial arts dojos, etc.) and successfully made it through them without burning out or getting distressed about the situation. Thus it predicts that people who don’t know you very well or who have never done anything similar to this before are unlikely to participate and are also unlikely to be swayed by the arguments given in favor of it. And even more unfortunately, due to the predicted composition of the participants, we may not be able to learn much about how successful the project will be for people who wouldn’t normally be inclined to participate, and so even if the outcome on the first run is successful, it will still be unlikely to sway those people.
I don’t place much weight on this model right now and I currently expect something like a 30% chance I will need to update it drastically. For example, you might already be receiving a ton of support from people who have never tried this and who don’t know you very well, and that would force me to update right away.
Also, even though I don’t know you personally, I generally feel positively towards the rationality community, and I feel safe in the knowledge that this whole thing is happening within it, because it means that this project is not too disconnected from the wider community and that you have sufficient disincentives against actually becoming a cult leader.
In short: don’t let the negativity you are facing become too much of a burden; just keep in mind that it’s possible that many of the most negative critics (besides obvious trolls) are not acting in bad faith, and that it could require more work than is feasible to engage with all of it sufficiently.
I like everything you’ve said here, including the gentle pointers of places where I myself have been uncharitable or naive.
Also, this is very important: You’re asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you’ve refused to address it.
I would be vastly reassured if you could stop dodging that one single point. I think it is a very valid point, no matter how unfair the rest of my approach may or may not be.