What organisation, if it existed and ran independently of CFAR, would be the most useful to CFAR?
I wish someone would create good bay area community health. It isn’t our core mission; it doesn’t relate all that directly to our core mission; but it relates to the background environment in which CFAR and quite a few other organizations may or may not end up effective.
One daydream for a small institution that might help some with this health is as follows:
Somebody creates the “Society for Maintaining a Very Basic Standard of Behavior”;
It has certain very basic rules (e.g. “no physical violence”; “no doing things that are really about as over the line as physical violence, according to a majority of our anonymously polled members”; etc.);
It has an explicit membership list of folks who agree to both: (a) follow these rules; and (b) ostracize from “community events” (e.g. parties to which >4 other society members are invited) folks who are in bad standing with the society (whether or not they personally think those members are guilty).
It has a simple, legible, explicitly declared procedure for determining who has/hasn’t entered bad standing (e.g.: a majority vote of the anonymously polled membership of the society; or an anonymous vote of a smaller “jury” randomly chosen from the society).
Benefits I’m daydreaming might come from this institution:
A. If the society had large membership, bad actors could be ostracized from larger sections of the community, and with more simplicity and less drama.
B. Also, we could do that while imposing less restraint on individual speech, which would make the whole thing less creepy. Like, if many many people thought person B should be exiled, and person A wanted to defer but was not herself convinced, she could: (a) defer explicitly, while saying that’s what she was doing; and meanwhile (b) speak her mind without worrying that she would destabilize the community’s ability to ever coordinate.
Why aren’t there Knowers of Character who Investigate all Incidents Thoroughly Enough for The Rest of The Community to Defer To, already? Isn’t that a natural role that many people would like to play?
Is it just that the community hasn’t explicitly formed consensus that the people who’re already very close to being in that role can be trusted, and forming that consensus takes a little bit of work?
No; this would somehow be near-impossible in our present context in the bay, IMO; although Berkeley’s REACH center and REACH panel are helpful here and solve part of this.
I would have a lot of trust in a vote. I seriously doubt we as a community would agree on a set of knowers I would trust. Also, some similar ideas have been tried and went horribly wrong in at least some cases (e.g. the alumni dispute resolution council system). It is much harder for bad actors to subvert a vote than to subvert a small number of people.
I believe the reason why is that knowing everyone in the community would literally be a full-time job and no one wants to pay for that.
No; that isn’t the trouble; I could imagine us getting the money together for such a thing, since one doesn’t need anything like a consensus to fund a position. The trouble is more that at this point the members of the bay area {formerly known as “rationalist”} “community” are divided into multiple political factions, or perhaps more-chaos-than-factions, which do not trust one another’s judgment (even about pretty basic things, like “yes, this person’s actions are outside of reasonable behavioral norms”). It is very hard to imagine an individual or a small committee that people would trust in the right way. Perhaps even more so after that individual or committee tried ruling against someone who really wanted to stay, and that person attempted to create “fear, doubt, and uncertainty” or whatever about the institution that attempted to ostracize them.
I think something in this space is really important, and I’d be interested in investing significantly in any attempt that had a decent shot at helping. Though I don’t yet have a strong enough read myself on what the goal ought to be.
For whatever it’s worth, my sense is that it’s actually reasonably doable to build an institution/process that does well here, and gets trust from a large fraction of the community, though it is by no means an easy task. I do think it would likely require more than one full-time person, and at least one person of pretty exceptional skill in designing processes and institutions (as well as general competence).
I think Anna roughly agrees (hence her first comment), she was just answering the question of “why hasn’t this already been done?”
I do think adversarial pressure (i.e. if you rule against a person they will try to sow distrust against you and it’s very stressful and time consuming) is a reason that “reasonably doable” isn’t really a fair description. It’s doable, but quite hard, and a big commitment that I think is qualitatively different from other hard jobs.
CFAR relies heavily on selection effects for finding workshop participants. In general we do very little marketing or direct outreach, although AIRCS and MSFP do some of the latter; mostly people hear about us via word of mouth. This system actually works surprisingly well (to me) at causing promising people to apply.
But I think many of the people we would be most happy to have at a workshop probably never hear about us, or at least never apply. One could try fixing this with marketing/outreach strategies, but I worry this would disrupt the selection effects which I think have been a necessary ingredient for nearly all of our impact.
So I fantasize sometimes about a new organization being created which draws loads of people together, via selection effects similar to those which have attracted people to LessWrong, which would make it easier for us to find more promising people.
(I also—and this isn’t a wish for an organization, exactly, but it gestures at the kind of problem I speculate some organization could potentially help solve—sometimes fantasize about developing something like “scouts” at existing places with such selection effects. For example, a bunch of safety researchers competed in IMO/IOI when they were younger; I think it would be plausibly valuable for us to make friends with some team coaches, and for them to occasionally put us in touch with promising people).
What kind of people do you think never hear about CFAR but that you want to have at your workshops?
I expect there are a bunch who never hear about us due to language barriers, and/or because they’re geographically distant from most of our alumni. But I would be surprised if there weren’t also lots of geographically-near, epistemically-promising people who’ve just never happened to encounter someone recommending a workshop.
It seems to me like being more explicit about what kind of people should be there would make it easier for other people to send them your way.