“How to get people to take ideas seriously without serious risk they will go insane along the way” is a very important problem. In retrospect, CFAR should have had this as an explicit priority from the start.
Responding partly to Orthonormal and partly to Raemon:
Part of the trouble is that group dynamic problems are harder to understand, harder to iterate on, and take longer to appear and to be obvious. (And are then harder to iterate toward fixing.)
Re: individuals having manic or psychotic episodes, I agree with what Raemon says. Somewhere between six months and a year into CFAR’s workshop-running experience, a participant had a manic episode a couple of weeks after a workshop, in a way that seemed plausibly triggered partly by the workshop. (Interestingly, if I’m not mixing people up, the same individual later told me that they’d also been somewhat destabilized by reading the Sequences, earlier on.) We then learned a lot about warning signs of psychotic or manic episodes and took a bunch of steps to mostly-successfully reduce the odds of the workshop triggering these. (In terms of causal mechanisms: It turns out that workshops of all sorts, and stuff that messes with one’s head of all sorts, seem to trigger manic or psychotic episodes occasionally. E.g. Landmark workshops; meditation retreats; philosophy courses; going away to college; many different types of recreational drugs; and different small self-help workshops run by a couple of people I tried randomly asking about this from outside the rationality community. So my guess is that it isn’t the “taking ideas seriously” aspect of CFAR as such, although I dunno.)
Re: other kinds of “less sane”:
(1)
IMO, there has been a build-up over time of mentally iffy psychological habits/techniques/outlook-bits in the Berkeley “formerly known as rationality” community, including iffy thingies that affect the rate at which other iffy things get created (e.g., by messing with the taste of those receiving/evaluating/passing on new “mess with your head” techniques; and by helping people be more generative of “mess with your head” methods via their having already seen several, which makes it easier to build more). My guess is that CFAR workshops have accidentally been functioning as a “gateway drug” toward many things of iffy sanity-impact, basically by: (a) providing a healthy-looking context in which people get over their concerns about introspection/self-hacking because they look around and see other happy, healthy-looking people; and (b) providing some entry-level practice with introspection, and with “dialoguing with one’s tastes and implicit models and so on”, which makes it easier for people to mess with their heads in other, less-vetted ways later.
My guess is that the CFAR workshop has good effects on folks who come from a sane-ish or at least stable-ish outside context, attend a workshop, and then return to that outside context. My guess is that its effects are iffier for people who are living in the Bay Area, do not have a day job/family/other anchor, and are on a search for “meaning.”
My guess is that those effects have been getting gradually worse over the last five or more years, as a background level of this sort of thing accumulates.
I probably ought to write about this in a top-level post, and may actually manage to do so. I’m also not at all confident of my parsing/ontology here, and would quite appreciate help with it.
(2)
Separately, AI risk itself seems to be pretty hard on people, including ones unrelated to this community.
(3)
Separately, “taking ideas seriously” indeed seems to pose risks. And I had conversations with e.g. Michael Vassar back in ~2008 where he pointed out that this poses risks; it wasn’t missing from the list. (Even apart from tail risks, some forms of “taking ideas seriously” seem maybe-stupid in cases where the “ideas” are not also grounded in one’s inner simulator, tastes, and viscera; there is much sense in those that isn’t in ideology-mode alone.) I don’t know whether CFAR workshops increase or decrease people’s tendency to take ideas seriously in the problematic sense, exactly. They have mostly tried to connect people’s ideas and people’s viscera in both directions.
“How to take ideas seriously without [the taking ideas seriously bit] causing people to go insane” as such actually still isn’t that high on my priorities list; I’d welcome arguments that it should be, though.
—
I’d also welcome arguments that I’m just distinguishing 50 types of snow and that these should all be called the same thing from a distance. But for the moment, the group-level gradual health/wholesomeness shifts and the individual-level stuff show up to me as pretty different.
Encouragement to write the top-level post, with an offer of at least some help, although presumably people who are there in Berkeley to see it would be more helpful in many ways. This matches my model of what is happening.
Seeing you write about this problem, in such harsh terms as “formerly-known-as-rationality community” and “effects are iffier and getting worse”, is surprising in a good way.
Maybe talking clearly could help against these effects. The American talking style has been getting more oblique lately, and it’s especially bad on LW, maybe due to all the mind practices. I feel this, I guess that, I’d like to understand better… For contrast, read DeMille’s interview after he quit Dianetics. It’s such a refreshingly direct style, like he spent years mired in oblique talk and mind practices, then got fed up and flipped to the opposite, total clarity. I’d love to see more of that here.
The American talking style has been getting more oblique lately, and it’s especially bad on LW, maybe due to all the mind practices. I feel this, I guess that, I’d like to understand better…
I tend to talk like that, prefer that kind of talk, and haven’t done any mind practices. (I guess you mean meditation, circling, that kind of thing?) I think it’s a good way to communicate degrees of uncertainty (and other “metadata”) without having to put a lot of effort into coming up with explicit numbers. I don’t see anything in Anna’s post that argues against this, so if you want to push against it I think you’ll have to say more about your objections.
For some reason it’s not as annoying to me when you do it. But still, in most cases I’d prefer to learn the actual evidence that someone saw, rather than their posterior beliefs or even their likelihood ratios (as your conversation with Hal Finney here shows very nicely). And when sharing evidence you don’t have to qualify it as much, you can just say what you saw.
But still, in most cases I’d prefer to learn the actual evidence that someone saw, rather than their posterior beliefs or even their likelihood ratios (as your conversation with Hal Finney here shows very nicely).
I think that makes sense (and I made the point more explicitly at the end of Probability Space & Aumann Agreement). But sharing evidence is pretty costly, and it’s infeasible to share everything that goes into one’s posterior beliefs. It seems sensible to share posterior beliefs first and then engage in some protocol (e.g., double cruxing or just ordinary discussion) for exchanging the most important evidence while minimizing cost with whoever actually disagrees with you. (This does leave out the case where two people agree after having observed different evidence and could still benefit from exchanging it, but it still seems reasonable as a rule of thumb in the real world.)
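As a minimal illustration of why the distinction between posteriors and likelihood ratios matters here (the numbers below are made up, and this assumes a shared prior and evidence that is independent given the hypothesis): if two people exchange likelihood ratios, either of them can recover the pooled posterior by multiplying odds, whereas naively multiplying posterior odds double-counts the shared prior.

```python
# Minimal sketch with made-up numbers: why exchanging likelihood ratios
# (or the evidence behind them) composes cleanly, while exchanging
# posteriors alone does not. Assumes a shared prior and evidence that is
# conditionally independent given the hypothesis H.

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

prior = 0.2        # shared prior P(H)
lr_alice = 4.0     # Alice's likelihood ratio P(E_A|H) / P(E_A|not-H)
lr_bob = 0.5       # Bob's likelihood ratio   P(E_B|H) / P(E_B|not-H)

# Each person's private posterior after seeing only their own evidence:
post_alice = prob(odds(prior) * lr_alice)   # 0.50
post_bob = prob(odds(prior) * lr_bob)       # ~0.11

# If they exchange likelihood ratios, either can compute the pooled posterior:
pooled = prob(odds(prior) * lr_alice * lr_bob)    # ~0.33

# Naively multiplying posterior odds double-counts the shared prior:
naive = prob(odds(post_alice) * odds(post_bob))   # ~0.11, not ~0.33

print(post_alice, post_bob, pooled, naive)
```

(Sharing the raw evidence carries even more information than the likelihood ratios, since the other person can then apply their own model of how that evidence bears on the hypothesis; the likelihood ratio already bakes in the sharer’s interpretation, which is part of why the thread above prefers “just say what you saw.”)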
And when sharing evidence you don’t have to qualify it so much, you can just say what you saw.
I think you still do? Because you may not be sure that you remember it correctly, or interpreted it correctly in the first place, or don’t totally trust the source of the evidence, etc.
That’s fair. Though I’m also worried that when Alice and Bob exchange beliefs (“I believe in global warming” “I don’t”), they might not go on to exchange evidence, because one or both of them just get frustrated and leave. When someone states their belief first, it’s hard to know where to even start arguing. This effect is kind of unseen, but I think it stops a lot of good conversations from happening.
Whereas if you start with evidence, there’s at least some chance of conversation about the actual thing. And it’s not that time-consuming, if everyone shares their strongest evidence first and gets a chance to respond to the other person’s strongest evidence. I wish more conversations went like that.
I agree that this is and should be a core goal of rationality. It’s a bit unclear to me how easy it would have been to predict the magnitude of the problem in advance. There are a large number of things to get right when inventing a whole new worldview and culture from scratch. (Insofar as it was predictable in advance, I think it is good to do some kind of backprop where you try to figure out why you didn’t prioritize it, so that you don’t make that same mistake again. I’m not currently sure what I’d actually learn here.)
Meanwhile, my impression is that once it became “actually a couple of people have had psychotic breaks, oh geez”, CFAR was reasonably quick to pivot towards prioritizing avoiding that outcome (I don’t know exactly what went on there, and it’s plausible that the response should have been faster, or should have come in response to earlier warning signs).
But, part of the reason this is hard is that there isn’t actually a central authority here and there’s a huge inertial mass of people already excited about brain-tinkering that’s hard to pivot on a dime.