I feel like most of the value I got out of the minicamp in terms of techniques came early. This is probably due to a combination of effects:
1) I reached a limit on my ability to internalize what I was learning without some time spent putting things to use.
2) I was not well mentally organized—my rationality concepts were all individual floating bits not well sewn together—so I reached a point where new concepts didn’t fit into my map very easily.
I agree things got more disorganized. In fact, I remember on a couple of occasions seeing the ‘this isn’t the outcome I expected’ look on Anna’s face, and the attempt to update and try a different approach, or to go with the flow and see where things were leading. I marked this responsiveness as a good thing.
As for your Ugly, it’s important to note that it was a casual discussion among attendees. I suppose this highlights the risks from a general increase in credibility-giving by close temporal association with other new ideas you’re giving credibility to? Example: I talked to a lot of curious people that week about how Valve’s internal structure works, but no one should necessarily run off and establish a Valve-like company without understanding Valve’s initial conditions, goals, employee make-up, and other institutions, and comparing them with their own initial conditions, goals, employees, institutions, etc.
I suppose this highlights the risks from a general increase in credibility-giving by close temporal association with other new ideas you’re giving credibility to?
Yes, this.
Usually this risk is low, but here it was actually quite high. This particular instance was an Ugly example, because the category—ideas with close temporal association—was a false one. But there were many scary examples based on good categories. The most outlandish was meditation. Remember that other people’s brains are part of the evidence. Now witness quite a few people who have just spent the last few days on activities that convinced you they are pretty decent (compared to baseline, damn good) at doing their research, discarding bullshit, not strongly espousing ideas they don’t strongly hold, examining the ideas they do hold, etc., etc. Witness them say with a straight face that meditation, which you (I) assumed was a crock of mystic religion that just took a different turn than the Western religions you’re familiar with, is super-useful. Then watch your brain say “Bull! Wait, they’re good at things. Maybe not bull? Hey, argument from authority, bull after all! Wait, argument from authority is evidence… :S I… have to take this seriously...”
IFS, NVC, nootropics? Guess I have to take them seriously too.
(I exaggerate slightly, but my feelings were stronger than I think they should have been, so that story is in line with how I felt, if not precisely what my beliefs were.)
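For what it’s worth, the “argument from authority is evidence” step can be made concrete with a toy Bayes calculation. This is only a sketch with invented numbers: assume each credible endorsement is a few times more likely in the world where meditation is genuinely useful than in the world where it’s bullshit, and (too generously) treat the endorsements as independent.

```python
# Toy Bayesian update: how much should several credible endorsements
# move a skeptical prior? All numbers here are invented for illustration.

def posterior(prior: float, likelihood_ratio: float, n: int) -> float:
    """Update prior odds by n (assumed independent) endorsements,
    each with the given likelihood ratio; return posterior probability."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio ** n
    return post_odds / (1.0 + post_odds)

prior = 0.05  # skeptical prior: "probably a crock of mysticism"
lr = 3.0      # each endorsement is 3x likelier if the idea is useful

for n in range(5):
    print(n, round(posterior(prior, lr, n), 3))
# 0 endorsements: 0.05 ... 4 endorsements: ~0.81. Even after discounting
# for correlated opinions, "I have to take this seriously" is the right move.
```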
I had a dim view of meditation because my only prior exposure to meditation was in mystic contexts. Here I saw people talk about it separately from that context. My assumption was that if you approached it using Bayes and other tools, you could start to figure out whether it was bullshit or not. It doesn’t seem unreasonable to me that interested folks could explore it and see what turns up.
Would I choose to do so? No. I have plenty of other low-hanging fruit, and the amount of non-mystic guidance around meditation seems really minimal, so I’d be paying an opportunity cost to cover unknown territory with unknown payoffs.
I don’t feel oddly attached to any beliefs here. Maybe I’ll go search for some research. Right now I feel that if I found some good papers providing evidence for or against meditation, I would shift appropriately.
I don’t see myself updating my beliefs about meditation (which are weak) unduly because of an argument from authority. They changed because the arguments were reasoned from principles or with a process I accept as sound. Reasoning like: “Fairly credible sources like Feynman claim they can learn to shift the perception of the center of self-awareness to the left. (Feynman was also a bullshitter, but let’s take this as an example...) What do we think he meant? Is what we think he meant possible? What is possible? Is that reproducible? Would it be useful to be able to do that? Should we spend time trying to figure out if we can do that?” This is what I consider a discussion in the space of meditation-like stuff that is non-mystical and enjoyable. It isn’t going to turn me into a mystic any more than Curzi’s anecdotes about his buddy’s nootropics overdoses will turn me into a juicer.
I didn’t take away the message ‘meditation is super-useful.’ I took away the message ‘meditation is something some people are messing with to see what works.’ I’m less worried about that than if someone said ‘eating McDonald’s every day for every meal is something some people are messing with to see what works,’ because my priors tell me that is really harmful, whereas my priors tell me meditating every day is probably just a waste of time. A possibly non-mystical waste of time.
Now I’m worried comment-readers will think I’m a blind supporter of meditation. It is more accurate to say I went from immediate dismissal of meditation to a position of seeing the act of meditating as separable from a mystic context.
Now my wife is telling me I should actually be MORE curious about meditation and go do some research.
Right now I feel that if I found some good papers providing evidence for or against meditation, I would shift appropriately.
Are you familiar with the study (studies) about meditation and brain health? I’ve seen one or two crop up, but I’ve not read the actual studies themselves—just summaries. IIRC, it appears to reduce the effects of aging.
The other reason I consider meditation possibly worth pursuing is that it appears to be an effective “mindhack” in at least one respect: it can be used to reduce or eliminate unpleasant physical and mental sensations. For example, I believe it’s been shown to be effective in reducing stress and anxiety, and—more impressively—chronic pain, or even sensations like “chilly”. How useful this is in practice is more debatable: while I’m waiting in line, shivering, I probably won’t be able to meditate effectively, or have the time to.
Hm, ‘super-useful’ was a bad term. The actual impression I got was “obviously coherent and not BS, and with a high enough mean+variance that the value of investigation is very high”. Not necessarily the value of any one specific person investigating, but the value of it being investigated.
So I went a bit further than your
Now I’m worried comment-readers will think I’m a supporter of meditation (an out-group belief?). It is more accurate to say I went from immediate dismissal of meditation to a position of seeing the act of meditating as separable from a mystic context.
to believe the top of the curve was a) grossly useful and b) of non-negligible likelihood.
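That “high enough mean+variance” point is essentially a value-of-information argument, and a toy calculation shows why the variance term matters. A minimal sketch with invented probabilities and payoffs: if the payoff distribution has even a small “grossly useful” right tail, a cheap investigation that reveals which world you are in can flip the decision.

```python
# Toy value-of-information calculation for "is this worth investigating?"
# All probabilities and payoffs are invented for illustration.

# Possible worlds: (probability, net payoff of adopting the practice)
worlds = [
    (0.80, -1.0),   # waste of time (the hours spent)
    (0.15,  1.0),   # mild benefit
    (0.05, 10.0),   # "grossly useful" right tail
]

# Without investigating: adopt only if the blind expected value is positive.
ev_adopt = sum(p * v for p, v in worlds)   # -0.15 here: don't adopt
ev_no_info = max(ev_adopt, 0.0)            # 0.0

# With a (hypothetically perfect) investigation: learn which world you
# are in first, then adopt only when the payoff is positive.
ev_with_info = sum(p * max(v, 0.0) for p, v in worlds)  # 0.65

print("value of investigating:", round(ev_with_info - ev_no_info, 2))
# The 5% tail does most of the work, which is why investigation can be
# worthwhile even when the modal outcome is "waste of time".
```

This also matches the caveat above: the value lies in the question being investigated at all, since the resulting information, once produced, is shared.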
I had a dim view of meditation because my only prior exposure to meditation was in mystic contexts.
It strikes me that you may want to take a step further and consider mysticism itself as a functionally useful brain-hack much like meditation. It’s very possible that mystical texts could be used to bring out a mental stance conducive to rationality. The Litanies of Tarski and Gendlin are fairly obvious examples, and I’d even argue that HP:MoR seems to be fulfilling that role as a kind of shared mythology tapping into well-understood tropes, at least for the subset of rationalists who like Harry Potter fanfiction.
Metaphysical terminology is a huge bag of stupid and abstraction, but what I mean by mysticism is something like ‘characteristic of a metaphysical belief system.’ The mysticism tag tells me that a concept is positing extra facts about how the world works in a way that isn’t consistent with my more fundamental, empirical beliefs.
So in my mind I have ‘WARNING!’ tags (intentionally) attached to mysticism. When I see something that has the mysticism tag attached to it, I approach cautiously and with a big stick. Or, to save time or avoid the risk of being eaten, I often don’t approach at all.
If I find that I have a metaphysical belief or if I detect that a fact/idea may be metaphysical, then I attach the mystical tag to it and go find my stick.
If something in my mind has the mysticism tag attached to it inappropriately, then I want to reclassify that thing—slightly reduce the size of the tag or create a branch through more specific concept definition and separation.
So I don’t really see value in attaching the mysticism tag to things that don’t directly warrant it. What you call a mystical litany I’d call a mnemonic technique for reminding yourself of a useful process or dangerous bias. Religions have litanies, but litanies are not inherently religious concepts.
So no, I won’t consider mysticism itself as a useful brain hack. Mysticism is allocated the purpose of ‘warning sign.’ It’s not the only warning sign, but it’s a useful one.
I can see why you would consider what you call “mysticism”, or metaphysical belief systems, a warning sign. However, the use of mystical text forms, which is what I was referring to in my comment, is quite unrelated to this kind of metaphysical and cosmological rigidity. Compare, say, Christian fundamentalists versus Quakers or Unitarian Universalists, or Islamic Wahhabis and Qutbis versus Sufis: the most doctrinal and memetically dangerous groups make only sparing use of mystical practices, or forbid them outright.
Atheists and agnostics are obviously a more challenging case, but it appears that at least some neopagans comfortably identify as such, using their supposed metaphysical beliefs as functionally useful aliefs, to be invoked through a ritual whenever the psychical effects of such rituals are desired. There is in fact an account of just such a ritual practice on LW itself involving the Winter Solstice, which is often celebrated as a festival by neopagan groups. It’s hard to describe that account as anything other than a mystical ritual aiming to influence the participants in very specific ways and induce a desirable stance of mind among them. In fact, that particular practice may be regarded as extremely foolish and memetically dangerous (because it involves a fairly blatant kind of happy-death-spiral) in a way that other mystical practices are not. I now see that post as a cautionary tale about the dangers of self-mindhacking, but that does not justify its wholesale rejection, particularly in an instructional context where long-term change is in fact desired.
This does sound plausible:
that the people who decompartmentalise crazy and do crazy stuff—fundies, cultists, fundie cultists—have a strong aversion to ambiguity, subtlety, irony;
that groups with weird ideas who are not averse to ambiguity, subtlety or irony are less likely to do crazy stuff.
The first, I think, is obvious; the second, as a positive result, would be somewhat surprising and worthy of investigation.
I also suspect that a lot of the romantic objection to rationality and science comes from seeing science as an example of the first group: holding that anything that can’t be measured doesn’t exist, and throwing away important detail.
I wonder how we would meaningfully gather numbers on such things.
I think mysticism is inherently irrational, and thus seriously participating in “mysticism itself” is counter-productive if you wish to become more rational. But I say “seriously participating” because, as you say, perhaps mystical aliefs can be used to produce useful mental states—as long as it is recognized that that’s what you’re doing, and you don’t ascribe any special significance to the mystical aspects (i.e., you recognize that the same effect can probably be achieved without any such relics; it’s just a matter of preference).
Like those neopagans you mention, I am both an atheist and a Wodanist. I use Wodan as a symbol of various ideals, and the devotions, rituals, symbols, etc. involved to remind myself of these. My actual beliefs are entirely atheistic and materialistic, but I enjoy the trappings and history behind Germanic paganism of this sort; thus, the main reason behind my Wodanism is simply enjoyment. Useful? Yes, as a reminder or a way to encourage yourself (e.g., “though I am tempted to waste my money, I will be self-disciplined like my patron god”), but that’s entirely apart from any mystical aspects.
Useful? Yes, as a reminder or a way to encourage yourself (e.g., “though I am tempted to waste my money, I will be self-disciplined like my patron god”), but that’s entirely apart from any mystical aspects.
I agree with this as far as rational belief is concerned, and on a denotational level. But I’m not sure whether one can achieve the very tangible benefits of enacting rituals involving such “gods” as Pan, Wodan or Hermes/Thoth without alieving that the gods are really there at some level—if only as archetypes of one’s unconscious psychology—so that one can relate to them on their own terms.
As long as the “gods” are not literally considered supernatural entities (whatever that might mean), believing in them need not be any more irrational than believing in any other features of our psychology. But successfully channeling a god might require us to connote that belief in ways that will seem quite foreign to a rationalistic, logically-oriented mental stance.
What are your criteria for this?
Well, that gets rather complicated. Think of it as the extent to which the religion appeals to and encourages irrationality, and thereby causes its followers to be instrumentally irrational in verifiable ways. I’m not talking about self-identified moral or ethical systems here, but rather obviously crazy beliefs like “Our god will reward you with a heavenly garden and 42 virgins if you become a martyr” or “You need to purify yourself from the tiny spiritual beings which were brought to Earth by an all-powerful alien millions of years ago”. Stuff like that will appeal to human utility/reward functions in fairly obvious ways, assuming that it is truly, fervently believed.
Did you attend the 3-day version or the week-long version? I would be curious to know after what length of time you saw significantly diminishing returns.
Relatedly, I wonder what minimum consecutive length of time you need to get a lot out of this. How would the returns from three spaced-apart day-long workshops compare to those from a single three-day workshop? (This would of course work better with a group of people who don’t need to travel a significant distance.) Is the New York meetup group what happens if you take this sort of thing, break it into small chunks and spread it out over time?
People who attended minicamp can probably provide more informed speculation on these matters than I can.
As an aside, what are IFS and NVC?
Edit: Ah, found links.
IFS: http://en.wikipedia.org/wiki/Internal_Family_Systems_Model
NVC: http://en.wikipedia.org/wiki/Nonviolent_Communication