That said, I recall the time I was out trolling the Scientologists and watched someone’s face light up that way as she was being sold a copy of Dianetics and a communication course. She certainly seemed to be getting that feeling. Predatory memes—they’re rare, but they exist.
Scary indeed. I suspect what we are each ‘vulnerable’ to will vary quite a lot from person to person.
Yes. I do think that a particularly dangerous attitude to memetic infections on the Scientology level is an incredulous “how could they be that stupid?” Because, of course, it contains an implicit “I could never be that stupid” and “poor victim, I am of course far more rational”. This just means your mind—in the context of being a general-purpose operating system that runs memes—does not have that particular vulnerability.
I suspect you will have a different vulnerability. It is not possible to completely analyse the safety of an arbitrary incoming meme before running it as root; and there isn’t any such thing as a perfect sandbox to test it in. Even for a theoretically immaculate perfectly spherical rationalist of uniform density, this may be equivalent to the halting problem.
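The "no perfect analysis before running as root" point is essentially the halting problem (more precisely, Rice's theorem) in disguise. Here is a toy sketch in Python of the standard diagonalization; the `is_safe` analyser is hypothetical, and the point is only that any total analyser must give a wrong verdict on at least one program:

```python
# Hypothetical: suppose is_safe(program) is a total analyser that, for any
# zero-argument callable, returns True iff running that callable is "safe".
# Diagonalization: build a program that misbehaves exactly when the analyser
# certifies it as safe, so the analyser's verdict on it must be wrong.

def make_diagonal(is_safe):
    def diagonal():
        if is_safe(diagonal):
            # The analyser said "safe", so do something unsafe.
            raise RuntimeError("unsafe behaviour")
        # Otherwise behave perfectly safely (do nothing).
    return diagonal

def credulous_analyser(program):
    # Stand-in "perfect" analyser; any total analyser hits the same wall.
    return True

d = make_diagonal(credulous_analyser)
# credulous_analyser says d is safe, yet calling d() raises an exception:
# the verdict is wrong. A "suspicious" analyser fails symmetrically, by
# declaring unsafe a program that then does nothing at all.
```

The same construction is why no sandbox test settles the question either: the meme/program can behave well under analysis and badly outside it.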
My message is: it can happen to you, and thinking it can’t is more dangerous than nothing. Here are some defences against the dark arts.
[That’s the thing I’m working on. Thankfully, the commonest delusion seems to be “it can’t happen to me”, so merely scaring people out of that will considerably decrease their vulnerability and remind them to think about their thinking.]
This sort of thing makes me hope that the friendly AI designers are thinking like OpenBSD-level security researchers. And frankly, they need Bruce Schneier and Ed Felten and Dan Bernstein and Theo de Raadt on the job. We can’t design a program not to have bugs—just not to have ones that we know about. As a subset of that, we can’t design a constructed intelligence not to have cognitive biases—just not to have ones that we know about. And predatory memes evolve, rather than being designed from scratch. I’d just like you to picture a superintelligent AI catching the superintelligent equivalent of Scientology.
My message is: it can happen to you, and thinking it can’t is more dangerous than nothing.
With the balancing message: some people are a lot less vulnerable to believing bullshit than others. Many on LessWrong have brains that are biased, relative to the general population, towards devoting resources to bullshit prevention at the expense of optimal signalling. For these people, actively focusing on second-guessing themselves is a dangerous waste of time and effort.
Sometimes you are just more rational and pretending that you are not is humble but not rational or practical.
I can see that I’ve failed to convince you and I need to do better.
In my experience, the sort of thing you’ve written is a longer version of “It can’t happen to me, I’m far too smart for that” and a quite typical reaction to the notion that you, yes you, might have security holes. I don’t expect you to like that, but it is.
You really aren’t running OpenBSD with those less rational people running Windows.
I do think being able to make such statements of confidence in one’s immunity takes more detailed domain knowledge. Perhaps you are more immune and have knowledge and experience—but that isn’t what you said.
I am curious as to the specific basis you have for considering yourself more immune. Not just “I am more rational”, but something that’s actually put it to a test?
Put it this way, I have knowledge and experience of this stuff and I bother second-guessing myself.
(I can see that this bit is going to have to address the standard objection more.)
I can see that I’ve failed to convince you and I need to do better.
This is a failure mode common when other-optimising. You assume that I need to be persuaded, write that down as the bottom line, and then work from there. There is no room for the possibility that I know more about my relative areas of weakness than you do. This is a rather bizarre position to take given that you don’t even have significant familiarity with the wedrifid online persona, let alone me.
In my experience, the sort of thing you’ve written is a longer version of “It can’t happen to me, I’m far too smart for that” and a quite typical reaction to the notion that you, yes you, might have security holes. I don’t expect you to like that, but it is.
It isn’t so much that I dislike what you are saying as it is that it seems trivial and poorly calibrated to the context. Are you really telling a lesswrong frequenter that they may have security holes as though you are making some kind of novel suggestion that could trigger insecurity or offence?
I suggest that I understand the entirety of the point you are making and still respond with the grandparent. There is a limit to how much intellectual paranoia is helpful and under-confidence is a failure of epistemic rationality even if it is encouraged socially. This is a point that you either do not understand or have been careful to avoid acknowledging for the purpose of presenting your position.
I am curious as to the specific basis you have for considering yourself more immune. Not just “I am more rational”, but something that’s actually put it to a test?
I would be more inclined to answer such questions if they didn’t come with explicitly declared rhetorical intent.
I am curious as to the specific basis you have for considering yourself more immune. Not just “I am more rational”, but something that’s actually put it to a test?
I would be more inclined to answer such questions if they didn’t come with explicitly declared rhetorical intent.
No, I’m actually interested in knowing. If “nothing”, say that.
Regarding Scientology, I had the impression that they usually portray themselves to those they’re trying to recruit as being like a self-help community (“we’re like therapists or Tony Robbins, except that our techniques actually work!”) before they start sucking you into the crazy?
I’m sure that whatever it is that Tony Robbins preaches is less crazy than the Xenu story. (Although Scientology doesn’t seem any crazier than the crazier versions of mainstream religions...)
I’m sure that whatever it is that Tony Robbins preaches is less crazy than the Xenu story.
Here’s a video in which he lays out what he sees as the critical elements of human motivation and action. Pay extra attention to the slides—there’s more stuff there than he talks about.
(It’s a much more up-to-date and compact model than what he wrote in ATGW, by the way.)
I got through 11:00 of that video. If that giant is inside me I do not want him woken up. I want that sucker in a permanent vegetative state.
Many years ago I had a friend who was a television news anchor. The video camera flattens you from three dimensions to two, and it also filters out much of the non-verbal communication you can project onto the storage medium. To have energy and charisma on the replay, a person has to project something approaching mania at recording time. I shudder to think what it would be like to sit in the front row of the Robbins talk when he was performing for that video. He comes across as manic, and the most probable explanation for that is amphetamines.
The transcript might read rational, but that is video of a maniac.
A bit of context: that’s not how he normally speaks.
There’s another video (not publicly available, it’s from a guest speech he did at one of Brendon Burchard’s programs) where he gives the backstory on that talk. He was actually extremely nervous about giving that talk, for a couple different reasons. One, he felt it was a big honor and opportunity, two, he wanted to try to cram a lot of dense information into a twenty minute spot, and three, he got a bad introduction.
Specifically, he said the intro was something like, “Oh, and now here’s Tony Robbins to motivate us”, said in a sneering/dismissive tone… and he immediately felt some pressure to get the audience on his side—a kind of pressure that he hasn’t had to deal with in a public speaking engagement for quite some time. (Since normally he speaks to stadiums full of people who paid to come see him—vs. an invited talk to a group where a lot of people—perhaps most of the audience—sees him as a shallow “motivator”.)
IOW, the only drug you’re seeing there is him feeling cornered and wanting to prove something—plus the time pressure of wanting to condense material he usually spends days on into twenty minutes. His normal way of speaking is a lot less fast paced, if still emotionally intense.
One of his time management programs that I bought over a decade ago had some interesting example schedules in it, that showed what he does to prepare for his time on stage (for programs where he’s speaking all day) -- including nutrition, exercise, and renewal activities. It was impressive and well-thought out, but nothing that would require drugs.
One of Tony Robbins’ books has been really helpful to me. Admittedly the effects mostly faded after the beginning, but applying his techniques put me into a rather blissful state for a day or two, and also allowed for a period of maybe two weeks to a month during which I did not procrastinate. I also suspect I got a lingering boost to my happiness setpoint even after that. These are much better results than I’ve had from any previous mind-hacking technique I’ve used.
Fortunately I think I’ve been managing to figure out some of the reasons why those techniques stopped working, and have been on an upswing, mood and productivity-wise, again. “Getting sucked into the crazy” is definitely not a term I’d use when referring to his stuff. His stuff is something that’s awesome, that works, and which I’d say everyone should read. (I already bought my mom an extra copy, though she didn’t get much out of it.)
You need to apply some filtering to pick the actual techniques out of the hype, and possibly consciously suppress instinctive reactions of “the style of this text is so horrible it can’t be right”, but it’s great if you can do that.
I will post a summary of the most useful techniques at LW at some point—I’m still in the process of gathering long-term data, which is why I haven’t done so yet. Though I blogged about the mood-improving questions some time back.
You need to apply some filtering to pick the actual techniques out of the hype
It’s not so much hype as lack of precision. Robbins tends to specify procedures in huge “steps” like, “step 1: cultivate a great life”. (I exaggerate, but not by that much.) He also seems to think that inspiring anecdotes are the best kind of evidence, which is why I had trouble taking most of ATGW seriously enough to really do much from it when I first bought it (like a decade or more ago).
Recently I re-read it, and noticed that there’s actually a lot of good stuff in there, it’s just stuff I never paid any attention to until I’d stumbled on similar ideas myself.
It’s sort of like that saying commonly (but falsely) attributed to Mark Twain:
“When I was a boy of fourteen, my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty-one, I was astonished at how much the old man had learned in seven years.”
Tony seems to have learned a lot in the years since I started doing this sort of thing. ;-)
It’s not so much hype as lack of precision. Robbins tends to specify procedures in huge “steps” like, “step 1: cultivate a great life”. (I exaggerate, but not by that much.)
That’s odd—I didn’t get that at all, and I found that he had a lot of advice about various concrete techniques. Off the top of my head: pattern interrupts, morning questions, evening questions, setback questions, smiling, re-imagining negative memories, gathering references, changing your mental vocabulary.
Regarding Scientology, I had the impression that they usually portray themselves to those they’re trying to recruit as being like a self-help community (“we’re like therapists or Tony Robbins, except that our techniques actually work!”) before they start sucking you into the crazy?
Wait… did you just use Tony Robbins as the alternative to being sucked into the crazy?
One of Tony Robbins’ books has been really helpful to me. Admittedly the effects mostly faded after the beginning, but applying his techniques put me into a rather blissful state for a day or two, and also allowed for a period of maybe two weeks to a month during which I did not procrastinate. I also suspect I got a lingering boost to my happiness setpoint even after that. These are much better results than I’ve had from any previous mind-hacking technique I’ve used.
What book?
Awakening the Giant Within.
That’s odd—I didn’t get that at all, and I found that he had a lot of advice about various concrete techniques. Off the top of my head: pattern interrupts, morning questions, evening questions, setback questions, smiling, re-imagining negative memories, gathering references, changing your mental vocabulary.
He does, but they’re mostly in the areas that I ignored on my first few readings of the book. ;-)
Well, there’s crazy, and then there’s crazy...