Interesting, the guy must be a very good hypnotist. I’m wondering if he can convert people, as well as deconvert?
It’s my blog. I think I can for the large fraction of atheists that got there by social pressure alone (at least for a month or so), but people that actually understand why atheism is the right answer would be tougher. I’m curious if I could break them too, but that’s way too evil for my tastes.
The techniques don’t cleave down the lines of good and evil epistemically—they cleave down the lines of good and evil instrumentally.
It takes different tools to make someone worse off than it does to help them. If you want to make them better epistemically, then you get to use the fact that having good maps helps you get where you want to be.
Worse off by whose definition? Presumably, if you believed that conversion to Christianity makes one better off, you could use the same techniques (with a different set of arguments) to accomplish the goal.
Both, but the statement is stronger for their definition.
My general approach to helping people is to clear out their fears and then let them reassemble the pieces as they see fit—sometimes suggesting possible solutions. This is more easily used to help people than to hurt them, since they are in full control of their actions and more of the game space is visible to them. I can fool them into thinking they’re helping themselves, but I’d have to include at least selective fear removal (though this can happen accidentally through your own biases!).
In contrast, using leading questions and classical conditioning works equally well regardless of which direction you’re pushing.
Hmm, have been looking through your blog a bit more… I’m wondering if you can help people complaining about akrasia by making their second-order desires first-order ones?
Yep :)
Hmm, you would probably be great playing the jailed AI in an AI boxing experiment (can you beat [someone like] EY?), but how successful would you be playing the guard?
The AI box game still seems stacked against the AI roleplayer at any similar skill level. As the AI, I don’t think I could beat someone like EY or myself on the other end, and as the gatekeeper I think I would beat someone like EY or myself.
I still wouldn’t consider myself secure against even human takeover in general, especially if I’m not prepared for mental assault.
Would you know what to look for?
Also, can you write an AI bot that would have a decent success rate against humans, by finding and exploiting the standard wetware bugs?
For the most part
Not for any interesting opponent. I can’t even write a ‘real’ chatbot. The only reason I get the results I do is because I immediately force them into a binary yes/no response and then make sure they keep saying yes.
Something here feels off: I’d call the parent a pretty strong claim, effectively “I can cure akrasia (sorta) in the majority of people who ask”. I would have expected someone to have tested this, and reported their results; if positive, I would have expected this to be the sort of thing I would have noticed much sooner than a year and two months later. (In fact, around the time this was posted, I had started reading LessWrong and I’d received an email entitled “Jim’s hypnotherapy” that I ignored for some months).
Basically, my first reaction to this was “Why ain’t ya rich?”
Having said that, I want to build up the courage to PM you for a test, if you’re still doing so; if you’re half as powerful as you claim, then of course I want to benefit from that. ;p
(I’ve been reading your blog and wound up finding this because I typed “hypnotism” into the LW search box.)
I wouldn’t quite say that. I meant “yes, akrasia is fixable in this way”. Less “I’m a wizard!” and more “Yes, there’s a solution, it looks like that, so have fun solving the puzzle”.
To make a personal claim of competence, I’d have to add some qualifiers. Maybe something like “I expect to be able to cure akrasia (sorta) in the majority of people that commit to solving it with me”, which is a much stricter criterion than “asks”. I’d also have to add the caveat that “curing” might, after reflectively equilibriating, end up with them realizing they don’t want to work as hard as they thought—that’s not my call, and it wouldn’t surprise me if a significant number of people went that way to some degree.
I’m not sure if you mean within LW in particular. I haven’t yet worked magic on any LWer in this context, but I did offer a couple times.
If you’re counting outside LW, hypnotherapists get results all the time—even “amazing” results. Some people are convinced; some people write it off in one way or another. It doesn’t surprise me all that much, given how people get with “skepticism” and not wanting to be made a fool of when it comes to hypnotism.
Good question.
The first part of the answer is that I have gotten a ton of value out of these skills, and only expect to gain more.
The second part is that it’s not magic. It’s more of a martial art than a cheat code. Even when it appears to be magic, there’s usually more going on in the background that made it possible. The toughest part is all the meta-level bullshit that people carry around about their problems; getting them into “let’s solve this” mode is the real work. Once you get someone to congruently say “Yes, I’m going to be a good hypnotic subject and we’re going to fix this”, you’ve done 90% of the work—but everyone focuses on the last 10%, which looks like magic, and then wonders “why not sprinkle this magic pixie dust on everyone!?!”.
Also, getting “rich”—assuming you mean at a level beyond charging a couple hundred dollars per hour like many hypnotherapists do—requires you to be good at finding high-leverage applications and working your way into them. That’s a whole new skill set, and I haven’t yet gotten to that stage—though I plan on working on it.
First of all, I don’t like this “if you’re half as powerful as you claim” thing—especially since you seem to have read it as stronger than intended. When I make “strong claims” I do not expect, in the social obligation sense, to be believed. I’m just trying to be understood—that really is how I honestly see things. Take with as much salt as you please.
It’s important to make this explicit because setting up high expectations for a hoping skeptic is a sure way to fail—it sets up a dynamic of me being responsible for their behavior. While I do take responsibility for their actions internally, the only real way I can do this is by making sure they take responsibility for their own actions.
Also, there should be no courage needed. I can’t just take over your mind. With you, I’m not sure whether I’d pull out hypnosis at all, and (almost) certainly can’t just get into it from the start. Also, I can only push you as hard as you let yourself be pushed. Let’s chat some time and see where it goes.
Your link to your blog is down, but once it’s back up, and if I find this claim plausible upon reading it, I would be very interested in trying this on myself.
EDIT: read the blog, and it looks awesome.
You’re welcome to try and break my atheism, but I’m saying that only because I’m reasonably darned sure you can’t do that by any conversational means (so long as we’re actually in a universe that doesn’t have a God, of course; I’m not stating a blind belief known to me to be blind).
Edit: oh, wait, didn’t realize you were using actual hypnotism rather than conversation. Permission retracted; I don’t know enough about how that works.
Agreed. The only way I’d see myself as having a fighting chance would be if you had a strong reason to go into hypnosis and you didn’t know my intentions.
If the world really were at stake, I think I could help you with the red panda problem—though I still have fairly wide confidence intervals on how difficult that would be, because I haven’t tried something like this. I have yet to find a real-life example where I’d encourage self-deception, and a surprisingly large fraction of problems go away when you remove the self-deception.
I have been having a lot of fun using hypnosis, and techniques inspired by hypnosis, to improve rationality—and successfully. I was a bit disappointed that you didn’t respond to my email offering to show what hypnosis says about training rationality. And now I’m a bit confused by the retraction, because I had figured you had completely written me off as a crackpot.
Will Ryan mentioned that you were skeptical of “this stuff”. Can you elaborate on what specifically you’re skeptical about and what kinds of evidence you’d like to see?
I hope you don’t think you are actually “giving amnesia” or doing anything other than roleplaying mind-controller and mind-controllee, in dialogues like these. Those teenagers are just playing along for their own reasons.
That hypothesis certainly isn’t new to me.
There’s a lot of research on hypnotic amnesia. Here are a few showing differences between hypnotically suggested amnesia and faked amnesia.
http://psycnet.apa.org/journals/abn/70/2/123/
http://www.ncbi.nlm.nih.gov/pubmed/2348012
http://psycnet.apa.org/journals/abn/105/3/381/
The relationship between “actually giving amnesia” and “roleplaying amnesia” is fascinating, but not something I’m going to get into here.