Skeptic: The idea that a magic new theory of psychology will unlock human potential and create a new political majority of model citizens is a secular messianism with nothing to back it up.
Leverage Researcher: Have you done the necessary reading? Our ideas are based on years of disjunctive lines of reasoning (see blog posts #343, #562 and #617 on why you are wrong).
Skeptic: But you have never studied psychology, why would I trust your reasoning on the topic?
Leverage Researcher: That is magical thinking about prestige. Prestige is not a good indicator of quality. We have written a bunch of blog posts about rationality and cognitive biases.
Skeptic: That’s great. But do you have any data that indicates that your ideas might actually be true?
Leverage Researcher: No. You’re entitled to arguments, but not (that particular) proof (blog post #898).
Skeptic: Okay. But I asked experts and they disagree with your arguments.
Leverage Researcher: You will soon learn that your smart friends and experts are not remotely close to the rationality standards of Leverage Research, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.
Skeptic: Ummm, okay. To refine my estimate of your theory of psychology: what do you anticipate seeing if your ideas are right? Is there any possibility to update on evidence?
Leverage Researcher: No, I don’t know enough about psychology to be more specific about my expectations. We will know once we try it; please support us with money to do so.
Skeptic: I am not convinced.
Leverage Researcher: We call that motivated skepticism (see blog post #1355).
You would invoke this on someone asking for only specific evidence for your theory. It doesn’t make sense to invoke it against someone asking for ANY evidence.
You have to take the outside view here. When an outsider asks if you have evidence that AI will go FOOM, they are not talking about arguments, because in the opinion of a lot of people convincing arguments are not enough. That doesn’t imply that it is wrong to act on arguments, but that you are so far detached from how people actually think that you don’t even see how ridiculous it sounds to an outsider who has not read the Sequences, as your comment and the 11 upvotes it got obviously show.
The way outsiders see it, a lot of things can sound very convincing and yet be completely wrong, and only empirical evidence or mathematical proofs can corroborate extraordinary predictions like those made by SI.
The wrong way to approach those people is with snide remarks about their lack of rationality.
Your reply makes me think that you interpreted the ‘you’ in “You would invoke …” as you—XiXiDu, so it sounded like Incorrect was accusing you of being hypocritical. I think they might have just meant ‘one’, though, which would make their reply less of a snide remark and more of an (attempted) helpful correction.
I’m guessing you didn’t read it that way because Incorrect was attempting to correct the way Leverage Researcher was using that argument, but you didn’t identify with the Leverage Researcher character in your dialogue. So when Incorrect posted that as a reply to you, you thought they were saying that you yourself are just as bad as your character. I’m guessing about what’s going on in two different people’s brains though, so I could easily be wrong.
And, in particular, you would invoke it when the proof demanded is proof that should not exist even given that the theory is correct.
The parent is misinformed trolling.
(Not) Leverage Researcher: Yes I have. What are you basing this pretentious diatribe on? You clearly know nothing about us and are pattern matching to the issues you rant on about incessantly with respect to SingInst.
(Not) Leverage Researcher: I haven’t seen you be convinced about anything substantial ever. “Motivated skepticism” would be a polite declaration—it assumes your ‘curiosity’ is not a rhetorical facade. (See blog post #1355 for the appropriate response to this kind of rubbish.)
I thought it was supposed to be funny. Curse you Poe’s Law!
It is supposed to be funny: humor at the expense of Leverage Research (or, based on content, perhaps SIAI?). Humor is one of the most effective ways of undermining something’s credibility, and making jokes based on false premises is rather insidious. If you reject any blatant falsehoods, you may be portrayed as unable to take a joke.
I saw that coming and I knew it would be you. People are either trolling or part of a dark arts conspiracy.
The comment I wrote is the way I perceived lesswrong when I first came here. And I can tell from conversations with other people that they share that opinion.
A lot of your comments are incredibly arrogant and consist of dismissive grandstanding. And on request you rarely explain yourself, but merely point out that you don’t have to do so.
I wrote the comment so that SI can improve their public relations.
I can’t tell whether you guys are metatrolling each other or what.
Can they come to an Aumann agreement on the matter?
TWO OPINIONS ENTER! ONE OPINION LEAVES!
Only perfectly rational people are guaranteed to be able to do that. And you know that they are not both rational once accusations like “trolling” and “arrogance” start flying.
Sometimes an Aumann agreement just isn’t appropriate. This is one of those times.
Leverage Research is not SingInst. I have some reservations about their ideas, but they don’t overlap at all with the ones you’ve expressed here. Generalizing complaints you have against SingInst comes across as holding a grudge and misapplying it.
I don’t think that’s what happened. I think the aim was not to criticise LR at all, but to demonstrate to LW what LW’s answers to XiXiDu’s questions look like to him, when they’re portrayed as coming from somewhere else, somewhere LW is not particularly impressed by.
Really?
How confident were you that your comment would result in noticeable improvements to SI’s public relations?
People here have pretty much stopped replying to objections with “you should read the Sequences”. This suggests that pointing out socially clunky behaviour is worth at least trying, for all the outcries of the stung.
Mm. That’s fair.
Updated in favor of communication being a marginally less hopeless way of improving the world than I’d previously believed.
I am confident that people like Luke Muehlhauser will update on my comment and realize that you can’t approach outsiders the way it often happens on lesswrong. I have been voicing this particular criticism for some time now, and it has already gotten a lot better.
Although people like wedrifid will probably never realize that it isn’t a good idea to link to lesswrong posts as if they were the holy book of everyone who is sane, while at the same time depicting everyone who disagrees as either stupid, a troll or a master of dark arts.
Just check his latest comment: all he can do is attack people with a litany of charges, like being logically rude or being unable to change their mind.
On a first pass, the Leverage Research website feels like Objectivism. I say this because it is full of dubious claims about morality and psychology that are presented as basic premises and facts. The explanations of “Connection Theory” are full of the same type of opaque reasoning and fiat statements about human nature, which perhaps I am particularly sensitive to as a former Objectivist. Knowing nothing more than this first impression, I am going to make a prediction that there are Objectivist influences present here. That seems at least somewhat testable.
There are no Objectivist influences that I am aware of.
I didn’t notice any Objectivist influences looking through the high-level claims on the Leverage website, but their persuasive style does remind me quite a bit of Objectivism’s: lots of reasonable-sounding but not actually rigorous claims about human thinking, heavy reliance on inference, and a fairly grandiose tone in the final conclusions. I’d credit this not to direct influence but to convergent evolution. To Leverage’s credit, Connection Theory does come off as considerably less smug, and the reductionism isn’t as sketchy.
Now, none of this is a refutation—I haven’t gone deep enough into Leverage’s claims to say anything definitive about whether or not any of this stuff actually works. Plenty of stuff that I’d consider true reminds me of Objectivism’s claims, or of those of other equally pernicious ideologies. But it’s definitely enough to inform my priors, and it should shed light on some potential signaling problems in the presentation.
Maybe you are not aware of them?
Your denial would be more convincing if you compared and contrasted CT ideas and objectivist ideas.
Unfortunately, I’m not familiar with Ayn Rand’s ideas on psychology.
For a given value of ‘unfortunate’. :)
^Beat me to it.
Since Connection Theory is mostly Geoff Anders’ work, I would be very surprised if it could have big influences he wasn’t aware of (maybe if he delegated a lot of stuff to Objectivist students or something, or was heavily influenced by some Objectivist psychologist).
I’m not an expert on Objectivism, but one of Rand’s principles was to always pass moral judgement.
Connection theory has much less moral judgement to it than most approaches.
It’s conceivable that there’s a similar intellectual style of trying to understand the world by starting with abstractions, but that’s not necessarily a matter of direct influence.
Maybe you should add a note at the top of the comment explicitly stating that it is not really about Leverage and does not at all represent your views about them.
You are trolling Leverage because you have issues with SingInst? It just isn’t ok to slander an organization like that based, from what I can tell, on the fact that there are social affiliations between Leverage and another group you disapprove of.
I thought the point was that the comment showed how the arguments, which we’ve gotten used to and don’t fully question anymore, would look ridiculous when applied in a different context. (It was a pretty effective demonstration for me—the same responses did look far less convincing when they were put in the mouth of Leverage Research people rather than LW users.)
Exactly right.
Some remarks:
I don’t think the arguments LW/SI uses against its opponents are wrong, but reality is more complex than the recitation of a rationality mantra.
If you want to discuss or criticize people who are not aware of LW/SI then you should commit to an actual discussion rather than telling them that they haven’t read the sequences.
There is no reason for outsiders to suspect that LW/SI has any authority when it comes to arguments about AI, quantum physics or whatever.
If you want to convince outsiders then you should ask them questions and voice your own opinion. You should not tell them that you have it all figured out and that they just have to read those blog posts you wrote.
You should not portray yourself as the single bright shining hope for the redemption of humanity’s collective intellect. That’s incredibly arrogant and cultish.
You have to distill your subject matter and make it more palatable for the average person who really doesn’t care about being part of the Bayesian in-crowd.
Could you please stop making such accusations? It’s becoming ridiculous. If you have nothing sensible to say then let the matter rest. Your main approach to gaining karma seems to be quantity rather than actual argumentation.
I was just making fun of the original post that described Leverage Research as “secular messianism”. At the same time I was pointing out something important about how some behavior here could be perceived.
You seem to be the actual troll here who hides behind the accusation of trolling.
The people being slandered here aren’t just strangers on the internet—they are people I know. If I see them being misrepresented then of course I am going to object. I spent a week taking classes from Geoff, and he most certainly has studied (and researched) psychology. Yet his company is portrayed here as uneducated. And then, by way of justification, you say:
I most certainly am going to make accusations about that, because it just isn’t ok. You don’t go around misrepresenting the qualifications and credibility of Leverage Research just because you have an issue with the Singularity Institute.
There’s only one way I was able to interpret XiXiDu’s top comment (the one you link to), and that was as a satire of responses to his many previous questions about SIAI. I can’t read it as a slander against Leverage at all. To me, this thread is roughly equivalent to attacking Jonathan Swift for his policy of baby-eating.
Now you are being hypocritical. The author of the original post was the one who was rude with respect to Leverage. But you have chosen to attack me instead, I suspect because you agree with the author of the original post but get all outraged if someone does criticise your precious SI.
That’s basically a confession.
Or the result of having an accurate model of wedrifid.
When I read XiXiDu’s original comment, I also predicted wedrifid would respond negatively.
What’s wrong with rhetorical facades? I would even say they are one of my favorite things.
I don’t think that this is actually true. While I don’t know of Geoff’s specific plans to test CT, I do know that he’s interested in continuing to do so.
The author of the original post is skeptical about Leverage and I showed what would happen if Leverage was like lesswrong/SI. I am not criticizing Leverage.
...I’m not sure whether you’re making fun of Leverage Research or LessWrong in general here. Which is worrying.
To address the point behind the parody: the main difference between this and the analogous argument with “SIAI researcher”, besides user:Incorrect’s point and the fact that not being convinced is almost never automatically equated with motivated skepticism, is that the links to the blog posts don’t work. When they do, I don’t think the practice of linking to blog posts is problematic at all. It reduces the need to repeat arguments, and centralizes discussion of a particular issue to the comments of the corresponding post, instead of it being all over the place.

Your dialog also gives the impression that you can find a post from the LW archives to support “anything”, that the linked post usually appears as incomprehensible and seemingly unrelated as a bunch of random digits, and that the act of giving someone a link to a relevant blog post is mainly a way to confuse and intimidate them with authority, the point never being for them to actually read it. Each of these impressions is false, as far as I can tell.