We wouldn’t actually want Elon clones; he’s combative and erratic and quite difficult to coordinate with (which incidentally seems to be relatively common for abnormally effective people who emerge seemingly randomly in the wild), but imagine 2,000 highly effective benevolent people who can get along with others.
I wonder how much of this is a consequence of being so different from the people in your environment that you must ignore most of their advice in order to unleash your true powers. Perhaps 2,000 highly effective benevolent people growing up together would be naturally more cooperative, because they would have someone worth cooperating with.
I realize that some of you may feel like this is off-topic. It’s maybe airing grievances, which you were hoping for, but it’s not sensational in any way and it’s not directly addressing any of the claims that have been made about Leverage.
Actually, these are the data I was looking for. I wanted to have a near-mode picture of what day-to-day life in Leverage looked like. You provided your perspective, thank you! So did Zoe; now I have two pieces of the puzzle.
(I am also curious about the specific theories, but perhaps some other day someone else will leak the data.)
It’s the far-mode view, where people keep insisting that Geoff is either the most genius philosopher ever or an evil cult leader, that I find quite boring. Like, maybe one or the other conclusion is right, or maybe it’s something in between, or maybe it’s something else entirely… but give me the data, and I will draw the conclusion myself. It may be a conclusion different from the one you wanted me to reach… but that’s exactly the point.
One thought I’ve had is that perhaps the Rationalist community in particular is uncomfortable allowing outside theories in without first insourcing them, but they don’t have a structure for processing and vetting content other than through their leadership, and their leadership is busy and hasn’t opted-in to serving that function. So instead there’s a cursory “eh, looks wrong” and then there’s not really anywhere to go from there for either actually engaging with hard-to-prove content or clearing the status hit. Or maybe the issue is more status-based to begin with—some kind of allergy to confidence?
I believe this is spot on. The community attracts the kind of people who love nitpicking. So when a theory is presented, someone always goes “this is not exactly right”, and it can quickly escalate to “this is complete bunk”. And it doesn’t help that people are so familiar with the replication crisis that they can dismiss even things they have no evidence against by saying “seems like one of those things that won’t replicate”. In theory, the replication crisis should make people update towards “I am not really sure whether this is right or wrong”, but in practice it often makes them update towards “I am sure this is wrong”.
And then the situation becomes completely different when a high-status member of the rationalist community vouches for a theory they found in… a scientific paper, a pop-science book, a self-help book, a pop-Buddhism book, or someone’s blog. Now, attacking the idea would be like attacking its supporter; only a higher-status person can do that safely. Like, Eliezer himself could say: “Excuse me, but how strong is the scientific evidence in favor of this theory?” and then the supporter might backpedal to: “eh, I was just using it as a fake framework that I personally found useful in certain situations, but of course I am not claiming that...” :D
Yes, the rationalist community has a problem with evaluating theories.
This does not automatically imply that Geoff’s theory is correct or useful. It also does not imply that Geoff’s theory was dismissed without a debate… because it wasn’t.
Let’s start with the fact that Connection Theory was introduced to LessWrong readers by Luke Muehlhauser, the former CEO of the Singularity Institute. (I am not sure whether he was the CEO at the exact moment the article was posted.) That sounds to me like pretty strong insider support. Luke continues with:
I suspect many LWers are excited (like me) to see another organization working on (among other things) x-risk reduction and rationality training, especially one packed with LW members. But I also suspect many LWers (like me) have many concerns about your research methodology and about connection theory. I think this would be a good place for you to not just introduce yourself (and Leverage Research) but also to address some likely concerns your potential supporters may have (like I did for SI here and here).
(In the second link, Luke answers questions similar to what people had about Leverage: What does the Singularity Institute actually do? Why is their research not peer-reviewed? Did they develop anything practical? Luke admits that SI has a problem and he promises to fix it. He makes some specific promises, some of them even with specific deadlines, and says that he will work towards greater transparency of the organization. Later, Singularity Institute / Machine Intelligence Research Institute actually published some stuff.)
Then came the critique, which started with describing Connection Theory and Geoff’s evidence in favor of it. Then the author explained why the evidence does not mean what Geoff thinks it means. So, this was not a knee-jerk reaction.
Then Eliezer weighed in and asked the unpleasant question: what predictions does CT make that mainstream cognitive science does not? Geoff answered, approximately, that there were no general predictions, but that if you make a detailed map of someone’s beliefs, you can make good specific predictions about that specific person. Then PJ Eby (who has read tons of books on psychology and self-help) said that if you spend a lot of time making a model of a specific human, this of course allows you to make good predictions about them, but that is unrelated to whether the theory itself is true; it just means that having more data is usually better. Then he mentioned an existing technique called the “Current Reality Tree”, which is similar but much simpler. (No reply.)
Geoff said that the theory had worked for him to get rid of a specific addiction, and that no psychologist would have predicted that. PJ Eby told him he was merely unfamiliar with mainstream psychological research, where the idea of “secondary gain” is already well known. (I can confirm that even I have heard of it.) In another comment he also mentioned “Perceptual Control Theory” and the “Method of Levels”, which again seem like simplified versions of CT. (The word “simplified” is not derogatory in this context; it points towards Occam’s razor.) Etc.
...from my perspective this all means that Geoff’s Connection Theory was nicely introduced to the LessWrong audience; it just failed to impress that audience. Part of the reason was that some LW readers are actually quite familiar with the current state of psychology and cognitive science, so they could point out that the useful parts of CT are already known to mainstream science, and the extra parts are not useful. And then… silence. On the side of Leverage, no one tried to convince the LW audience. On the side of LW, the topic seemed settled.
So to me it seems that Geoff decided (perhaps correctly) that if the LW audience wasn’t immediately impressed with his grand theory, it would be a waste of his time to provide more information or address their concerns. But that is not the same as LW refusing to engage with the theory. He actually got some useful feedback (for example, that more elegant versions of his theory already exist); he just seemed to ignore it entirely.
Another way to put it is that instead of trying to convince his audience, Geoff selects his audience from people who are easy to impress. Such as kids fresh out of school. When facing people who actually know something about mainstream psychology, and can tell him how specifically he is wrong, and that the good parts of his theory are actually not new… it is easier to withdraw, and continue impressing the kids in his kingdom.
Instead of assuming that because you can’t see our progress, it doesn’t exist/doesn’t have value, it seems like people should be more curious — recognize that they might be missing something important. “Why are so many smart and strategic people investing financial and human capital into this project?”
Ok, this one is easy. Donors are in the business of buying “lottery tickets”. They support many things, not because they are convinced each one will definitely work, but because the probability seems sufficiently non-zero; so if they support 100 things, and 10 of them turn out to be okay, and 1 of them turns out to be really impressive… that is a success in their books.
Geoff is smart and charismatic, he was hanging out with people from the rationalist community, and his project seemed interesting. It was a ticket worth buying.
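(To make the “lottery ticket” framing concrete, here is a toy portfolio calculation; the numbers are entirely made up for illustration and are not from anyone’s actual grant data. The point is just that even if the vast majority of bets return nothing, a single big hit can carry the whole portfolio.)

```python
# Toy model of a donor's "lottery ticket" portfolio (illustrative numbers only).
cost_per_grant = 1.0
returns = (
    [0.0] * 89     # 89 grants return nothing
    + [2.0] * 10   # 10 grants turn out "okay" (roughly 2x their cost)
    + [100.0]      # 1 grant is a big success
)

total_cost = cost_per_grant * len(returns)
total_return = sum(returns)
print(f"spent {total_cost:.0f}, got back {total_return:.0f} "
      f"({total_return / total_cost:.1f}x)")  # 1.2x overall despite an 89% failure rate
```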
This fits in with my comments about the ways that we were, I think, poorly treated by some of the surrounding communities and orgs. Not wanting to acknowledge our contributions, not wanting to give us a platform, not wanting us to hire people, and sometimes being pretty attacky.
Uhm, if you want to make the LessWrong audience acknowledge your contributions, writing an article called “Report on Leverage Research contributions” sounds like the straightforward solution. Did you expect it would get banned? Were you waiting for Luke Muehlhauser to post another article on your behalf?
If the story had been that we (Geoff? Everyone who ended up in a leadership position?) had been trying to maliciously brainwash and manipulate and control people for 6 years and finally succeeded after year 7 and then shut it down…well, that seems like a pretty odd explanation.
Another possible story could be that after 6 years of tinkering with various “psychological techniques”, Geoff came across something so harmful that it made his group fall apart.
I am referring to the “intention research” here. I assume it was originally meant to be a weapon against outsiders, but it backfired, as different groups within Leverage started using it against each other.
The poison was so strong that it was easier for Geoff to just burn down the whole thing, erase the website, change the organization’s name, and start again with new members somewhere else, where hopefully his previous reputation would not follow him.
Also, I suppose that the more time passed, the more outside observers became impatient about the lack of visible output. Excuses like “we are still working on Connection Theory” make more sense in year 1 than they make in year 6.
For example, speaking for myself, I was originally quite positive about Leverage Research. If you asked me a year or two ago, I would probably be like “oh, they are some awesome rationalists in the Bay Area, I don’t actually know any details about them, but that’s probably my fault because I am on the opposite side of the planet”. Then at some moment I realized that the only reason I believed Leverage was doing something useful was that Leverage told me so, without providing any tangible evidence. So I stopped believing that.
Thanks for writing this up, I had similar thoughts.
Overall I’m glad Cathleen wrote this post, as it gave me a lot more insight into what life in Leverage was like and why, and more empathy for what people there have been going through. I really hope she and everyone else manage to carry on, be successful, not be stigmatized, and keep working on things that are important to them.
But also, I have not updated in the direction of “Leverage was actually working on important or meaningful things that are valuable to other people outside its ecosystem.” I’m still waiting for that, and interested in seeing things that might indicate it. It would be fantastic to learn of new psychology research or insights that could help me and my friends and clients.
This also seems central to why the question of whether Leverage is being treated unfairly now feels very different from the question of whether they were treated unfairly before.
I am referring to the “intention research” here. I assume it was originally meant to be a weapon against outsiders, but it backfired, as different groups within Leverage started using it against each other.
The point of intention research was not to be a weapon. I expect that most of the problem was people using it, without consent, to do coaching interventions on other people that were intended for those people’s own good.
Another aspect is that, in an attempt to increase their perceptive abilities and get better at the field, people went through processes that made them more sensitive and thus more vulnerable.
A third problem is that it gets people to believe that they can accurately mind-read other people, which then leads to all sorts of problems when it turns out that their mind reading isn’t fully accurate.
The poison was so strong that it was easier for Geoff to just burn down the whole thing, erase the website, change the organization’s name, and start again with new members somewhere else, where hopefully his previous reputation would not follow him.
There’s no name change of the organization. It’s still Leverage Research.
Then at some moment I realized that the only reason I believed Leverage was doing something useful was that Leverage told me so, without providing any tangible evidence.
Belief Reporting is useful, and I think you were regularly at the LessWrong Community Weekend, so you could have attended the one workshop on it (given by someone who wasn’t at Leverage themselves, but who had been at one of the Leverage workshops).
I think I was at one LW Community Weekend in Berlin, at least six years ago, and I no longer remember most of what happened there, so I can neither confirm nor deny this.
Going off-topic here: is it just me, or is Elon Musk’s greatest superpower the ability to get ridiculous amounts of funding for very unusual ventures (SpaceX, the Boring Company, Neuralink, etc.)? There must be many more people besides Elon Musk who could be an effective CEO for a speculative technology company, but I suspect that if anyone other than Elon Musk had tried to pitch The Boring Company to venture capitalists, they’d be laughed out of the room.
(Similarly, right now Warren Buffett makes money not by outwitting the market the way he did years and years ago, but because people are willing to give him better terms than other investors, since having the Warren Buffett Seal of Approval is valuable.)
Many of Viliam’s comments are at-or-above the quality standard of top-level posts. This is one of them.