I’m not sure what the meaning, if any, of the following fact is, but: I notice that I would feel very positively about Leverage as it’s portrayed here if there weren’t relationships with multiple younger subordinates (e.g. if the leader had been monogamously married), and as it is I feel mildly negative about it on net.
That evidence wasn’t necessary for me. The secrecy + “charting/debugging” + “the only organization with a plan that could possibly work, and the only real shot at saving the world” is (if true) adequate to label the organization a cult (in the colloquial sense). These are all ideas/systems/technologies that are consistently and systematically used by manipulative organizations to break a person’s ability to think straight. Any two of these might be okay if used extremely carefully (psychiatry uses secrecy + debugging), but having all three brings it solidly into cult territory. Also, psychiatry has lots of rules to prevent abuse, including public, well-established ethical standards.
Are Leverage’s standard operating procedures auditable knowledge to outsiders? If not, this is the mother of all red flags and we should default to “cult”.
Edit: LarissaRowe didn’t reply to this comment because Leverage doesn’t have a leg to stand on.
Edit ×2: Shaming someone into a response violates the norms of Less Wrong. The first edit was a mistake. I apologize.
In psychiatry there’s no secrecy about treatment protocols, and there are no secrecy rules that prevent patients from sharing their experience.
That’s a good point. The psychiatrist (who has power) is sworn to secrecy but the patient (who is vulnerable) isn’t.
> “the only organization with a plan that could possibly work, and the only real shot at saving the world”
It’s definitely a warpy sort of belief. The issue to me, and why I could still feel positively about such an organization, is that the default for people and organizations might be a strong, false lack of hope. In which case, it might be correct to have what seems like a delusional bubble of exceptionalism. It still seems to have some significant bad effects, and is still probably partly delusional, but if we don’t know how to get the magic of Hope without some delusion, I don’t think that means we should throw away Hope.
> Are Leverage’s standard operating procedures auditable knowledge to outsiders?
It would be nice to live in a world where this standard were good and feasible, but I don’t think we do. Not holding this standard does open us up to the possibility of all sorts of abuse hiding in relative secrecy, but unfortunately I don’t see how to avoid that risk without becoming ineffective.
I think the things you point out are big risk factors, but to me don’t seem to indicate a “poison” in the telos of the organization. Whereas sexual/romantic stuff seems like significant evidence towards “poison”, in the sense of “it would actually be bad if these people were in power”.
The real problem is having the belief that you are the only organization with a plan that might work, while at the same time requiring secrecy that cuts participants off from outside feedback that might make them doubt this is the case. If you then add strong self-modification techniques that also strengthen the belief, that’s not a good environment.
I’m not sure how to pinpoint disagreement here.
I think it’s bad, possibly very bad, to have delusional beliefs like this. But I think by default we don’t already know how to decouple belief from intention. Saying “we’re the only ones with a plan to save the world that might work” is part belief (e.g., it implies that you expect to always find fatal flaws in others’ world-saving plans), and part intention (as in, I’m going to make myself have a plan that might work). We also can’t by default decouple belief from caring. Specialization can be interpreted as a belief that being a certain way is the best way for you to be; it’s not true, objectively, but it results in roughly the same actions. The intention to make your plans work, and caring about the worlds in which you can possibly succeed, is good; and if we can’t decouple these things, it might be worth having false beliefs (though of course it’s also extremely worth becoming able to decouple belief from caring and intention, and ameliorating the negative effects on the margin by forming separate beliefs about things that you are able to decouple, e.g. using explicit reason to figure out whether someone else’s plan might work, even if intuitively you’re “sure” that no one else’s plan could work).
I think it’s clearly bad to prevent feedback for the sake of protecting “beliefs”. But secrecy makes sense for other reasons. (Intentions matter because they affect many details of the implementation, which can add up to large overall effects on the outcomes.)
I think there are two kinds of secrecy. One is about not answering every question that outsiders have. The other is about forbidding insiders from sharing information with the outside.
Power easily corrupts processes. Playing around with strong self-modification is playing with a lot of power.
Secrecy has a lot of easily visible benefits because you reduce your attack surface. But it has its costs, and it’s generally wise to be skeptical of versions of it that prevent insiders from sharing information that’s not of a personal nature when doing radical projects.
> if we don’t know how to get the magic of Hope without some delusion
Planning for success doesn’t require knowledge of success, and doesn’t get better if you believe things that can’t be known. Hope is a good concept for this situation: a chance of success whose probability needn’t be significant; it’s the value of success that makes hope relevant.
Hope makes sense as a concept of curiosity more than as one of decision making, so that you are not vulnerable to misleading expected utility calculations, but get some guidance for filling in the chart of possible plans, taking steps towards enacting them.
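To make the structure of that point concrete, here is a minimal worked example (the numbers are mine, not from the thread): what makes hope relevant is the product of probability and value, not the probability alone.

% Illustrative only; p and V are assumed numbers, not claims from the discussion.
\[
  \mathbb{E}[U] \;=\; p \cdot V, \qquad p = 10^{-3},\quad V = 10^{6} \quad\Rightarrow\quad p \cdot V = 10^{3}.
\]

So a plan with a 0.1% chance of success can still dominate on expected value; and because this arithmetic is trivially gamed by inflating V, it fits the warning above to treat hope as guidance for exploration rather than as an expected-utility decision rule.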
Yeah, if I follow you, I think I agree that Hope is most essential in the realm of curiosity. It seems like Leverage was/is aimed at realms that are deeply ontologically uncertain (what are the possibilities for using my mind radically more effectively, what really matters for affecting the world), which entails that curiosity and probing possibility-space is a nearly permanent central feature of what they’re trying to do. To say it more concretely, asking a really weird question and trying out really weird answers might feel intuitively more appealing if you think that you’re exceptional, and if you think your social context is exceptionally able to pick up on weird but true/important results.
> might feel intuitively more appealing if you think that you’re exceptional
More appealing compared to what alternative? Don’t stand still, do the work. There is rarely a reason to prefer a particular step of a large journey over all other steps. That’s the character of curiosity.
Hm, it seems like you’re arguing against the stance I’m describing, where my main point is just that this is a stance many people take. I sometimes find that I’ve been taking a stance like this; when I reflect on it I’ve never agreed with it, but that doesn’t mean it’s not happening. Maybe you’re rejecting putting effort into accommodating this stance, rather than unraveling it?
> my main point is just that this is a stance many people take [...] putting effort into accommodating this stance, rather than unraveling it
Formulating what might be going on gives something specific to talk about. But then what’s the point of settling on an emotional valence? Discussing the error seems interesting, regardless of what attitude that props up. The patch I proposed actually preserves the positive qualities; it isn’t a demonstration of their absence.
> There is rarely a reason to prefer a particular step of a large journey over all other steps. That’s the character of curiosity.
I didn’t get the essence of your proposal from this. Could you phrase this as advice to, for example, Elon Musk (taking Elon as an example of someone who’s making good use of slightly delusional “beliefs” about his plans, while still remaining very solidly in contact with reality)?
Elon is one of the least delusional people. Not many people start companies, as Elon did, while believing there’s only a ten percent chance of success.
Elon sets goals that often won’t be achieved, but that’s not the same as having delusional beliefs.
I agree he’s exceptionally well in contact with reality. But also, part of his “setting goals” involves making “predictions” about timelines, which are very often wrong quantitatively (while being correct “in spirit”, in the sense that they achieve the goal, just later than “predicted”).
Elon generally isn’t public about the likelihood of various events in his timelines, and he speaks about his timelines as optimistic guesses.
When a civilization gets curious, each individual only gets to work on a few observations, and most of these observations are not going to be foreknowably more important than others, or useful in isolation from others that are not even anticipated at the time, yet the whole activity is worthwhile. So absence of a reason to pursue a particular activity compared to other activities is no reason for not taking it seriously. It’s only presence of a reason to take up a different activity that warrants change.
What if there’s an abundance of specific reasons to take up various activities, and which ones you want to invest in seems to depend heavily on “follow through”, i.e. “are people going to keep working on this”?
> abundance of specific reasons to take up various activities [...] “are people going to keep working on this”?
With some transitivity of preference and a world that’s not perpetually chaotically unsettled, people or organizations should be able to find something to work on for which they have no clearly better alternatives. My point is that this is good and worth doing well even when there is no reason to see what they are currently doing as clearly better than the other things they might’ve been doing instead. And if not enough people work on something, it won’t get done, which is OK if there is no reason to prefer it to other things people are actually working on (assuming that neglectedness is not forgotten as a reason to prefer something).
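The transitivity step can be made explicit with a small formal note (my framing and notation, not the commenter’s):

% Sketch under assumptions: X is a finite set of options and \succ is a
% transitive, irreflexive strict preference. Transitivity rules out cycles,
% so any chain of strictly preferred options must terminate:
\[
  \exists\, x^{*} \in X \ \text{such that}\ \neg\exists\, y \in X \ \text{with}\ y \succ x^{*}.
\]

That is, in a settled world there is always some option with no clearly better alternative, which is the “something to work on” the comment describes.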
> And if not enough people work on something, it won’t get done, which is OK if there is no reason to prefer it to other things people are actually working on
Well, one might prefer that something rather than nothing gets done. In which case it matters whether other people will work on it. In particular, when an organization with multiple people “decides” to do something, that’s tied up with believing that they will work on it, which affects motivation to work on it.
> even when there is no reason to see what they are currently doing as a clearly better alternative to the other things they might’ve been doing instead
So, if you believe that you’re pursuing an “objectively” better plan, in particular you think that other people will recognize that your plan is good and will want to work on it; so your belief is tied up with acting in a way that will be successful if other people continue your work.
It provides an alternative explanation for the motivation behind the entire project. More disturbingly, that alternative seems to explain some facts better, such as why, after all the work and money spent, after all the grandiose secret plans, there is still no tangible output.
EDIT: The part “no tangible output” was not fair, I apologize for that. I am not updating the comment, because it would feel like moving the goalpost.
I appreciate the edit, Viliam.
I know that it was a meme about Leverage 1.0 that it was impossible to understand, but I think that is pretty unfair today. If anyone is curious, here are some relevant links:
- Our website
- Research we’ve done on Volta’s Electrophorus, Gilbert’s Electricks, the Leyden Jar, and Ørsted’s discovery of Electromagnetism.
- Our quarterly newsletter
- Our 2019-2020 Annual Report
- Research reports from Leverage 1.0 on Argument Mapping and Intellectual Practice Examination.
- Our $50K grant from Emergent Ventures.
- An overview and videos from the Bottlenecks in Science and Technology event we co-organized. (The event led to around $26M in donations for a “Fast Grant”-style project in longevity research.)
We’re no longer engaged with the Rationality community, so this information might not have become common knowledge. Hopefully this helps.
I added a sub-bullet to the main post, to clarify my epistemic status on that point.
I have now made an even more substantial edit to that bullet point.
I think Bismarck Analysis (a consulting company), Paradigm Academy (a training organization), and Reserve (a cryptocurrency) all came out of Leverage.