The real problem is holding the belief that you are the only organization with a plan that might work while at the same time requiring secrecy that cuts participants off from outside feedback that might make them doubt that this is the case. If you then add strong self-modification techniques that further strengthen the belief, that's a bad environment.
I’m not sure how to pinpoint the disagreement here.

I think it’s bad, possibly very bad, to have delusional beliefs like this. But by default we don’t know how to decouple belief from intention. Saying “we’re the only ones with a plan to save the world that might work” is part belief (e.g., it implies that you expect to always find fatal flaws in others’ world-saving plans), and part intention (as in, “I’m going to make myself have a plan that might work”). We also can’t by default decouple belief from caring. Specialization can be interpreted as a belief that being a certain way is the best way for you to be; that belief isn’t objectively true, but it results in roughly the same actions. The intention to make your plans work, and caring about the worlds in which you can possibly succeed, are good; and if we can’t decouple these things, it might be worth having false beliefs. (Of course, it’s also extremely worth becoming able to decouple belief from caring and intention, and ameliorating the negative effects on the margin by forming separate beliefs about the things you can decouple, e.g. using explicit reasoning to figure out whether someone else’s plan might work, even if intuitively you’re “sure” that no one else’s plan could.)
I think it’s clearly bad to prevent feedback for the sake of protecting “beliefs”. But secrecy makes sense for other reasons. (Intentions matter because they affect many details of the implementation, which can add up to large overall effects on the outcomes.)
I think there are two kinds of secrecy. One is about not answering every question that outsiders have. The other is about forbidding insiders from sharing information with the outside.
Power easily corrupts processes. Playing around with strong self-modification is playing with a lot of power.
Secrecy has a lot of easily visible benefits because it reduces your attack surface. But it also has costs, and when doing radical projects it’s generally wise to be skeptical of versions of secrecy that prevent insiders from sharing information that isn’t of a personal nature.