The only flaw I can see is that this reasoning seems to put a lot of weight on the probability of meeting disaster by sticking with a well-thought-out, well-tested but imperfect set of rules, and not enough weight on the probability of meeting disaster by trying to be clever and do better than the rules, whether out of genuine fear of their limitations, or in order to follow a course of action that you favour for reasons that aren’t really part of your goals but that you then feel free to endorse by appealing to the fear of the rules’ limits.
I get that the main point of the post is that you can’t just relax, get comfortable, and think you’ve finally got the perfect way of thinking if you want to actually get the right answers often enough, and I agree with it. But I still feel that, when considering whether to depart from the rules, even those of lowly common sense, the possibility that you are about to shoot yourself in the foot and meet some interesting new disaster, for reasons you’ll only see afterwards and that could have been avoided by sticking to the rules, is significant compared to the possibility that you are actually in a situation where the rules aren’t enough and you’ll have to improve on them to succeed. That probability should be carefully weighed before choosing how to act, and that warning should have been included in the reasoning above, as it usually has been in other posts.
A practical example would be making this comment as a newcomer, ignoring the common-sense consideration that the flaw feels kind of obvious, and that someone else would have pointed it out by now if I weren’t simply missing the point of the post. Still, I think that “don’t trust the rules” is itself a rule that requires an awful lot of caution before being mistrusted too much.
It’s easy to list flaws; for example, the first paragraph admits a major flaw; and technically, if trust itself is a big part of what you value, then it could be crucially important to learn to “trust and think at the same time”.
Is either of those the flaw he found?
What we have to go on are “fairly inexcusable” and “affects one of the conclusions”. I’m not sure how to filter the claims into a set of more than one conclusion, since they circle around an idea which is supposed to be hard to put into words. Here’s an attempt.
Tentative observation: the impressive (actively growing) rationalists have early experiences which fall into a cluster.
The core of the cluster may be a breaking of “core emotional trust”.
We can spell out a vivid model where “core emotional trust” is blocking some people from advancing, and “core emotional trust” prevents a skill/activity called “lonely dissent”, and “lonely dissent” is crucial.
We can have (harmful, limiting) “core emotional trust” in science (and this example enriches our picture of it, and our picture of how much pretty obvious good “lonely dissent” can do).
There is no (known) human or mathematical system which is good (excusable, okay, safe) to put “core emotional trust” in.
“Core emotional trust” can only really be eliminated when we make our best synthesis of available external advice, then faithfully follow that synthesis, and then finally fail; face the failure squarely and recognize its source; and then continue trying by making our own methods.
More proposed flaws I thought of while spelling out the above:
An Orthodox Jewish background “seems to have had the same effect”, but then the later narrative attributes the effect to a break with Science. Similarly, the beginning of the post talks about childhood experiences, but the rest talks about Science and Bayescraft. In some ways this seems like a justifiable extrapolation, trying to use an observation to take the craft further. However, it is an extrapolation.
The post uses details to make possibilities seem more real. “Core emotional trust” is a complex model which is probably wrong somewhere. But, that doesn’t mean it’s entirely useless, and I don’t feel that’s the flaw.
The argument that Bayesianism can’t receive our core trust is slightly complex. Its points are good so far as they go, but to jump from there to “So you cannot trust” period is a bit abrupt.
It occurs to me that the entire post presupposes something like epistemic monism. Someone who is open to criticism, has a rich pool of critique, a rich pool of critique-generating habits, and constant motivation to examine such critiques and improve, could potentially have deep trust in Science or Bayescraft and still improve. Deep trust of the social web is a bit different—it prevents “lonely dissent”.
“Core emotional trust” can possibly be eliminated via other methods than the single, vividly described one at the end of the article. Following the initial example, seeing through a cult can be triggered by other members of the cult making huge errors, rather than by one’s own.
I suppose that’s given me plenty to think about, and I won’t try to guess the “real” flaw for now. I agree with, and have violated, the addendum: I had a scattered cloud of critical thoughts in order to feel more critical. (Also: I didn’t read all the existing comments first.)