“the tools of LW rationality are better at helping you converge towards the boundary of what is already known and worse at getting past that.”
For what it’s worth, in my neuroscience work, I think of myself as doing a lot of new-hypothesis-generation (or new-theoretical-framework-generation) and not so much evaluation-of-existing-hypotheses. If you believe me in that characterization, then my perspective is: I agree that The Sequences’ emphasis on Bayes rule is not too helpful for that activity, but there is lots of other stuff in The Sequences, and/or in later rationality stuff like the CFAR handbook and Scout Mindset, and/or in LW cultural norms, that constitutes useful tools for the activity of new-hypothesis-generation. Examples include the “I notice that I’m confused” thing, ass numbers and probabilities, ITTs, changing-one’s-mind being generally praiseworthy, “my model says…”, and probably lots more that I’ve stopped noticing because they’re so in-the-water.
I don’t think any of those things are a royal road to great hypothesis-generation, by any means, but I do think they’re helpful on the margin.
(Compare that to my own physics PhD where I learned no useful skills or tools whatsoever for constructing new good hypotheses / theories. …Although to be fair, maybe I just had the wrong PIs for that, or never bothered to ask them for advice, or something.)
I feel like alkjash’s characterization of “correctness” is just not at all what the material I read was pointing towards.
“The Sequences’ emphasis on Bayes rule”
Maybe I’m misremembering. But for me, the core Thing this part of the Sequences imparted was “intelligence, beliefs, information, etc—it’s not arbitrary. It’s lawful. It has structure. Here, take a look. Get a feel for what it means for those sorts of things to ‘have structure, be lawful’. Bake it into your patterns of thought, that feeling.”
If a bunch of people are instead taking away as the core Thing “you can do explicit calculations to update your beliefs” I would feel pretty sad about that, I think?
Agreed. I think of it as:
You need your mind to have at least barely enough correctness-structure/Lawfulness to make your ideas semi-correct, or at least easy to correct later.
Then you want to increase originality within that space.
And if you need more original ideas, you go outside that space (e.g. by assuming your premises are false, or by taking drugs; yes, these are the same class of thing), and then clawing those ideas back into the Lawfulness zone.
Reading things like this, and seeing how long it took them to remember “Babble vs Prune”, makes me wonder if people just forgot the existence of the “create, then edit” pattern. So people end up rounding off to “You don’t need to edit or learn more, because all of my creative ideas are also semi-correct in the first place”. Or “You can’t create good-in-hindsight ideas without editing tools X Y Z in your toolbelt”.
The answer is probably closer to one of these than the other, and yadda yadda social engineering something something community beliefs, but man do people talk like they believe these trivially-false extreme cases.