If you don’t like iTunes, you can listen to it here.
Ezra asks him what the important features of this memeplex are. He points to epistemic rigor and scope sensitivity. I agree; I think you can get very far with just those two, and where rationalists often fail instrumentally is in not sharpening scope sensitivity over time. Marginal arguments fall naturally out of doing this consistently, which kills a lot of failure modes. I would add locus of control, which winds up being subtler than it first appears and kills a lot of additional failure modes related to prioritization.
I feel these are important things that you are saying, but I totally fail to parse them. Could you provide examples or explanations?
Maybe, not sure I will succeed.
First off, I am saying something like this: there are lots of different kinds of true things. There are also lots of things that seem big and important. If you were a truth maximizer who only cared about, say, how many truths you had available, you might start cataloging useless facts. If you were a big-important-things maximizer, you might just fall prey to whichever megachurch preacher got to you first. But if you are both trying to carefully assess what is true and trying to preferentially devote attention to the largest, most cross-domain, most person-affecting truths, this cuts out a lot of failure modes, some of them non-obvious, since epistemic rigor and scope sensitivity are both deep skill trees. I.e., if we are limited in skill points, I want to know which skill trees give me the biggest unfair advantages the fastest.
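To make the contrast between the three attention policies concrete, here is a toy sketch; the fields, numbers, and scoring rule are invented for illustration, not a real framework:

```python
# Toy model of three attention-allocation policies.
# All fields and numbers here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    p_true: float          # confidence after careful assessment (epistemic rigor)
    apparent_scope: float  # how big and important it *feels*
    actual_scope: float    # how large, cross-domain, and person-affecting it really is

claims = [
    Claim("obscure trivia fact", p_true=0.99, apparent_scope=0.1, actual_scope=0.1),
    Claim("megachurch cosmology", p_true=0.01, apparent_scope=9.0, actual_scope=0.0),
    Claim("compounding returns on skills", p_true=0.9, apparent_scope=3.0, actual_scope=8.0),
]

# Truth maximizer: hoards whatever is most certainly true, regardless of size.
truth_max = max(claims, key=lambda c: c.p_true)

# Big-important-things maximizer: chases whatever *seems* biggest, truth be damned.
importance_max = max(claims, key=lambda c: c.apparent_scope)

# Both together: rigor filters, scope prioritizes (expected scope-weighted truth).
both = max(claims, key=lambda c: c.p_true * c.actual_scope)

print(truth_max.text)       # obscure trivia fact
print(importance_max.text)  # megachurch cosmology
print(both.text)            # compounding returns on skills
```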
Now, why did I add locus of control? Technically I suppose we could regard it as a special case of epistemic rigor (after all, what you can and can't affect are discoverable facts about the world), but I thought it worth singling out. Locus of control keeps you from common ineffective behavior like obsessing over national politics, perfectionism paralysis, or goal/task disconnection. At higher levels of skill-point investment it allows for dramatically more robust plans (high-reliability TAPs, i.e. trigger-action plans, are much harder to create without a well-calibrated locus of control), makes growth mindset work correctly (there is some empirical support for locus of control being the hidden variable in whether growth mindset works: people who applied growth mindset to inputs saw success, while those who applied it to outputs failed), and other things I'm forgetting.
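In the same toy spirit, a locus-of-control check is just one more filter on where attention goes; again, the field names and numbers below are invented for illustration:

```python
# Toy extension: weight opportunities by how much of the outcome you control.
# 'controllability' and the example numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class Opportunity:
    text: str
    impact: float           # how much it matters if it goes well
    controllability: float  # 0 = pure spectator, 1 = fully within your control

opportunities = [
    Opportunity("fume about national politics", impact=9.0, controllability=0.001),
    Opportunity("polish one blog post forever", impact=2.0, controllability=1.0),
    Opportunity("practice a compounding skill", impact=6.0, controllability=0.8),
]

# Expected leverage: big-but-uncontrollable and controllable-but-trivial both lose.
best = max(opportunities, key=lambda o: o.impact * o.controllability)
print(best.text)  # practice a compounding skill
```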
I'm not sure why I said marginal arguments fall out of scope sensitivity, since they seem to fall more out of locus of control. A clean taxonomy of the important-seeming skill trees is something I have taken a few cracks at, but none has been very satisfying so far. Duhigg's ad hoc description of the skills of successful people in Smarter Faster Better is actually pretty good, but there is a sense that more is possible here.
Thanks, this definitely cleans up your thought for me.
There are always more lurkers than you expect.
Also, nice to see that we dissipated just in time to go mainstream :P
Hopefully we’ll… reissipate, just in time to catch the fame and glory.
lesswrong.com had a big population drop, but the rationalist diaspora is booming.
I don’t always leave comments on the internet, but when I do I hope that billionaires like Patrick Collison read them.
Former Stripe CTO (Greg Brockman) co-founded OpenAI and also shares much of the memeplex.
Uh, is it just me, or does this link lead to some ‘optimize[dot]com’? Is this an experiment?