I don’t really know what to tell you. My mindset basically boils down to “epistemic learned helplessness”, I guess?
It’s like, if you see a dozen different inventors try to elaborate ways to go to the moon based on Aristotelian physics, and you know the last dozen attempts failed, you’re going to expect them to fail as well, even if you don’t have the tools to articulate why. The precise answer is “because you guys haven’t invented Newtonian physics and you don’t know what you’re doing”, but the only answer you can give is “Your proposal for how to get to the moon uses a lot of very convincing words but the last twelve attempts used a lot of very convincing words too, and you’re not giving evidence of useful work that you did and these guys didn’t (or at least, not work that’s meaningfully different from the work these guys did and the guys before them didn’t).”
And overall, the general posture of your article gives me a lot of “Aristotelian rocket” vibes. The scattershot approach of making many claims, supporting them with a collage of stories, and having a skill tree where you need ~15 (fifteen!) skills to supposedly unlock the final skill strikes me as the kind of model you construct when you’re trying to build redundancy into your claims because you’re not extremely confident in any one part. In other words, too many epicycles.
I especially notice that the one empirical experiment you ran, trying to invent tacit knowledge transfer with George in one hour, seems to have failed a lot more than it succeeded, and you basically didn’t update on that. The start of the post says:
“I somehow believe in my heart that it is more tractable to spend an hour trying to invent Tacit Soulful Knowledge Transfer via talking, than me spending 40-120 hours practicing chess. Also, Tacit Soulful Transfer seems way bigger-if-true than the Competitive Deliberate Practice thing. Also if it doesn’t work I can still go do the chess thing later.”
The end says:
That all sounds vaguely profound, but one thing our conversation wrapped up without is “what actions would I do differently, in the worlds where I had truly integrated all of that?”
We didn’t actually get to that part.
And yet (correct me if I’m wrong) you didn’t go do the chess thing.
Here are my current guesses:
No! Don’t!
I’m actually angry at you there. Imagine me saying rude things.
You can’t say “I didn’t get far enough to learn the actual lesson, but here’s the lesson I think I would have learned”! Emotionally honest people don’t do that! You don’t get to say “Well this is speculative, buuuuuut”! No “but”! Everything after the “but” is basically Yudkowsky’s proverbial bottom line.
If you (PoignardAzur) set about to say “okay, is there a wisdom I want to listen to, or convey to a particular person? How would I do that?”
Through failure. My big theory of learning is that you learn through trying to do something, and failing.
So if you want to teach someone something, you set up a frame for them where they try to do it, and fail, and then you iterate from there. You can surround them with other people who are trying the same thing so they can compare notes, do post-mortems together, etc. (that’s how my software engineering school worked), but in every case the secret ingredient is failure.
I agree with ‘failure being an important part of learning’. If you end up trying to do something somewhere in this vicinity and failing, I am happy to take a stab at helping.
I think it’s quite reasonable to have the epistemic state of “seems like I should treat this as bogus until I get more evidence”, especially if you’re far enough from the prerequisite skills that ‘solidify 1-2 more skills and then try actually doing the final Thing’ doesn’t feel within spitting distance.
To be clear, I think you should treat this as bogus until you have evidence better than what you listed.
You’re trying to do a thing where, historically, a lot of people have had clever ideas that they were sincerely persuaded were groundbreaking, and have been able to find examples of their grand theory working, even though it didn’t amount to anything. So you should treat your own sincere enthusiasm and your own “subjectively it feels like it’s working” vibes with suspicion. You should actively be looking for ways to falsify your theory, which I’m not seeing anywhere in your post.
Again, I note that you haven’t tried the chess thing.
Okay, while I disagree with a bunch of your framing here, I do think “look for ways to falsify” is indeed important, is not really how I was orienting to this, and should be at least a part of how I’m orienting to it.
The way I was orienting to it was “try it, see how well it works, iterate, report honestly how it went.” (Fwiw this is more like a side project for me, and my mainline rationality development work is much more oriented to “design the process so it’s easier to evaluate and falsify.”)
I will mull over “explicitly aim to falsify” here, although I think this is the sort of question where that’s actually pretty hard. I think most forms of self-improvement are hard to justify scientifically, take a long time to pay off, and effects are subtle and intertwined enough that it’s hard to tell what works.
Offhand, I don’t see a better approach than “try it a bunch and see if it seems to work, and eventually give up if it doesn’t.”
(FYI, the practical place I’m most focused on testing this out is in teaching junior programmers “debugging” and “code taste”. I don’t think that can produce data that should persuade you, or even data that should convince you it ought to have persuaded me.)
...
I do think I have deep disagreements about orientation here, where I think a lot of early-stage development of rationality ideas requires going out on a limb, and it’ll be years before it’s really clear whether it works or not. You can do RCTs, but they are very expensive and don’t make sense until you basically already know it works because you have large effect sizes and you want to justify it to skeptics.
I know some rationality training developers who have been extremely careful about not posting things until they’re confident they’re real, and honestly I think that attitude had more downsides than upsides (I think it basically killed LessWrong-as-a-place-where-people-do-rationality-development). I think it is much better to post your updates as you do them, with appropriate epistemic caveats.
...
I do think the claim here is… actually just not that unreasonable?
The claim here is:
You can deliberately try to learn why a person has made a subtle update.
If they are good at explaining or you are good at interviewing, you can learn the details about what you’d do differently if you yourself made the update, and ask yourself whether the update actually makes sense.
It may help to imagine more viscerally the situations that caused the person to make the update.
I would be extremely surprised if this didn’t help at all. I wouldn’t be too surprised if it turns out to never be as good as actually burning out / playing chess for 200 hours / etc.
If they are good at explaining or you are good at interviewing, you can learn the details about what you’d do differently if you yourself made the update, and ask yourself whether the update actually makes sense.
I would be extremely surprised if this didn’t help at all.
I wouldn’t be very surprised if it didn’t. One, that seems consistent with what the world looks like.
Two, I suspect that for the kind of wisdom / tacit knowledge you want, you need to register the information in types of memory that are, by design, never activated by verbal discussion or visceral imagining.
Otherwise, yeah, I agree that it’s worth posting ideas even if you’re not sure of them, and I do appreciate the epistemic warning at the top of the post.