I don’t really know what to tell you. My mindset basically boils down to “epistemic learned helplessness”, I guess?
It’s like, if you see a dozen different inventors try to elaborate ways to go to the moon based on Aristotelian physics, and you know the last dozen attempts failed, you’re going to expect them to fail as well, even if you don’t have the tools to articulate why. The precise answer is “because you guys haven’t invented Newtonian physics and you don’t know what you’re doing”, but the only answer you can give is “Your proposal for how to get to the moon uses a lot of very convincing words but the last twelve attempts used a lot of very convincing words too, and you’re not giving evidence of useful work that you did and these guys didn’t (or at least, not work that’s meaningfully different from the work these guys did and the guys before them didn’t).”
And overall, the general posture of your article gives me a lot of “Aristotelian rocket” vibes. The scattershot approach of making many claims, supporting them with a collage of stories, and building a skill tree where you need ~15 (fifteen!) skills to supposedly unlock the final one strikes me as the kind of model you build when you’re trying to add redundancy to your claims because you’re not very confident in any one part. In other words, too many epicycles.
I especially notice that the one empirical experiment you ran, trying to invent tacit knowledge transfer with George in one hour, seems to have failed a lot more than it succeeded, and you basically didn’t update on that. The start of the post says:
“I somehow believe in my heart that it is more tractable to spend an hour trying to invent Tacit Soulful Knowledge Transfer via talking, than me spending 40-120 hours practicing chess. Also, Tacit Soulful Transfer seems way bigger-if-true than the Competitive Deliberate Practice thing. Also if it doesn’t work I can still go do the chess thing later.”
The end says:
“That all sounds vaguely profound, but one thing our conversation wrapped up without is ‘what actions would I do differently, in the worlds where I had truly integrated all of that?’
We didn’t actually get to that part.”
And yet (correct me if I’m wrong) you didn’t go do the chess thing.
“Here are my current guesses:”
No! Don’t!
I’m actually angry at you there. Imagine me saying rude things.
You can’t say “I didn’t get far enough to learn the actual lesson, but here’s the lesson I think I would have learned”! Emotionally honest people don’t do that! You don’t get to say “Well this is speculative, buuuuuut”! No “but”! Everything after the “but” is basically Yudkowsky’s proverbial bottom line.
“If you (PoignardAzur) set about to say ‘okay, is there a wisdom I want to listen to, or convey to a particular person? How would I do that?’”
Through failure. My big theory of learning is that you learn by trying to do something, and failing.
So if you want to teach someone something, you set up a frame for them where they try to do it, and fail, and then you iterate from there. You can surround them with other people who are trying the same thing so they can compare notes, do post-mortems together, etc. (that’s how my software engineering school worked), but in every case the secret ingredient is failure.
To be clear, I think you should treat this as bogus until you have evidence better than what you listed.
You’re trying to do a thing where, historically, a lot of people have had clever ideas they were sincerely persuaded were groundbreaking, and have been able to find examples of their grand theory working, even though it never amounted to anything. So you should treat your own sincere enthusiasm and your own “subjectively it feels like it’s working” vibes with suspicion. You should be actively looking for ways to falsify your theory, and I’m not seeing that anywhere in your post.
Again, I note that you haven’t tried the chess thing.