To be clear, I think you should treat this as bogus until you have evidence better than what you listed.
You’re trying to do a thing where, historically, a lot of people have had clever ideas that they were sincerely persuaded were groundbreaking, and have been able to find examples of their grand theory working, even though it didn’t amount to anything. So you should treat your own sincere enthusiasm and your own “subjectively it feels like it’s working” vibes with suspicion. You should actively be looking for ways to falsify your theory, which I’m not seeing anywhere in your post.
Again, I note that you haven’t tried the chess thing.
Okay, while I disagree with a bunch of your framing here, I do think "look for ways to falsify" is indeed important, is not really how I was orienting to this, and should be at least part of how I orient to it.
The way I was orienting to it was “try it, see how well it works, iterate, report honestly how it went.” (Fwiw this is more like a side project for me, and my mainline rationality development work is much more oriented to “design the process so it’s easier to evaluate and falsify.”)
I will mull over “explicitly aim to falsify” here, although I think this is the sort of question where that’s actually pretty hard. I think most forms of self-improvement are hard to justify scientifically, take a long time to pay off, and effects are subtle and intertwined enough that it’s hard to tell what works.
I don’t see offhand a better approach than “try it a bunch and see if it seems to work, and eventually give up if it doesn’t.”
(FYI, the practical place I’m most focused on testing this out is teaching junior programmers “debugging” and “code taste”. I don’t think that can produce data that should persuade you, or even data that should change your mind about whether it should have persuaded me.)
...
I do think I have deep disagreements about orientation here: a lot of early-stage development of rationality ideas requires going out on a limb, and it’ll be years before it’s really clear whether it works. You can do RCTs, but they are very expensive and don’t make sense until you basically already know it works, because you have large effect sizes and you want to justify them to skeptics.
I know some rationality training developers who have been extremely careful about not posting things until they’re confident they’re real, and honestly I think that attitude had more downsides than upsides (I think it basically killed LessWrong-as-a-place-where-people-do-rationality-development). I think it is much better to post your updates as you do them, with appropriate epistemic caveats.
...
I do think the claim here is… actually just not very unreasonable?
The claim here is:
You can deliberately try to learn why a person has made a subtle update.
If they are good at explaining or you are good at interviewing, you can learn the details about what you’d do differently if you yourself made the update, and ask yourself whether the update actually makes sense.
It may help to imagine more viscerally the situations that caused the person to make the update.
I would be extremely surprised if this didn’t help at all. I wouldn’t be too surprised if it turns out to never be as good as actually burning out / playing chess for 200 hours / etc.
If they are good at explaining or you are good at interviewing, you can learn the details about what you’d do differently if you yourself made the update, and ask yourself whether the update actually makes sense.
I would be extremely surprised if this didn’t help at all.
I wouldn’t be very surprised if it didn’t help at all. One, that outcome seems consistent with what the world looks like.
Two, I suspect that for the kind of wisdom / tacit knowledge you want, you need to register the information in types of memory that, by design, are never activated by verbal discussion or visceral imagining.
Otherwise, yeah, I agree that it’s worth posting ideas even if you’re not sure of them, and I do appreciate the epistemic warning at the top of the post.