I fully admit that I do not have strong outside-view evidence that this method will objectively improve your rationality—if I did, I would post it. But many (most?) rationality techniques discussed here lack such evidence as well.
Anecdotally, I can say that it seems to have been quite effective for me, and there are many inside-view elements pointing towards this being a strong method.
That may not be fully convincing, and I agree that's a problem. Indeed, one of the main reasons I posted this is that I hope others will attempt the same or something similar, so that we can get a broader picture of this space.
Can you give examples?
Sure, what sorts of examples are you looking for?
Examples of the sorts of examples I’m looking for:
—Brienne’s post
—Writing fiction has improved my rationality because, in writing about characters who don’t know all the information I know, I’ve come to viscerally understand the distinction between map and territory.
—Surrounding myself with rationalists has improved my rationality because social incentives push me to actually do the things we all agree are good ideas.
How does lucid dreaming improve rationality? You’ve asserted that it does, but I don’t know what relevant skills it trains, or how. (You mention the phrase “noticing confusion,” but that’s all I could find.)
Lucid dreaming has improved my rationality because one of the key skills of rationality is noticing that you are confused, and one of the key skills that can be used to induce lucid dreaming is noticing that you are confused.
Further, lucid dreaming gives me the opportunity to practice coming to the correct conclusion in spite of my brain’s efforts to the contrary.
Further, lucid dreaming is an opportunity for deliberate practice with high aliveness.
Is any of the above not clear from the original post? If so, I should probably rewrite it—the reason I asked what you meant is that I thought the above was apparent.
Could you expand on “aliveness”, please? I haven’t heard the term before, and Google’s mostly giving me obviously unrelated stuff mixed in with a bit of fluff that I don’t trust.
Ack. Sorry, I thought that was fundamental to LW but I got my communities mixed up. It definitely merits a post of its own, which I’ll put up within the week.
Post complete.
Is it related to EY’s impression that CEOs of tech companies seem “more alive” than other people?
Not at all.
The first example is exactly the sort of thing I was hoping for—thanks! That clarifies what you meant in the original post. I’m not sure what the other two examples mean, probably because I know basically nothing about lucid dreaming. What are “your brain’s efforts to the contrary”? How does lucid dreaming involve deliberate practice? What is “high aliveness”? I expect this probably connects to something useful, but the inferential distance is too great for me to get anything from it.
This seems like something that should be fixed. A few ideas:
Scientific studies—seems too slow and requires dealing with academia… I think we can do better.
Well-designed self-experimentation—Gwern’s studies on nootropics are the best examples I know of, but there are others, like Seth Roberts’s self-experiments.
Studies done by a more formal organization—For example, I think CFAR might be doing studies like this.
Regarding my second point, it seems that this sort of thing could benefit a lot from a division of labor—where a small group of people design the experiments, and many more people just follow the instructions. It might be worth trying to organize a group of people willing to participate in these sorts of experiments, so that it is easier to test rationality techniques.
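To make the self-experimentation idea a bit more concrete, here is a minimal sketch of how one of these experiments might be analyzed. Everything in it is hypothetical—made-up ratings, an assumed design where days are randomly assigned in advance to “practice the technique” vs. control and a blinded 0–10 self-rating is recorded each evening—so treat it as an illustration of the shape of the analysis, not a recommended protocol.

```python
import random
from statistics import mean

# Hypothetical data: blinded 0-10 self-ratings, recorded on days randomly
# assigned in advance to "practice the technique" vs. control.
treatment = [7, 6, 8, 7, 9, 6, 8, 7]
control = [6, 5, 7, 6, 6, 5, 7, 6]

observed = mean(treatment) - mean(control)

# Permutation test: how often does randomly relabelling the days produce a
# difference at least as large as the one actually observed?
pooled = treatment + control
n_treatment = len(treatment)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n_treatment]) - mean(pooled[n_treatment:])
    if diff >= observed:
        extreme += 1

print(f"observed difference: {observed:.2f}")
print(f"approximate one-sided p-value: {extreme / trials:.3f}")
```

A permutation test like this makes few distributional assumptions, which seems like a reasonable fit for the small, noisy samples a single self-experimenter would collect; the division-of-labor version would just have the experiment designers hand out the randomization schedule and rating instructions, and run the same analysis on everyone’s data.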