This post touches on a topic I've been considering that seems potentially relevant to rationality: how to formalize and train the art of learning, particularly for knowledge and skills that don't reduce simply to the kind of propositional knowledge you can look up on Wikipedia. There's a lot of knowledge out there that could be useful to a rationalist, and at least until Eliezer's secret-knowledge weirdtopia gets implemented we might as well look for ways to climb onto the giants' shoulders. After all, rationality ought to be good for something other than becoming more rational.
Unfortunately, most discussions of learning I've seen are written from the perspective of a teacher who assumes default passivity from students, and they are usually somewhat lacking in empiricism.
I agree that this is a really interesting question. A couple of half-baked thoughts:
Alicorn's formulation here is basically a search algorithm. The first two stages (Saturation and Distillation) are ways of using existing information to find decent initial values; the final stage (Experimentation) is the stepping algorithm. Thinking about it this way, it's immediately obvious that there's a lot more that could go into this last part: how to carve up the search space, how to decide which direction to step, whether to accept a step, and so on, all of which have been explored extensively in other contexts. (Note: I'm not suggesting that this makes the problem trivial, or that we should just think about this in terms of existing search algorithm paradigms; merely that thinking about things this way could provide useful insights.)
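To make the analogy concrete, here's a minimal Python sketch of recipe refinement framed as local search. All the names (`refine`, `score`, `tweak`, `temp_C`, etc.) are my own hypothetical illustrations, not anything from the post: Saturation/Distillation supply the starting point, and Experimentation is the stepping loop.

```python
import random

def refine(initial_recipe, score, tweak, steps=20):
    """Local search over recipe parameters.

    initial_recipe: starting point distilled from existing sources
                    (the Saturation/Distillation stages).
    score:  evaluates one experiment (e.g. a taste rating).
    tweak:  proposes a nearby variant (the stepping rule).
    """
    best, best_score = initial_recipe, score(initial_recipe)
    for _ in range(steps):
        candidate = tweak(best)      # step in the search space
        s = score(candidate)         # run the experiment
        if s > best_score:           # acceptance rule: keep strict improvements
            best, best_score = candidate, s
    return best

# Hypothetical usage: searching over oven temperature and baking time.
recipe = {"temp_C": 180, "minutes": 30}
rating = lambda r: -abs(r["temp_C"] - 175) - abs(r["minutes"] - 35)
nudge = lambda r: {k: v + random.choice([-5, 0, 5]) for k, v in r.items()}
print(refine(recipe, rating, nudge))
```

Each design choice here (how `tweak` carves up the space, the keep-only-improvements acceptance rule) is exactly the kind of knob the search literature has explored; simulated annealing, for instance, would sometimes accept a worse step to escape local optima.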
One interesting facet of this sort of problem is that the precise mode of "failure" of a particular experiment can give information about where to step next. At a very basic level, you have things like burning, which, as most people will realize, suggests cooking at a lower temperature or for less time. At a higher level, you have things like a failure to form peaks, which, unless you can get more information from elsewhere or you have a good understanding of food chemistry, you probably won't have much idea how to fix.
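Continuing the hypothetical sketch above, the distinction between informative and opaque failures could be encoded as a diagnosis step that biases the next tweak rather than stepping blindly (again, all names here are my own invention):

```python
# Hypothetical mapping from observed failure modes to directed adjustments.
# An "informative" failure points at a specific knob; an opaque one doesn't.
FAILURE_HINTS = {
    "burnt":       lambda r: {**r, "temp_C": r["temp_C"] - 10},
    "undercooked": lambda r: {**r, "minutes": r["minutes"] + 5},
}

def directed_tweak(recipe, failure_mode, fallback):
    """If the failure mode is one we can diagnose, step in the direction
    it suggests; otherwise (e.g. 'peaks failed to form' with no
    food-chemistry model) fall back to an undirected random step."""
    hint = FAILURE_HINTS.get(failure_mode)
    return hint(recipe) if hint else fallback(recipe)
```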
IMO, it has to involve (amongst other things!) personal practice of the sort I describe below.
If you want to avoid falling into mental 'traps' (making unwarranted assumptions, falling victim to biases, etc.), then you have to be able to spot when you are about to commit one, or just have committed one.
You need a highly refined "bullshit detector", and it is crucial that it runs automatically. When you're in the midst of thinking about something, you can't just sit back and consciously reason about your own thinking in order to spot every mistake. You have to be really good at spotting them automatically.
If it needs to be automatic, then it has to come from practice. So I say get as much practice as you can, anywhere and however you can.
Always have it as a conscious priority to avoid falling into any mental traps. Always consider how what you're thinking could possibly be wrong, and the same for anything you hear anyone else say, and in particular for anything that you read. I always have a pencil with me when I'm reading, and whenever I think I've found someone falling into some sort of mental trap I try to note the details of it.
(This is about maintaining a fairly constant critical attitude, but it doesn't mean being a jerk who has a negative outlook on life, who needlessly attacks others' views in public, or who shows off by pointing out mistakes.)
Over time, you’ll build up a more substantial toolkit, and those capabilities will become more automatic.