Certainly; I think this is a case where there are three types of causality going on:
Using Less Wrong makes you a better programmer. (This is pretty weak; for most programmers, there are probably other things that will improve your programming skill a hundred times faster than reading Less Wrong.)
Improving as a programmer makes you more attracted to Less Wrong.
Innate rationality aptitude makes you a better programmer and more attracted to Less Wrong. (The strongest factor.)
I am planning an article about how to use LW ideas for debugging. However, there is a meta-idea behind a lot of LW ideas that I have not yet seen really written down, and I wonder what the right term for it would be. It is roughly this: in order to figure out what could cause an effect, you need to look not only at the things themselves but primarily at the differences between them. So if a bug appears in situation 1 and not in situation 2, don't look at all aspects of situation 1, just the aspects that differ from situation 2. Does this have a name? It sounds very basic, but I was not really doing this before, because I had the mentality that to really solve a problem I need to understand all parts of a "machine", not just the malfunctioning ones.
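Concretely, something like this minimal sketch (the `reproduces_bug` predicate, the setting names, and the values are all made up for illustration; in practice you would plug in a real repro test):

```python
def reproduces_bug(config: dict) -> bool:
    # Hypothetical stand-in: rerun the failing scenario under `config`
    # and report whether the bug shows up. Replace with a real test.
    return config.get("locale") == "tr_TR"

failing = {"locale": "tr_TR", "cache": True, "threads": 8}
passing = {"locale": "en_US", "cache": True, "threads": 4}

# Step 1: ignore everything the two situations share; keep only the diffs.
diffs = {k for k in failing if failing[k] != passing.get(k)}
print("differing settings:", diffs)  # -> {'locale', 'threads'}

# Step 2: flip each differing setting back to the passing value, one at a
# time, and see which flip makes the bug disappear.
for key in diffs:
    candidate = dict(failing)
    candidate[key] = passing[key]
    if not reproduces_bug(candidate):
        print(f"bug hinges on {key!r}: {passing[key]!r} vs {failing[key]!r}")
```

The point is that `cache` never gets inspected at all: it is identical in both situations, so it cannot explain the difference in behavior, and the search space shrinks to the two settings that actually differ.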
Does it not follow from the Pareto principle?
I don't think it really does… or even that the Pareto principle necessarily holds here. The kinds of issues I find tend to have a smoother distribution. It really depends on the categorization: is "user error" one category, or one per module, or one per function, or…?