If I’m understanding you correctly, I would agree with you that “rationality” as you’re using it in this comment doesn’t map particularly well to any form of optimization, and I endorse using “rationality” to refer to what I think you’re talking about.
I would also say that “rationality” as it is frequently used on this site doesn’t map particularly well to “rationality” as you’re using it in this comment.
Huh. If the folks downvoting this can explain to me how to map the uses of “rationality” in “rationality is not that great for most purposes” and “rationalists should win” (for example) to one another, I’d appreciate it… they sure do seem like different things to me: both valuable, but different, and mostly incommensurable.
Winning doesn’t necessarily involve being really good at winning. While winning is a good thing, it’s not a given that you should personally implement it. For example, humans are good at lifting heavy things, but through the use of heavy lifting machinery, not through the extraordinary power of their own improved muscles.
I didn’t down-vote, but my two cents:
I think you’re misunderstanding Nesov’s comment. Becoming a better optimizer loses a useful distinction; to see this, you need to take an outside view of optimization in general. Rationalists want to optimize their behaviors relative to their own values and goals, which picks out only a very narrow slice of things to optimize for (generally not the number of paperclips in the universe, or any other quantity that might be encoded in a utility function) and a specific set of heuristics to master. Hence the claim that rationality isn’t that great for many purposes: there are relatively few purposes we actually wish to pursue.
Even though becoming a sufficiently strong optimizer-in-general will help you achieve your narrow range of goals, unless you specifically work towards optimizing for your own value set, doing so isn’t optimal relative to your actual utility function. An optimizer-in-general, strictly speaking, will on average be just as good at optimizing for the number of paperclips in the universe as it will be at managing your relationships. The useful distinction is lost here.
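To put the same point in code, here’s a toy sketch of my own (the action names and payoff numbers are invented purely for illustration): the same general-purpose optimizer recommends opposite actions depending on which objective it’s handed, so all of the value-relevant work is in choosing which objective to plug in.

```python
# Toy illustration only: a general-purpose optimizer is indifferent to which
# objective it is handed. The objectives and payoffs below are made up.

def optimize_in_general(objective, candidates):
    """Pick whichever candidate action scores highest on the given objective."""
    return max(candidates, key=objective)

def paperclip_count(action):
    # An arbitrary objective nobody here actually cares about.
    return {"build_paperclip_factory": 1000, "call_a_friend": 0}.get(action, 0)

def my_actual_values(action):
    # A stand-in for a human's much narrower set of goals.
    return {"build_paperclip_factory": -5, "call_a_friend": 10}.get(action, 0)

actions = ["build_paperclip_factory", "call_a_friend"]

# Same machinery, opposite recommendations: the distinction that matters
# lives entirely in the choice of objective.
print(optimize_in_general(paperclip_count, actions))   # build_paperclip_factory
print(optimize_in_general(my_actual_values, actions))  # call_a_friend
```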
I agree that a sufficiently general optimizer can optimize its environment for a wide range of values, the vast majority of which aren’t mine, and a significant number of which are opposed to mine. As you say, an optimizer-in-general is as good at paperclips as it is at anything else (though of course a human optimizer is not, because humans are the result of a lot of evolutionary fine-tuning for specific functions).
I would say that a sufficiently general rationalist can do exactly the same thing. That is, a rationalist-in-general (at least, as the term is frequently used here) is as good at paperclips as it is at anything else (though of course a human rationalist is not, as above).
I would also say that the symmetry is not a coincidence.
I agree that if this is what Nesov meant, then I completely misunderstood his comment. I’m somewhat skeptical that this is what Nesov meant.
I was thinking about whether telling someone I’m an aspiring optimizer would result in less confusion than telling them that I’m an aspiring rationalist. I think the term ‘optimizer’ needs a little more specification to work; how about “decision optimization”? If I tell someone I’m working on decision optimization, I convey pretty effectively what I’m doing: learning and practicing heuristics in order to make better decisions.
I probably agree that “I’m working on decision optimization” conveys more information in that case than “I’m working on rationality,” but I suspect that neither is really what I’d want to say in a similar situation… I’d probably say instead that “I’m working on making more consistent probability estimates,” or “I’m working on updating my beliefs based on contradictory evidence rather than rejecting it,” or whatever it was. (Conversely, if I didn’t know what I was working on more specifically, I would question how I knew I was working on it.)
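For instance, the “updating on contradictory evidence” piece cashes out as an ordinary Bayes-rule calculation; here’s a toy sketch where the prior and likelihoods are made-up numbers, just to show the shape of it:

```python
# Toy numbers only: what "updating on contradictory evidence rather than
# rejecting it" looks like as a single Bayes-rule step.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the belief after seeing the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

prior = 0.9  # I start out fairly confident in some belief.

# Evidence arrives that is much more likely if the belief is false.
posterior = bayes_update(prior, p_evidence_if_true=0.1, p_evidence_if_false=0.8)

print(round(posterior, 3))  # ~0.529: still leaning toward the belief, but far less sure.
# "Rejecting the evidence" would amount to leaving the 0.9 untouched.
```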
One s. One o.
Opss. Fixed.