I agree that a sufficiently general optimizer can optimize its environment for a wide range of values, the vast majority of which aren’t mine, and a significant number of which are opposed to mine. As you say, an optimizer-in-general is as good at paperclips as it is at anything else (though of course a human optimizer is not, because humans are the result of a lot of evolutionary fine-tuning for specific functions).
I would say that a sufficiently general rationalist can do exactly the same thing. That is, a rationalist-in-general (at least, as the term is frequently used here) is as good at paperclips as it is at anything else (though of course a human rationalist is not, as above).
I would also say that the symmetry is not a coincidence.
I agree that if this is what Nesov meant, then I completely misunderstood his comment. I’m somewhat skeptical that this is what Nesov meant.
I was thinking about whether telling someone I’m an aspiring optimizer is going to result in less confusion than telling them that I’m an aspiring rationalist. I think that the term ‘optimizer’ needs a little more specification to work; how about Decision Optimization? If I tell someone I’m working on decision optimization, I pretty effectively convey what I’m doing—learning and practicing heuristics in order to make better decisions.
I probably agree that "I'm working on decision optimization" conveys more information in that case than "I'm working on rationality," but I suspect that neither is really what I'd want to say in a similar situation… I'd probably say instead that "I'm working on making more consistent probability estimates," or "I'm working on updating my beliefs based on contradictory evidence rather than rejecting it," or whatever it was. (Conversely, if I didn't know what I was working on more specifically, I would question how I knew I was working on it.)
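To make "updating on contradictory evidence rather than rejecting it" a bit more concrete, here is a minimal sketch of a single Bayesian update; the prior, the likelihoods, and the "evidence" itself are invented numbers purely for illustration, and the function is just my own shorthand, not anything canonical:

```python
# A minimal sketch of updating a belief on contradictory evidence via Bayes' rule.
# All numbers here are made up for illustration.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Suppose I believe a claim with probability 0.8, then see evidence that is
# twice as likely if the claim is false (0.6) as if it is true (0.3).
posterior = bayes_update(prior=0.8, p_evidence_if_true=0.3, p_evidence_if_false=0.6)
print(f"posterior = {posterior:.3f}")  # posterior = 0.667: belief drops, but isn't discarded
```

The particular numbers don't matter; the point is that the contradictory evidence shifts the estimate rather than either flipping it to zero or leaving it untouched.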