ETA: I also have a model of you being less convinced by realism about rationality than others in the “MIRI crowd”; in particular, selection vs. control seems decidedly less “realist” than mesa-optimizers (which didn’t have to be “realist”, but was quite “realist” the way it was written, especially in its focus on search).
Just a quick reply to this part for now (but thanks for the extensive comment, I’ll try to get to it at some point).
That makes sense. My recent series on myopia also fits this theme. But I don’t get much* push-back on these things. Some others seem even less realist than I am. I see myself as trying to carefully deconstruct my notions of “agency” into component parts that are less fake. I guess I do feel confused about why other people seem less interested in directly deconstructing agency the way I am. I feel somewhat like others nod along to distinctions like selection vs. control but then go back to using a unitary notion of “optimization”. (This applies to people at MIRI and also people outside MIRI.)
*The one person who has given me push-back is Scott.