(note, this comment is kinda grumpy but, to be clear, comes from the context of me generally quite respecting you as a writer. :P)
I can’t remember if I’ve complained about this elsewhere, but I have no idea what you mean by myopia, and I was about to comment (on another post) asking if you could write a post that succinctly defined what you meant by myopia (or if the point is that it’s hard to define, say that explicitly and give a few short attempted descriptions that could help me triangulate it).
Then I searched to see if you’d already done that, and found this post, which seems like it really wants to open with a succinct description of what myopia is, but doesn’t, and leaves me even more confused about it by the end.
(I can see the dictionary definition of myopia is “lack of imagination, foresight, or intellectual insight”, but I don’t know exactly how that’s connecting to the overall model or ideas you’re building towards here)
Sorry for somehow missing/ignoring this comment for about 5 months. The short answer is that I’ve been treating “myopia” as a focusing object, and am likely to think any definitions (including my own definitions in the OP) are too hasty and don’t capture everything I want to point at. In fact I initially tried to use the new term “partial agency” to make sure people didn’t think I was talking about more well-defined versions.
My attempt to give others a handle for the same focusing object was in the first post of the sequence, where I try to triangulate what I’m getting at with a number of examples right at the start:
Stop-gradients. This is a perfectly well-defined idea, used in deep learning.
Myopia in the computer science sense: a failure to look ahead. AKA “greedy algorithms”/”greedy approximations”.
Map/territory fit; homing in on the way it’s not something you’re supposed to just optimize in an ordinary sense.
Goal/territory fit; in the same way as the map should be made to fit the territory and not the other way around, the territory is supposed to be made to fit the goal and not the other way around.
Causal decision theory: performing “causal surgery” on a graph makes you ignore some lines of influence, like stop-gradients.
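(Not part of the original list, just an editorial illustration of the first item: a minimal sketch, assuming JAX, of what a stop-gradient does. The forward value passes through unchanged, but gradients are blocked behind it, so the optimizer is “myopic” about anything upstream. The toy loss and numbers are made up purely for illustration.)

```python
import jax
import jax.numpy as jnp

def loss_full(theta, x):
    # Ordinary loss: gradients flow through the whole computation.
    pred = theta * x
    return (pred - 1.0) ** 2

def loss_stopped(theta, x):
    # Same computation, but the prediction is treated as a constant
    # during differentiation: the gradient w.r.t. theta is cut to zero.
    pred = jax.lax.stop_gradient(theta * x)
    return (pred - 1.0) ** 2

theta, x = 2.0, 3.0
print(jax.grad(loss_full)(theta, x))     # nonzero gradient (30.0)
print(jax.grad(loss_stopped)(theta, x))  # 0.0 -- influence ignored
```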
Perhaps I could give a better short/informal definition now if I sat down to think about it.
Thanks! Understanding that it’s more of a focusing object is helpful.
I do think it’d be handy to have some sort of short summary on the wiki/tag page now, even if it is a bit vague and says “this is a concept Abram was exploring that hasn’t fully solidified yet that is sorta-kinda-about-X”.
“this is a concept Abram was exploring that hasn’t fully solidified yet that is sorta-kinda-about-X”
Is Abram responsible for introducing this term in an AI safety context? At least on LW, the first reference (that I could find) is this post from Evan Hubinger, which says:
We can think of a myopic agent as one that only considers how best to answer the single question that you give to it rather than considering any sort of long-term consequences, which can be implemented as a stop gradient preventing any optimization outside of that domain.
Ah. My somewhat grumpy original comment came from me trying to figure out what myopia meant and doing a LW search, and I think Abram’s posts had it in the title, so those were the ones most prominent, and I assumed he had originated it.
(another place one could put a short definition of Myopia now is the Myopia Tag)
I agree. While interesting, the contents and title of this post seem pretty mismatched.