Thanks! Understanding that it’s more of a focusing object is helpful.
I do think it’d be handy to have some sort of short summary on the wiki/tag page now, even if it is a bit vague and says “this is a concept Abram was exploring that hasn’t fully solidified yet that is sorta-kinda-about-X”.
“this is a concept Abram was exploring that hasn’t fully solidified yet that is sorta-kinda-about-X”
Is Abram responsible for introducing this term in an AI safety context? At least on LW, the first reference (that I could find) is this post from Evan Hubinger, which says:
We can think of a myopic agent as one that only considers how best to answer the single question that you give to it rather than considering any sort of long-term consequences, which can be implemented as a stop gradient preventing any optimization outside of that domain.
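(For anyone else puzzling over the "stop gradient" phrasing, here's a rough toy sketch of how I read it. Everything in it, the two heads, the toy loss, the parameter names, is my own illustrative stand-in, not anything from Evan's post; the only point is that the long-term-consequences branch is still computed but contributes no training signal.)

```python
import jax
import jax.numpy as jnp

def answer_head(params, question):
    # Toy "answer to the single question" head (hypothetical).
    return question @ params["w_answer"]

def consequence_head(params, answer):
    # Toy forecast of the answer's long-term consequences (hypothetical).
    return answer @ params["w_future"]

def myopic_loss(params, question, target):
    answer = answer_head(params, question)
    horizon = consequence_head(params, answer)
    # stop_gradient: the long-term term is still evaluated, but it produces
    # no gradient, so optimization only targets the immediate answer.
    horizon = jax.lax.stop_gradient(horizon)
    return jnp.mean((answer - target) ** 2) + jnp.mean(horizon)

key = jax.random.PRNGKey(0)
params = {
    "w_answer": jax.random.normal(key, (4, 2)),
    "w_future": jax.random.normal(key, (2, 3)),
}
question = jnp.ones((1, 4))
target = jnp.zeros((1, 2))

grads = jax.grad(myopic_loss)(params, question, target)
# w_future only feeds the stopped branch, so its gradient is exactly zero.
print(grads["w_future"])
```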
Ah. My somewhat grumpy original comment came from trying to figure out what myopia meant by doing a LW search; I think Abram's posts had it in the title, so those were the most prominent results, and I assumed he had originated the term.