Is that a core part of the definition of myopia in AI/ML? I understood it only to mean that models lose accuracy if the environment (the non-measured inputs to real-world outcomes) changes significantly from the training/testing set.
> Is that a core part of the definition of myopia in AI/ML?
To the best of my knowledge, 'myopia' in the AI safety context was introduced by evhub, maybe here, and is not a term used more broadly in ML.
> I understood it only to mean that models lose accuracy if the environment (the non-measured inputs to real-world outcomes) changes significantly from the training/testing set.
This is typically referred to as ‘distributional shift.’
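For concreteness, here is a minimal sketch of distributional shift, not from the original exchange: it assumes numpy and scikit-learn, and the synthetic `make_data` helper is purely illustrative. A classifier fit on one input distribution keeps its accuracy on held-out data from the same distribution but loses accuracy when the test distribution is shifted.

```python
# Illustrative sketch only (assumes numpy and scikit-learn are installed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, mean_shift=0.0):
    # Two Gaussian classes in 2D; `mean_shift` moves the whole input
    # distribution away from what the model saw during training.
    X0 = rng.normal(loc=-1.0 + mean_shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=+1.0 + mean_shift, scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(1000)                    # training distribution
X_test_iid, y_test_iid = make_data(1000)              # same distribution
X_test_shift, y_test_shift = make_data(1000, 2.0)     # shifted distribution

clf = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", clf.score(X_test_iid, y_test_iid))
print("shifted accuracy:        ", clf.score(X_test_shift, y_test_shift))
```

On the in-distribution test set the accuracy stays high (roughly 90%), while on the shifted set it drops sharply, because the decision boundary learned during training no longer matches where the classes actually sit.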