Thanks! I guess myopia is a specific example of one form of scope-insensitivity (the form that has to do with long-term thinking, at least according to this), yes.
> This is plausibly a beneficial alignment property, but like every plausibly beneficial alignment property, we don’t yet know how to instill them in a system via ML training.
I hadn't followed the discussions around myopia and didn't have this context (e.g., I thought maybe people hadn't found myopia promising to begin with, or something like that), so thanks a lot. That's very helpful.