I think of myopia as part of a broader research direction of finding ways to weaken the notion of an agent (“agent” meaning that it chooses actions based on their predicted consequences) so the AI follows some commonsense notion of “does what you ask without making its own plans.”
So if you remove all prediction of consequences beyond some timeframe, you get non-agent behavior relative to things outside that timeframe. This helps clarify for me why implicit predictions, such as the AI modeling a human who makes predictions about the future, might lead to agential behavior even in an almost-myopic agent: the AI might now have incentives to choose actions based on their predicted consequences, albeit consequences predicted by the modeled human rather than by the AI itself.
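As a toy illustration (entirely my own sketch, with made-up payoffs, horizon, and function names, not anything from the original discussion), the difference between a strictly myopic planner and an "almost-myopic" one that also scores actions by a modeled human's approval might look something like this:

```python
# Hypothetical toy: a strictly myopic planner vs. one that consults a modeled
# human forecaster whose approval depends on post-horizon consequences.

def predicted_consequences(action, t):
    """Stand-in world model: the planner's predicted payoff of `action` at step t."""
    # Made-up payoffs: action "b" only pays off after the planner's horizon.
    payoffs = {"a": [1, 1, 0, 0], "b": [0, 0, 5, 5]}
    return payoffs[action][t]

def myopic_score(action, horizon):
    # Strictly myopic: sum predicted payoffs only up to the horizon.
    return sum(predicted_consequences(action, t) for t in range(horizon))

def human_forecast(action):
    # A modeled human who predicts long-run value (including steps beyond the
    # horizon) and reports approval now -- the "implicit prediction" leak.
    return sum(predicted_consequences(action, t) for t in range(4))

def almost_myopic_score(action, horizon, weight_on_human=1.0):
    # The AI only "cares" about payoffs within the horizon plus the human's
    # current approval, but that approval is itself a function of
    # post-horizon consequences.
    return myopic_score(action, horizon) + weight_on_human * human_forecast(action)

horizon = 2
print(max(["a", "b"], key=lambda a: myopic_score(a, horizon)))         # "a": ignores late payoff
print(max(["a", "b"], key=lambda a: almost_myopic_score(a, horizon)))  # "b": leak via human model
```

The point of the sketch is just that truncating the planner's own prediction horizon doesn't help if some term in its objective routes through another predictor whose outputs already encode the post-horizon consequences.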
I think there are two endgames people have in mind for this research direction—using non-agent AI as an oracle and using it to help us choose good actions (implicitly making an agent of the combined human-oracle system), or using non-agent parts as understandable building blocks in an agent, which is supposed to be safe by virtue of having some particular structure. I think both of these have their own problems, but they do sort of mitigate your points about the direct performance of a myopic AI.