Thinking more, my current sense is that this is not an AI-specific thing, but a broader societal problem where people fail to think long-term. Peter Thiel writes very helpfully about this as a distinction between "definite" and "indefinite" attitudes toward the future: under the former, the future is understandable and lawful, something you can shape with deliberate plans; under the latter, it will happen no matter what you do (fatalism). My sense is that when I have told myself to focus on short timelines, insofar as that's been unhealthy, it's served as a general excuse for not having to look at hard problems.