[purely personal view]
It seems quite easy to imagine similarly compelling socio-political and subconscious reasons why people working on AI could be biased against short AGI timelines. For example:

- short timeline estimates make the broader public agitated, which may lead to state regulation or similar interference [historical examples: industries trying to suppress information about risks]
- researchers mostly want to work on technical problems rather than think about the nebulous future impacts of their work; putting more weight on short timelines would force some people to pause and think about responsibility, or to suffer some cognitive dissonance, which may be unappealing/unpleasant for S1 reasons [historical examples: physicists working on nuclear weapons]
- fears that claims about short timelines would get pattern-matched as doomsday fear-mongering / sensationalism / the stuff of sci-fi movies …
While I agree motivated reasoning is a serious concern, I don’t think it’s clear how the incentives sum up. If anything, a claim like “AGI is unrealistic or very far away, but practical applications of narrow AI will be profound” seems to capture most of the purported benefits (AI is important) while avoiding the negatives (no need to think).