According to a “no fire alarm” model for AI risk, your prediction (every survey shows a later date for doomsday) is exactly what should be expected, right up until doomsday happens and there are no more surveys.
In practice, I think there are some leading indicators, and numerous members of the community have shortened their timelines over the past year due to faster-than-expected progress in some respects. I don't know of anyone who has lengthened their timelines over the same period.