I’m really curious about the variable of whether people believe that AI will ‘fizzle’, as Zvi puts it. If you think that current LLMs are quite close to the peak of what is achievable in the next 40 years, I expect that modifies whether you think humanity is in danger from AI in the next 40 years.
Also I’m curious about phrasing things as “conditional on <AI strength level> in <timeframe>, with no substantial shifts in AI regulation or the AI leaders up until that point.”
Basically, I think a lot of unworried people would be a lot more worried if they agreed with me on short timelines. I’m interested in how people are going to react in the future, once they see further evidence of continued progress in AI.