I also found that take very unusual, especially when combined with this:
Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.
The last sentence seems extremely overconfident, especially combined with the otherwise bearish conclusions in this post. I’m surprised no one else has mentioned it.
Yeah, agreed. Overall I align pretty closely with Thane's view of LLMs, but his final conclusions don't seem to follow from the model presented here.