A lot of your arguments boil down to “This ignores ML and prosaic alignment” so I think it would be helpful if you explained why ML and prosaic alignment are important.
The obvious reply would be that ML now seems likely to produce AGI, perhaps alongside minor new discoveries, in a fairly short time. (That at least is what EY now seems to assert.) Now, the grandparent goes far beyond that, and I don’t think I agree with most of the additions. However, the importance of ML sadly seems well-supported.