Because it’s pretty obvious that there’s at least some chance of AGI etc. happening soon. Many important lines of evidence support this:
--Many renowned experts in AI and AGI forecasting say so (possibly even most of them)
--Just look at ChatGPT4
--Read the Bio Anchors report
--Learn more about AI and deep learning, in particular about scaling laws and the lottery ticket hypothesis; get up to speed on what OpenAI and the other labs are doing; then imagine what sorts of things could be built in the next few years with bigger models, more compute, and more data (one common form of these scaling laws is written out after this list)
--Note the scarcity of any decent object-level argument that it won’t happen soon. Bio Anchors has the best arguments that it won’t happen this decade, IMO. If you know of a better one, I’d be interested in a link or an explanation!
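To make the scaling-law point a bit more concrete, here is one common parametric form of a neural scaling law, the loss curve from Hoffmann et al. 2022 ("Chinchilla"); which specific paper the author has in mind is my assumption, not something stated above:

L(N, D) = E + A/N^\alpha + B/D^\beta

Here N is parameter count, D is training tokens, E is the irreducible loss, and A, B, \alpha, \beta are fitted constants. The relevant observation is that loss keeps falling predictably as N and D grow, which is what turns "imagine what bigger models with more compute and data could do" into a quantitative extrapolation rather than a vague hope.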
Ah, so your complaint is that the author is ignoring evidence pointing to shorter timelines. I understand your position better now :)