The quoted sentence is about what people like Dario Amodei, Miles Brundage, and @Daniel Kokotajlo predict that AI will be able to do by the end of the decade.
And although I haven’t asked them, I would be pretty surprised if I were wrong here, hence “surely.”
In the post, I quoted this bit from Amodei:
It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
Do you really think that he means “it can do ‘any actions, communications, or remote operations enabled by this interface’ with a skill exceeding that of the most capable humans in the world – except for writing blog posts or comments”?
Do you think he would endorse this caveat if I were to ask him about it?
If so, why?
Likewise with Brundage, who writes:
AI that exceeds human performance in nearly every cognitive domain is almost certain to be built and deployed in the next few years.
I mean, he did say “nearly every,” so there are some “cognitive domains” in which this thing is still not superhuman. But do we really think that Brundage thinks “blogging” is likely to be an exception? Seriously?
(Among other things, note that both of these people are talking about AIs that could automate basically any job doable by a remote worker on a computer. There exist remote jobs which require communication skills + having-interesting-ideas skills such that doing them effectively involves “writing interesting blog posts,” just in another venue, e.g. research reports, Slack messages… sometimes these things are even framed as “posts on a company-internal blog” [in my last job I often wrote up my research in posts on a “Confluence blog”].
If you suppose that the AI can do these sorts of jobs, then you either have to infer it’s good at blogging too, or you have to invent some very weirdly shaped generalization failure gerrymandered specifically to avoid this otherwise natural conclusion.)
this is a fair response, and to be honest i was skimming your post a bit. i do think my point somewhat holds, that there is no “intelligence skill tree” where you must unlock the level 1 skills before you progress to level 2.
i think a more fair response to your post is:
companies are trying to make software engineer agents, not bloggers, so the optimization is towards the former.
making a blog that’s actually worth reading is hard. no one reads 99% of blogs.
i wouldn’t act so confident that we aren’t surrounded by LLM comments and posts. are you really sure that everything you’re reading is from a human? all the random comments and posts you see on social media, do you check every single one of them to gauge if they’re human?
lots of dumb bots can just copy posts and content written by other people and still make an impact. scammers and propagandists can just pay an indian or filipino worker $2/hr and get pretty good results. writing original text is not a bottleneck.