I think Holden’s The Most Important Century sequence is probably the best reading here, with This Can’t Go On being the post most directly responding to your question (though I think it’ll make more sense if you read the whole sequence).
(Really, I think most of LessWrong’s corpus is aiming to answer this question, with many different posts tackling it from different directions. I don’t know if there’s a single post specifically arguing that the Jetsons world is silly, but there are lots of posts pointing at different intuitions that feed into the question. The Superintelligence FAQ is relevant. Tim Urban’s The Artificial Intelligence Revolution is also relevant.)
The two short answers are:
Even if you somehow get AI to human level and stop, you’d still have a whole new species capable of duplicating itself, which would radically change how the world works.
It’s really unlikely for us to get to human level and then stop, given that that’s not how progress generally works, not how the evolution of intelligence worked in the biological world, and not how our progress in AI has worked in many subdomains so far.
> Even if you somehow get AI to human level and stop, you’d still have a whole new species capable of duplicating itself
Also capable of doing AI research themselves. It would be incredibly strange if automating AI research didn’t accelerate AI research.
> It’s really unlikely for us to get to human level and then stop, given that that’s [...] not how the evolution of intelligence worked in the biological world
Well, evolution did “stop” at human-level intelligence!
I would add that “human-level intelligence” is just a very specific level to be at. Cf. “bald-eagle-level carrying capacity”; on priors you wouldn’t expect airplanes to exactly hit that narrow target out of the full range of physically possible carrying capacities.