This feels like trying hard to come up with arguments for why maybe everything will be okay, rather than searching for the truth. The arguments are all in one direction.
As Daniel and others point out, this still doesn't seem to account for continued progress. You mention that robotics advances would be bad for this picture, but of course they'll happen. The question isn't whether, it's when. Have you been tracking progress in robotics? It's advancing about as rapidly as other types of AI, and for similar reasons.
Horses aren't perfect substitutes for engines; horses have near-perfect autopilot, to give just one example. But pointing out specific flaws seems beside the point when you're not meeting the arguments at their strongest.
I wish economists were taking the scenario seriously. It seems like something about the whole discipline is bending people towards putting their heads in the sand and refusing to address the implications of continued rapid progress in AI and robotics.