I think your audio could stand to be 33% louder, and Phil’s 50% louder.
It still kind of makes me smile how even singularitarian economists lean on “business as usual” assumptions as much as they possibly can. Like, when you brought up how coding new AIs is a job really well suited to “AI scientists” not needing much residual human input, Phil was like “yes, but what about all the other parts of research that we’ll still need humans for?” It seems like he was thinking economist-y straightforward thoughts like “okay, so research in AI could have a really high labor multiplier, but this just means humans are going to be doing other things in the short term, and in the long term are going to be small but highly-paid cogs in an enormous economy that takes advantage of AIs coding AIs.” In contrast, I’d expect that if AI-coding-AIs have their own little labor productivity singularity, this is going to have weird effects on humanity that break a lot of the assumptions behind economic intuitions.
Among other things, Phil’s literature review studies to what extent human labor will be a bottleneck for economic growth as AI substitutes for labor. I agree with you that AI-coding-AIs would have weird effects… but do you agree with the point that it won’t be enough to sustain growth, or are you thinking about other paths where certain bottlenecks might not really be a problem?
I think that humans would still be necessary for human society for a reasonable amount of time (months or more) if things go well. If things don’t go well, we’re toast, which is a pretty big deviation from the economic model. But even if things go well, I think the presence of things like superhuman persuasion leads to a breakdown of the assumptions behind normal economic behavior in humans, even in that period where human labor is still a cost-effective input to the (now superhumanly-planned) economy.
This is a really great introduction!