Some comments.

[...] We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
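To make the "prices communicate information" point concrete, here is a minimal sketch of a toy tâtonnement-style market in Python; the agent count, valuations, and update rule are all made up for illustration, not a model of ASI economies:

```python
import random

# Toy price formation: many agents, one scarce good.
# All numbers below are illustrative assumptions.
random.seed(0)
N = 1000                  # trading agents
SUPPLY = 300              # "abundance is still finite": units available
values = [random.uniform(0, 100) for _ in range(N)]  # private willingness to pay

price = 50.0
for _ in range(100):
    demand = sum(v > price for v in values)  # agents who would buy at this price
    excess = demand - SUPPLY
    if abs(excess) <= 1:
        break
    price += 0.05 * excess  # raise the price when over-demanded, lower it otherwise

print(f"clearing price ~ {price:.2f} (demand {demand} vs supply {SUPPLY})")
```

The resulting number summarizes scarcity and a thousand private valuations, and no agent needs to see anyone else's books to produce it; that is the sense in which we should expect prices wherever many actors trade.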
Personally, I am reluctant to tell superintelligences how they should coordinate. It feels like some ants looking at the moon and thinking "surely if some animal is going to make it to the moon, it will be a winged ant." Just because market economies have absolutely dominated the period of human development we might call 'civilization', it is not clear that ASIs will not come up with something better.
The era of human achievement in hard sciences will probably end within a few years because of the rate of AI progress in anything with crisp reward signals.
As an experimental physicist, I have opinions about that statement. Doing stuff in the physical world is hard. The business case for AI systems which can drive motor vehicles on the road is obvious to anyone, and yet autonomous vehicles remain the exception rather than the rule. (Yes, regulations are part of that story, but not all of it.) By contrast, the business case for an AI system which can cable up a particle detector is basically non-existent. I can see an AI plugging in all the BNC cables either with a generic mobile robot developed for other purposes, or with a minimum-wage worker wearing a heads-up display as a bio-drone—but more likely in two decades than in a few years.
Of course, experimental physics these days is very much a team effort—the low-hanging fruit has mostly been picked, and nobody is going to discover radium or fission again; at most they will be a small cog in a large machine which discovers the Higgs boson or gravitational waves.[1] So you might argue that experimental physics today is already not a place for peak human excellence (à la the Humanists in Terra Ignota).
More broadly, I agree that if ASI happens, most unaugmented humans are unlikely to stay at the helm of our collective destiny (to the limited degree they ever were). Even if some billionaire manages to align the first ASI to maximize his personal wealth, if he is clever he will obey the ASI just like all the peasants. His agency is reduced to not following the advice of his AI on some trivial matters. (“I have calculated that you should wear a blue shirt today for optimal outcomes.”—“I am willing to take a slight hit in happiness and success by making the suboptimal choice to wear a green shirt, though.”)
Relevant fiction: Scott Alexander, The Whispering Earring.
Also, if we fail to align the first ASI, human inequality will drop to zero.
[1] Of course, being a small cog in some large machine, I will say that.