A viewpoint now commonly heard in discussions of AI development holds that AI will be economically impactful but will not upend the dominance of humans. Instead, AI and humans will flourish together, trading and cooperating with one another. This view is particularly popular with a certain kind of libertarian economist: Tyler Cowen, Matthew Barnett, Robin Hanson.
They share the curious conviction that the probability of AI-caused extinction, p(Doom), is negligible. They base this on analogies between AI and previous technological transitions of humanity, like the Industrial Revolution or the development of new communication media. A core assumption is that AIs will not disempower humanity because they will respect the existing legal system, apparently because they can gain from trading with humans.
The most extreme version of the GMU economist view is Hanson’s Age of Em; it hypothesizes radical change in the form of a new species of human-derived uploaded electronic people who, curiously, hold just the same dreary office jobs as we do, only much faster.
Why is there trade & specialization in the first place?
Trade and specialization seem to matter mainly in a world where: there are many individuals; those individuals have different skills and resources; and there is only a limited ability to transfer skills between them.
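To make the intuition concrete, here is a minimal sketch of the standard comparative-advantage arithmetic (the agents, goods, and numbers are invented for illustration): when individuals differ in productivity, specializing and then trading yields more total output than self-sufficiency.

```python
# Toy comparative-advantage arithmetic: two agents, two goods.
# Productivities are units an agent can produce per hour.
# (Agent names and all numbers are made up for illustration.)
prod = {
    "alice": {"food": 4.0, "tools": 1.0},
    "bob":   {"food": 1.0, "tools": 3.0},
}
HOURS = 10.0

def autarky_total(good):
    """Total output of `good` if each agent splits time evenly, no trade."""
    return sum(p[good] * HOURS / 2 for p in prod.values())

def specialized_total(good):
    """Total output if each agent works only on the good they produce best."""
    best = {"food": "alice", "tools": "bob"}  # lower opportunity cost
    return prod[best[good]][good] * HOURS

for good in ("food", "tools"):
    print(good, autarky_total(good), specialized_total(good))
# food: 25.0 without trade vs 40.0 with specialization
# tools: 20.0 without trade vs 30.0 with specialization
```

Specialization beats autarky in both goods here precisely because the agents' skills differ; with identical productivities the gap vanishes, which is the point of the condition above.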
Domain of Biology: direct copying of genes but not brains, yes recombination, no or very low-bandwidth communication
highly adversarial, little cooperation, no planning, much specialization, not centralized, vastly many sovereign individuals
Domain of Economics: no direct copying, yes recombination, medium-bandwidth communication
mildly adversarial, mostly cooperative, medium planning, much specialization, little centralization, many somewhat sovereign individuals
!AIs can copy, share and merge their weights!
Domain of Future AI society: direct copying of brains and machines, yes recombination, very high-bandwidth communication
?minimally adversarial, ?very high cooperation, ?cosmic-scale mathematized planning, ?little specialization, ?high centralization, ?singleton sovereign individual
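The claim that AIs can copy, share, and merge their weights is not hypothetical: simple parameter averaging across same-architecture networks (as in "model souping") already works in practice. Here is a minimal sketch of what merging can literally mean, with small random arrays standing in for network parameters:

```python
import numpy as np

# Two "models" with identical architecture, represented as dicts of
# parameter arrays. Merging by (weighted) averaging of corresponding
# parameters is one concrete sense in which AIs can merge weights.
rng = np.random.default_rng(0)
model_a = {"w1": rng.normal(size=(4, 4)), "b1": rng.normal(size=4)}
model_b = {"w1": rng.normal(size=(4, 4)), "b1": rng.normal(size=4)}

def merge(models, weights=None):
    """Average corresponding parameters across same-shaped models."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    return {
        k: sum(w * m[k] for w, m in zip(weights, models))
        for k in models[0]
    }

merged = merge([model_a, model_b])
# Each merged parameter is the elementwise mean of the inputs.
```

Nothing remotely analogous exists in biology or in human economies: brains cannot be averaged, and skills transfer only slowly through teaching, which is what makes this column of the comparison so different.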
It is often imagined that in a ‘good’ transhumanist future the sovereign AI will be like a loving and caring parent to the billions and trillions of uploads. In this case, while there is one all-powerful Sovereign entity, there are still many individuals who remain free and whose rights are protected, perhaps through cryptographic incantations. The closest cultural artifact would be Iain M. Banks’ Culture series.
There is another, more radical foreboding, wherein the logic of ultra-high-bandwidth sharing of weights is taken to its logical extreme and individuals merge into one transcendent hivemind.
A viewpoint now commonly heard in discussions of AI development holds that AI will be economically impactful but will not upend the dominance of humans. Instead, AI and humans will flourish together, trading and cooperating with one another. This view is particularly popular with a certain kind of libertarian economist: Tyler Cowen, Matthew Barnett, Robin Hanson.
They share the curious conviction that the probability of AI-caused extinction, p(Doom), is negligible. They base this on analogies between AI and previous technological transitions of humanity, like the Industrial Revolution or the development of new communication media. A core assumption is that AIs will not disempower humanity because they will respect the existing legal system, apparently because they can gain from trading with humans.
I think this summarizes my view quite poorly on a number of points. For example, I think that:
AI is likely to be much more impactful than the development of new communication mediums. My default prediction is that AI will fundamentally increase the economic growth rate, rather than merely continuing the trend of the last few centuries.
Biological humans are very unlikely to remain dominant in the future, pretty much no matter how this is measured. Instead, I predict that artificial minds and humans who upgrade their cognition will likely capture the majority of future wealth, political influence, and social power, with non-upgraded biological humans becoming an increasingly small force in the world over time.
The legal system will likely evolve to cope with the challenges of incorporating and integrating non-human minds. This will likely involve a series of fundamental reforms, and will eventually look very different from the idea of “AIs will fit neatly into human social roles and obey human-controlled institutions indefinitely”.
A more accurate description of my view is that humans will become economically obsolete after AGI, but this obsolescence will happen peacefully, without a massive genocide of biological humans. In the scenario I find most likely, humans will have time to prepare and adapt to the changing world, allowing us to secure a comfortable retirement, and/or join the AIs via mind uploading. Trade between AIs and humans will likely persist even into our retirement, but this doesn’t mean that humans will own everything or control the whole legal system forever.
I see, thank you for the clarification. I should have been more careful not to mischaracterize your views.
I do have a question or two about your views, if you would entertain me. You say humans will be economically obsolete and will ‘retire’, but there will still be trade between humans and AI. Does trade here just mean humans consuming, i.e. trading money for AI goods and services? That doesn’t sound like trade in the usual sense of a reciprocal exchange of goods and services.
How many ‘different’ AI individuals do you expect there to be?
Does trade here just mean humans consuming, i.e. trading money for AI goods and services? That doesn’t sound like trade in the usual sense of a reciprocal exchange of goods and services.
Trade can involve anything that someone “owns”, which includes their labor, their property, and any government welfare they receive. Retired people are generally characterized by trading their property and government welfare for goods and services, rather than primarily trading their labor. This is the basic picture I was trying to present.
How many ‘different’ AI individuals do you expect there to be?
I think the answer to this question depends on how we individuate AIs. I don’t think most AIs will be as cleanly separable from each other as humans are, as most (non-robotic) AIs will lack bodies, and will be able to share information with each other more easily than humans can. It’s a bit like asking how many “ant units” there are. There are many individual ants per colony, but each colony can be treated as a unit by itself. I suppose the real answer is that it depends on context and what you’re trying to figure out by asking the question.
Will there be >1 individual per solar system?