Workers regularly trade with billionaires and earn more than $77 in wages, despite vast differences in wealth. Countries trade with each other despite vast differences in military power. In fact, some countries don’t even have military forces, or have only very small ones, and yet do not get invaded by their neighbors or by the United States.
It is possible that these facts are explained by generosity on the part of billionaires and other countries, but the standard social science explanation says that this is not the case. Rather, the standard explanation is that war is usually (though not always) more costly than trade when compromise is a viable option. Thus, people usually choose to trade with each other, rather than go to war, when they want stuff. This is true even in the presence of large differences in power.
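To make that standard logic concrete, here is a minimal sketch (with purely hypothetical numbers) of the usual bargaining argument: because fighting destroys part of the surplus, there is generally a range of negotiated divisions that both sides prefer to war, even when one side is much stronger.

```python
# Minimal sketch of the standard bargaining argument for why war is
# usually costlier than compromise; all numbers are purely hypothetical.

pie = 100.0         # total value at stake
p_strong = 0.9      # probability the stronger side wins a war
cost_strong = 10.0  # the stronger side's cost of fighting (destruction, risk)
cost_weak = 10.0    # the weaker side's cost of fighting

# Expected payoffs if they fight:
war_strong = p_strong * pie - cost_strong    # 80
war_weak = (1 - p_strong) * pie - cost_weak  # 0

# Any peaceful split giving the stronger side between war_strong and
# (pie - war_weak) leaves BOTH sides at least as well off as fighting.
lower, upper = war_strong, pie - war_weak
print(f"Mutually acceptable peaceful splits give the strong side "
      f"between {lower} and {upper} of the pie.")
# Because fighting destroys value, this range is non-empty whenever
# cost_strong + cost_weak > 0, regardless of how lopsided the power is.
```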
I mostly don’t see this post as engaging with any of the best reasons one might expect smarter-than-human AIs to compromise with humans. In contrast to you, I think it’s important that AIs will be created within an existing system of law and property rights. Unlike animals, they’ll be able to communicate with us and make contracts. It therefore seems perfectly plausible for AIs to simply get rich within the system we have already established, and make productive compromises, rather than violently overthrowing the system itself.
That doesn’t rule out the possibility that the future will be very alien, or that it will turn out in a way that humans do not endorse. I’m also not saying that humans will always own all the wealth and control everything permanently forever. I’m simply arguing against the point that smart AIs will automatically turn violent and steal from agents who are less smart than they are, unless they’re value aligned. This is a claim that I don’t think has been established with any reasonable degree of rigor.
As far as I remember, across the last 3,500 years of history, only 8% of that time was entirely without war. The current, relatively peaceful period rests on a unique combination of international law and a post-industrial economy, in which skilled labor is expensive and requires large investments of capital, while resources are relatively cheap. That will not be the case after a singularity, when you can get arbitrary amounts of labor for the price of hardware and resources become the bottleneck.
So, “people usually choose to trade, rather than go to war with each other when they want stuff” is not a very well-supported statement.
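To illustrate this in the same style as the example above, here is a rough sketch with made-up numbers: if the value of seizable resources rises relative to the cost of fighting (because labor, and hence military effort, becomes cheap at hardware prices), the same cost-benefit comparison that favors trade today can tip the other way.

```python
# Hypothetical before/after-singularity comparison of the same
# trade-vs-war calculation; all numbers are made up for illustration.

def net_gain_from_war(resource_value, p_win, fighting_cost, gains_from_trade):
    """Expected gain from seizing resources by force, minus what is
    forgone by not trading instead."""
    return p_win * resource_value - fighting_cost - gains_from_trade

# Today (stylized): skilled labor is expensive, so fighting is costly,
# and resources are cheap relative to the gains from trading.
today = net_gain_from_war(resource_value=20, p_win=0.9,
                          fighting_cost=30, gains_from_trade=50)

# Post-singularity (stylized): labor costs roughly the price of hardware,
# so fighting is cheap, while resources are the bottleneck and far more valuable.
later = net_gain_from_war(resource_value=200, p_win=0.9,
                          fighting_cost=5, gains_from_trade=50)

print(today)  # 0.9*20 - 30 - 50 = -62  -> trading looks better
print(later)  # 0.9*200 - 5 - 50 = 125  -> the calculation can flip
```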