I mean, this depends on competition, right? It's not clear that the AIs can reap these gains, because you can just train a competing AI.
[ETA: Apologies, it appears I misinterpreted you as defending the claim that AIs will have an incentive to steal or commit murder if they are subject to competition.]
That’s true for humans too, at various levels of social organization, and yet I don’t think humans have a strong incentive to kill off or steal from weaker or less intelligent people, countries, etc. To understand what’s going on here, I think it’s important to analyze these arguments within existing economic frameworks, not because I’m applying a simplistic “AIs will be like humans” argument, but because these frameworks are our best existing, empirically validated models of what happens when agents with different values and different levels of power compete with each other.
In these models, it is generally not accurate to say that powerful agents have strong convergent incentives to kill or steal from weaker agents, which is the primary claim I’m arguing against. The prediction of trade in these models does not rest on assumptions that all agents are roughly equally powerful, that they share the same moral views, that no one can be unseated by cheap competition, and so on. These models describe abstract agents with varying levels of power and differing values, across a diverse range of circumstances, and they still predict peaceful trade because of the efficiency advantages of lawful interaction and compromise.
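To make the “efficiency advantages of compromise” point concrete, here is a minimal sketch of the standard costly-conflict bargaining argument; the particular formalization and the numbers are illustrative assumptions on my part, not something spelled out in the thread. Because fighting destroys value, even agents of very unequal power share a range of deals that both strictly prefer to conflict.

```python
# Illustrative sketch (my numbers, not from the thread): a standard
# costly-conflict bargaining setup in which agents of unequal power
# still prefer a negotiated split to fighting, because conflict burns value.
#
# Agent A wins a contested prize (normalized to 1) with probability p;
# fighting costs A an expected c_A and B an expected c_B of the prize.

def bargaining_range(p: float, c_A: float, c_B: float) -> tuple[float, float]:
    """Return the interval of splits x (A's share) that BOTH agents
    prefer to fighting: A wants x >= p - c_A, B wants 1 - x >= 1 - p - c_B,
    i.e. x <= p + c_B."""
    return (p - c_A, p + c_B)

# Even with a very lopsided power balance (A wins 95% of the time),
# any positive cost of conflict leaves a non-empty range of peaceful deals.
low, high = bargaining_range(p=0.95, c_A=0.03, c_B=0.03)
print(f"Splits both prefer to fighting: [{low:.2f}, {high:.2f}]")  # [0.92, 0.98]
```

The only point of the sketch is that the bargaining range is non-empty whenever conflict is costly to both sides, no matter how lopsided p is; it is not meant as a full model of AI–human interaction.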
Oh, sorry, to be clear, I wasn’t arguing that this results in an incentive to kill or steal. I was just pushing back on a local point that seemed wrong to me.