If AIs simply sold their labor honestly on an open market, they could easily become vastly richer than humans …
I mean, this depends on competition, right? Like it’s not clear that the AIs can reap these gains, because you can just train another AI to compete. (And the main reason this competition argument could fail is that it’s too hard to ensure your AI works for you productively, because ensuring sufficient alignment etc. is too hard. Or legal reasons.)
[Edit: I edited this comment to make it clear that I was just arguing about whether AIs could easily become vastly richer and about the implications of this. I wasn’t trying to argue about theft/murder here though I do probably disagree here also in some important ways.]
Separately, in this sort of scenario, it sounds to me like AIs gain control over a high fraction of the cosmic endowment. Personally, what happens with the cosmic endowment is a high fraction of what I care about (maybe about 95% of what I care about), so this seems probably about as bad as violent takeover (perhaps one difference is in the selection effects on AIs).
I mean, this depends on competition, right? Like it’s not clear that the AIs can reap these gains, because you can just train another AI to compete.
[ETA: Apologies, it appears I misinterpreted you as defending the claim that AIs will have an incentive to steal or commit murder if they are subject to competition.]
That’s true for humans too, at various levels of social organization, and yet I don’t think humans have a strong incentive to kill off or steal from weaker or less intelligent people, countries, etc. To understand what’s going on here, I think it’s important to analyze these arguments within existing economic frameworks, and not because I’m applying a simplistic “AIs will be like humans” argument, but because these frameworks are simply our best existing, empirically validated models of what happens when a bunch of agents with different values and levels of power compete with each other.
In these models, it is generally not accurate to say that powerful agents have strong convergent incentives to kill or steal from weaker agents, which is the primary thing I’m arguing against. Trade in these models is not assumed to happen because all agents are roughly equally powerful, or because they share the same moral views, or because there’s no way to be unseated by cheap competition, and so on. These models generally concern abstract agents of varying levels of power and differing values, across a diverse range of circumstances, and they still predict peaceful trade because of the efficiency advantages of lawful interaction and compromise.
Oh, sorry, to be clear I wasn’t arguing that this results in an incentive to kill or steal. I was just pushing back on a local point that seemed wrong to me.