‘In worlds where acausal decision theorists are more consequentialist, we have an increased ability to enter into multiverse-wide acausal trades which are beneficial from the perspective of both parties. We should thus increase the number of consequentialists, so that more trades of this kind are made.’
This only holds to the extent that creating consequentialists has no other downsides, and that they are trading for things we want.
Suppose Omega told me that there are gazillions of powerful agents in other universes who are willing to fill their universes with paperclips in exchange for the creation of one small staple in this universe. This would not encourage me to build a paperclip maximizer. A paperclip maximizer in this universe could gain enormous numbers of paperclips from multiverse-wide cooperation, but I don’t particularly want paperclips, so while the trade benefits both parties, it doesn’t benefit me.
If we are making a friendly AI, we might prefer it to be able to take part in multiverse-wide trades.
This was my reconstruction of Caspar’s argument, which may be wrong. But I took the argument to be that we should promote consequentialism in the world as we find it now, where Omega (fingers crossed!) isn’t going to present me with claims of this sort, and people do not, in general, explicitly optimise for things we greatly disvalue. In this world, if people are more consequentialist, then there is greater potential for positive-sum trades with other agents in the multiverse. Since agents in this world have some overlap with our values, we should encourage consequentialism: consequentialist agents we can causally interact with will get more of what they want, and so we get more of what we want.