My worry wasn’t about the initial 10%, but about the possibility of the process being iterated such that you end up with almost all bargaining power in the hands of power-keepers.
I’m not sure what you mean here, but also the process is not iterated: the initial bargaining is deciding the outcome once and for all. At least that’s the mathematical ideal we’re approximating.
In the end, I think my concern is that we won’t get buy-in from a large majority of users:
In order to accommodate some proportion of users with odd moral views, it seems likely you’ll be throwing away huge amounts of expected value according to everyone else’s views
I don’t think so? The bargaining system does advantage large groups over small groups.
In practice, I think that for the most part people don’t care much about what happens “far” from them (for some definition of “far”, not physical distance) so giving them private utopias is close to optimal from each individual perspective. Although it’s true they might pretend to care more than they do for the usual reasons, if they’re thinking in “far-mode”.
I would certainly be very concerned about any system that gives even more power to majority views. For example, what if the majority of people are disgusted by gay sex and prefer it not to happen anywhere? I would rather accept things I disapprove of happening far away from me than allow other people to control my own life.
Ofc the system also mandates win-win exchanges. For example, if Alice’s and Bob’s private utopias each contain something strongly unpalatable to the other but not strongly important to the respective customer, the bargaining outcome will remove both unpalatable things.
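To make that concrete, here’s a minimal sketch with made-up utility numbers (Python standing in for the bargaining math; this is just an illustration, not the protocol’s actual disagreement point). Each of Alice and Bob keeps a pet feature worth little to its owner but very costly to the other, and the Nash bargaining solution drops both:

```python
from itertools import product

# Hypothetical utilities: keeping your own pet feature is worth +1 to you,
# while the other's unpalatable feature costs you -10.
def u_alice(alice_keeps, bob_keeps):
    return (1 if alice_keeps else 0) + (-10 if bob_keeps else 0)

def u_bob(alice_keeps, bob_keeps):
    return (1 if bob_keeps else 0) + (-10 if alice_keeps else 0)

# Disagreement point: the naive juxtaposition of the two private utopias,
# with both features present.
d_alice = u_alice(True, True)  # -9
d_bob = u_bob(True, True)      # -9

# Nash bargaining: maximize the product of gains over the disagreement
# point, among outcomes where nobody does worse than disagreement.
def nash_product(alice_keeps, bob_keeps):
    gain_a = u_alice(alice_keeps, bob_keeps) - d_alice
    gain_b = u_bob(alice_keeps, bob_keeps) - d_bob
    return gain_a * gain_b if gain_a >= 0 and gain_b >= 0 else float("-inf")

best = max(product([True, False], repeat=2), key=lambda o: nash_product(*o))
print(best)  # (False, False): both unpalatable features are removed
```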
E.g. if you strong-denose anyone who’s too willing to allow bargaining failure [everyone dies], you might end up filtering out altruists who worry about suffering risks.
I’m fine with strong-denosing negative utilitarians who would truly stick to their guns about negative utilitarianism (but I also don’t think there are many).
Ah, I was just being an idiot on the bargaining system w.r.t. small numbers of people being able to hold it to ransom. Oops. Agreed that more majority power isn’t desirable. [re iteration, I only meant that the bargaining could become iterated if the initial bargaining result were to decide upon iteration (to include more future users). I now don’t think this is particularly significant.]
I think my remaining uncertainty (/confusion) is all related to the issue I first mentioned (embedded copy experiences). It strikes me that something like this can also happen where minds grow/merge/overlap.
This operator will declare both the manifesting and evaluation of the source codes of other users to be “out of scope” for a given user. Hence, a preference of i to observe the suffering of j would be “satisfied” by observing nearly anything, since the maximization can interpret anything as a simulation of j.
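As I read it (a toy sketch of my reading, not the actual construction; all names here are hypothetical), the operator amounts to letting the maximization choose the interpretation wherever i’s utility depends on j, which makes such preferences vacuously satisfiable:

```python
def out_of_scope(utility_i, candidate_interpretations):
    """Toy version of the operator: i's utility may inspect 'j' only
    through an interpretation, and the maximization picks whichever
    interpretation scores best, so nothing about the real j is pinned down."""
    def denosed_utility(world):
        return max(utility_i(world, interp) for interp in candidate_interpretations)
    return denosed_utility

# Example: i wants to observe j suffering. Under the operator, any world
# containing *something* interpretable as "j suffering" maxes this out.
def u_i(world, interp_of_j):
    return 1.0 if interp_of_j(world) == "suffering" else 0.0

interpretations = [
    lambda world: "suffering",  # a liberal reading: anything counts as j suffering
    lambda world: "fine",       # a strict reading
]

denosed_u_i = out_of_scope(u_i, interpretations)
print(denosed_u_i("any world at all"))  # 1.0: the preference is vacuous
```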
Does this avoid the problem if i’s preferences use indirection? It seems to me that a robust pointer to j may be enough: that with a robust pointer it may be possible to implicitly require something like source-code-access without explicitly referencing it. E.g. where i has a preference to “experience j suffering in circumstances where there’s strong evidence it’s actually j suffering, given that these circumstances were the outcome of this bargaining process”.
If i can’t robustly specify things like this, then I’d guess there’d be significant trouble in specifying quite a few (mutually) desirable situations involving other users too. IIUC, this would only be a problem for the denosed bargaining to find a good d1: for the second bargaining on the true utility functions there’s no need to put anything “out of scope” (right?), so win-wins are easily achieved.
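To spell out the worry in the same toy terms (the posterior here is an assumed black box, not anything from the post): if i’s utility only consumes an evidence statistic about whether it’s really j, there’s no interpretation left for the operator above to quantify over:

```python
# Dummy stand-in for the maximizer's world-model posterior (hypothetical).
def posterior_actually_j_suffering(world):
    return 0.999 if "strong evidence of j suffering" in world else 0.0

def u_i_indirect(world, posterior=posterior_actually_j_suffering, threshold=0.99):
    """i's preference via a robust pointer: it never references j's source
    code directly, only the posterior that it's really j suffering."""
    return 1.0 if posterior(world) >= threshold else 0.0

print(u_i_indirect("a world with strong evidence of j suffering"))  # 1.0
print(u_i_indirect("an ordinary world"))                            # 0.0
```

If a predicate like this counts as in-scope, the operator seems to need a stronger notion of scope; if it’s out of scope, then, as above, many mutually desirable arrangements look hard to specify.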