They are prevented from simulating other pre-existing people without their consent
Why do you think this will be the result of the value aggregation (or a lower bound on how good the aggregation will be)? For example, if there is a big block of people who all want to simulate person X in order to punish that person, and only X and a few other people object, why won’t the value aggregation be “nobody pre-existing except X (and Y and Z etc.) can be simulated”?
Given some assumptions about the domains of the utility functions, it is possible to do better than what I described in the previous comment. Let $X_i$ be the space of possible experience histories[1] of user $i$ and $Y$ the space of everything else the utility functions depend on (things that nobody can observe directly). Suppose that the domain of the utility functions is $Z := \prod_i X_i \times Y$. Then, we can define the “denosing[2] operator” $D_i : C(Z) \to C(Z)$ for user $i$ by
$(D_i u)(x_i, x_{-i}, y) := \max_{x' \in \prod_{j \neq i} X_j} u(x_i, x', y)$
Here, $x_i$ is the argument of $u$ that ranges in $X_i$, $x_{-i}$ are the arguments that range in $X_j$ for $j \neq i$, and $y$ is the argument that ranges in $Y$.
That is, $D_i$ modifies a utility function by having it “imagine” that the experiences of all users other than $i$ have been optimized, with the experiences of user $i$ and the unobservables held constant.
Let $u_i : Z \to \mathbb{R}$ be the utility function of user $i$, and $d^0 \in \mathbb{R}^n$ the initial disagreement point (everyone dying), where $n$ is the number of users. We then perform cooperative bargaining on the denosed utility functions $D_i u_i$ with disagreement point $d^0$, producing some outcome $\mu^0 \in \Delta(Z)$. Define $d^1 \in \mathbb{R}^n$ by $d^1_i := \mathbb{E}_{\mu^0}[u_i]$. Now we do another cooperative bargaining with $d^1$ as the disagreement point and the original utility functions $u_i$. This gives us the final outcome $\mu^1$.
Among other benefits, there is now much less need to remove outliers. Perhaps, instead of removing them, we still want to mitigate them by applying “amplified denosing” to them, which also removes the dependence on $Y$.
For this procedure, there is a much better case that the lower bound will be met.
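As a concreteness check, here is a toy sketch of the two-stage procedure above. Everything in it (two users, tiny finite spaces, random utilities, Nash bargaining over pure outcomes rather than distributions) is an illustrative assumption, not part of the proposal:

```python
import itertools
import numpy as np

# Toy spaces: two users' experience histories X1, X2 and unobservables Y.
X1, X2, Y = range(3), range(3), range(2)
Z = list(itertools.product(X1, X2, Y))

rng = np.random.default_rng(0)
u = [dict(zip(Z, rng.uniform(-1, 1, len(Z)))) for _ in range(2)]  # u_i : Z -> R

def denose(i, ui):
    """(D_i u)(x_i, x_-i, y): imagine the other user's history optimized."""
    def Du(z):
        if i == 0:
            return max(ui[(z[0], xo, z[2])] for xo in X2)
        return max(ui[(xo, z[1], z[2])] for xo in X1)
    return Du

def nash_bargain(utils, d):
    """Nash bargaining over pure outcomes: maximize the product of gains
    over the individually rational set {z : u_i(z) >= d_i}."""
    feasible = [z for z in Z if all(f(z) >= di for f, di in zip(utils, d))]
    return max(feasible, key=lambda z: np.prod([f(z) - di for f, di in zip(utils, d)]))

d0 = [-1.0, -1.0]                                            # everyone dying
z0 = nash_bargain([denose(i, u[i]) for i in range(2)], d0)   # bargain on denosed utilities
d1 = [u[i][z0] for i in range(2)]                            # d1_i := u_i at the first outcome
z1 = nash_bargain([lambda z, ui=ui: ui[z] for ui in u], d1)  # bargain on true utilities
print("first-stage outcome:", z0, "new disagreement point:", d1)
print("final outcome:", z1)
```

The key property the sketch exhibits is that the second bargain can only improve on the denosed outcome from each user's true perspective, since $d^1$ is individually rational by construction.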
This is very interesting (and “denosing operator” is delightful).
Some thoughts:
If I understand correctly, I think there can still be a problem where user i wants an experience history such that part of the history is isomorphic to a simulation of user j suffering (i wants to fully experience j suffering in every detail).
Here a fixed $x_i$ may entail some fixed $x_j$ for (some copy of) some $j$.
It seems the above approach can’t then avoid leaving one of i or j badly off: If i is permitted to freely determine the experience of the embedded j copy, the disagreement point in the second bargaining will bake this in: j may be horrified to see that i wants to experience its copy suffer, but will be powerless to stop it (if i won’t budge in the bargaining).
Conversely, if the embedded j is treated as a user which i will imagine is exactly to i’s liking, but who actually gets what j wants, then the selected $\mu^0$ will be horrible for i (e.g. perhaps i wants to fully experience Hitler suffering, and instead gets to fully experience Hitler’s wildest fantasies being realized).
I don’t think it’s possible to do anything like denosing to avoid this.
It may seem like this isn’t a practical problem, since we could reasonably disallow such embedding. However, I think that’s still tricky since there’s a less exotic version of the issue: my experiences likely already are a collection of subagents’ experiences. Presumably my maximisation over $x_{\mathrm{joe}}$ is permitted to determine all the $x_{\mathrm{subjoe}}$.
It’s hard to see how you draw a principled line here: the ideal future for most people may easily be transhumanist to the point where today’s users are tomorrow’s subpersonalities (and beyond).
A case that may have to be ruled out separately is where i wants to become a suffering j. Depending on what I consider ‘me’, I might be entirely fine with it if ‘I’ wake up tomorrow as suffering j (if I’m done living and think j deserves to suffer). Or perhaps I want to clone myself $10^{10}$ times, and then have all copies convert themselves to suffering js after a while. [in general, it seems there has to be some mechanism to distribute resources reasonably—but it’s not entirely clear what that should be]
I think that a rigorous treatment of such issues will require some variant of IB physicalism (in which the monotonicity problem has been solved, somehow). I am cautiously optimistic that a denosing operator exists there which dodges these problems. This operator will declare both the manifesting and evaluation of the source codes of other users to be “out of scope” for a given user. Hence, a preference of i to observe the suffering of j would be “satisfied” by observing nearly anything, since the maximization can interpret anything as a simulation of j.
The “subjoe” problem is different: it is irrelevant because “subjoe” is not a user, only Joe is a user. All the transhumanist magic that happens later doesn’t change this. Users are people living during the AI launch, and only them. The status of any future (trans/post)humans is determined entirely according to the utility functions of users. Why? For two reasons: (i) the AI can only have access and stable pointers to existing people (ii) we only need the buy-in of existing people to launch the AI. If existing people want future people to be treated well, then they have nothing to worry about since this preference is part of the existing people’s utility functions.
Ah—that’s cool if IB physicalism might address this kind of thing (still on my to-read list).
Agreed that the subjoe thing isn’t directly a problem. My worry is mainly whether it’s harder to rule out i experiencing a simulation of a suffering subj, since subj isn’t a user. However, if you can avoid the suffering js by limiting access to information, the same should presumably work for relevant sub-js.
If existing people want future people to be treated well, then they have nothing to worry about since this preference is part of the existing people’s utility functions.
This isn’t so clear (to me at least) if:
Most, but not all current users want future people to be treated well.
Part of being “treated well” includes being involved in an ongoing bargaining process which decides the AI’s/future’s trajectory.
For instance, suppose initially 90% of people would like to have an iterated bargaining process that includes future (trans/post)humans as users, once they exist. The other 10% are only willing to accept such a situation if they maintain their bargaining power in future iterations (by whatever mechanism).
If you iterate this process, the bargaining process ends up dominated by users who won’t relinquish any power to future users. 90% of initial users might prefer drift over lock-in, but we get lock-in regardless (the disagreement point also amounting to lock-in).
Unless I’m confusing myself, this kind of thing seems like a problem. (not in terms of reaching some non-terrible lower bound, but in terms of realising potential) Wherever there’s this kind of asymmetry/degradation over bargaining iterations, I think there’s an argument for building in a way to avoid it from the start—since anything short of 100% just limits to 0 over time. [it’s by no means clear that we do want to make future people users on an equal footing to today’s people; it just seems to me that we have to do it at step zero or not at all]
Ah—that’s cool if IB physicalism might address this kind of thing
I admit that at this stage it’s unclear because physicalism brings in the monotonicity principle that creates bigger problems than what we discuss here. But maybe some variant can work.
For instance, suppose initially 90% of people would like to have an iterated bargaining process that includes future (trans/post)humans as users, once they exist. The other 10% are only willing to accept such a situation if they maintain their bargaining power in future iterations (by whatever mechanism).
Roughly speaking, in this case the 10% preserve their 10% of the power forever. I think it’s fine because I want the buy-in of this 10% and the cost seems acceptable to me. I’m also not sure there is any viable alternative which doesn’t have even bigger problems.
Sure, I’m not sure there’s a viable alternative either. This kind of approach seems promising—but I want to better understand any downsides.
My worry wasn’t about the initial 10%, but about the possibility of the process being iterated such that you end up with almost all bargaining power in the hands of power-keepers.
In retrospect, this is probably silly: if there’s a designable-by-us mechanism that better achieves what we want, the first bargaining iteration should find it. If not, then what I’m gesturing at must either be incoherent, or not endorsed by the 10% - so hard-coding it into the initial mechanism wouldn’t get the buy-in of the 10% to the extent that they understood the mechanism.
In the end, I think my concern is that we won’t get buy-in from a large majority of users: In order to accommodate some proportion with odd moral views it seems likely you’ll be throwing away huge amounts of expected value in others’ views—if I’m correctly interpreting your proposal (please correct me if I’m confused).
Is this where you’d want to apply amplified denosing? So, rather than filtering out the undesirable i, for these i you use:
$(D_i u)(x_i, x_{-i}, y) := \max_{x' \in \prod_{j \neq i} X_j,\; y' \in Y} u(x_i, x', y')$ [i.e. ignoring y and imagining it’s optimal]
However, it’s not clear to me how we’d decide who gets strong denosing (clearly not everyone, or we don’t pick a y). E.g. if you strong-denose anyone who’s too willing to allow bargaining failure [everyone dies] you might end up filtering out altruists who worry about suffering risks. Does that make sense?
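A minimal sketch of the difference between ordinary and amplified denosing, on finite toy spaces (all sizes and utility values are arbitrary assumptions for illustration):

```python
import itertools

# Finite toy spaces for one user i: own histories Xi, the other user's
# histories Xo, and unobservables Y. (Sizes are arbitrary assumptions.)
Xi, Xo, Y = range(2), range(3), range(2)
u = {z: hash(z) % 7 - 3 for z in itertools.product(Xi, Xo, Y)}  # arbitrary utility

def denose(u):
    """Ordinary denosing: imagine the other user's history optimized."""
    return {(xi, xo, y): max(u[(xi, xo2, y)] for xo2 in Xo)
            for (xi, xo, y) in u}

def amplified_denose(u):
    """Amplified denosing: also imagine the unobservables y optimized,
    so the result depends on x_i alone."""
    return {(xi, xo, y): max(u[(xi, xo2, y2)] for xo2 in Xo for y2 in Y)
            for (xi, xo, y) in u}

Du, DAu = denose(u), amplified_denose(u)
# Amplification can only raise the imagined value, and the amplified
# function is constant in everything except x_i.
assert all(DAu[z] >= Du[z] >= u[z] for z in u)
```

This makes visible why strong denosing neutralizes a user's influence over $y$: their denosed utility no longer registers anything outside their own experiences.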
My worry wasn’t about the initial 10%, but about the possibility of the process being iterated such that you end up with almost all bargaining power in the hands of power-keepers.
I’m not sure what you mean here, but also the process is not iterated: the initial bargaining is deciding the outcome once and for all. At least that’s the mathematical ideal we’re approximating.
In the end, I think my concern is that we won’t get buy-in from a large majority of users:
In order to accommodate some proportion with odd moral views it seems likely you’ll be throwing away huge amounts of expected value in others’ views
I don’t think so? The bargaining system does advantage large groups over small groups.
In practice, I think that for the most part people don’t care much about what happens “far” from them (for some definition of “far”, not physical distance) so giving them private utopias is close to optimal from each individual perspective. Although it’s true they might pretend to care more than they do for the usual reasons, if they’re thinking in “far-mode”.
I would certainly be very concerned about any system that gives even more power to majority views. For example, what if the majority of people are disgusted by gay sex and prefer it not to happen anywhere? I would rather accept things I disapprove of happening far away from me than allow other people to control my own life.
Of course the system also mandates win-win exchanges. For example, if Alice’s and Bob’s private utopias each contain something strongly unpalatable to the other but not strongly important to the respective customer, the bargaining outcome will remove both unpalatable things.
E.g. if you strong-denose anyone who’s too willing to allow bargaining failure [everyone dies] you might end up filtering out altruists who worry about suffering risks.
I’m fine with strong-denosing negative utilitarians who would truly stick to their guns about negative utilitarianism (but I also don’t think there are many).
Ah, I was just being an idiot on the bargaining system w.r.t. small numbers of people being able to hold it to ransom. Oops. Agreed that more majority power isn’t desirable. [re iteration, I only meant that the bargaining could become iterated if the initial bargaining result were to decide upon iteration (to include more future users). I now don’t think this is particularly significant.]
I think my remaining uncertainty (/confusion) is all related to the issue I first mentioned (embedded copy experiences). It strikes me that something like this can also happen where minds grow/merge/overlap.
This operator will declare both the manifesting and evaluation of the source codes of other users to be “out of scope” for a given user. Hence, a preference of i to observe the suffering of j would be “satisfied” by observing nearly anything, since the maximization can interpret anything as a simulation of j.
Does this avoid the problem if i’s preferences use indirection? It seems to me that a robust pointer to j may be enough: that with a robust pointer it may be possible to implicitly require something like source-code-access without explicitly referencing it. E.g. where i has a preference to “experience j suffering in circumstances where there’s strong evidence it’s actually j suffering, given that these circumstances were the outcome of this bargaining process”.
If i can’t robustly specify things like this, then I’d guess there’d be significant trouble in specifying quite a few (mutually) desirable situations involving other users too. IIUC, this would only be any problem for the denosed bargaining to find a good $d^1$: for the second bargaining on the true utility functions there’s no need to put anything “out of scope” (right?), so win-wins are easily achieved.
I’m imagining cooperative bargaining between all users, where the disagreement point is everyone dying[1][2] (this is a natural choice assuming that if we don’t build aligned TAI we get paperclips). This guarantees that every user will receive an outcome that’s at least not worse than death.
With Nash bargaining, we can still get issues for (in)famous people that millions of people want to do unpleasant things to. Their outcome will be better than death, but maybe worse than in my claimed “lower bound”.
With Kalai-Smorodinsky bargaining things look better, since essentially we’re maximizing a minimum over all users. This should admit my lower bound, unless it is somehow disrupted by enormous asymmetries in the maximal payoffs of different users.
In either case, we might need to do some kind of outlier filtering: if e.g. literally every person on Earth is a user, then maybe some of them are utterly insane in ways that cause the Pareto frontier to collapse.
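The Nash-vs-Kalai-Smorodinsky contrast above can be seen in a toy payoff table (all numbers invented for illustration): user 0 is the (in)famous person X, and users 1–3 enjoy outcomes that are unpleasant for X.

```python
import numpy as np

# Rows = candidate outcomes, columns = users. Disagreement normalized to 0.
U = np.array([
    [0.2, 0.9, 0.9, 0.9],   # the bloc punishes X (X barely above death)
    [0.5, 0.6, 0.6, 0.6],   # compromise
    [0.9, 0.2, 0.2, 0.2],   # X fully protected
])
d = np.zeros(4)

# Nash: maximize the product of gains over the disagreement point.
nash = np.argmax(np.prod(U - d, axis=1))

# Kalai-Smorodinsky (discrete version): maximize the minimum gain
# relative to each user's best feasible payoff.
ideal = U.max(axis=0)
ks = np.argmax(np.min((U - d) / (ideal - d), axis=1))

print("Nash picks row", nash, "->", U[nash])   # row 0: the bloc-favoring outcome
print("KS picks row  ", ks, "->", U[ks])       # row 1: the compromise
```

With these numbers, Nash selects the bloc's preferred outcome (X gets 0.2, better than death but not by much), while KS selects the compromise, since it is effectively maximizing a minimum over users.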
Bargaining assumes we can access the utility function. In reality, even if we solve the value learning problem in the single user case, once you go to the multi-user case it becomes a mechanism design problem: users have incentives to lie / misrepresent their utility functions. A perfect solution might be impossible, but I proposed mitigating this by assigning each user a virtual “AI lawyer” that provides optimal input on their behalf into the bargaining system. In this case they at least have no incentive to lie to the lawyer, and the outcome will not be skewed in favor of users who are better in this game, but we don’t get the optimal bargaining solution either.
All of this assumes the TAI is based on some kind of value learning. If the first-stage TAI is based on something else, the problem might become easier or harder. Easier because the first-stage TAI will produce better solutions to the multi-user problem for the second-stage TAI. Harder because it can allow the small group of people controlling it to impose their own preferences.
For IDA-of-imitation, democratization seems like a hard problem because the mechanism by which IDA-of-imitation solves AI risk is precisely by empowering a small group of people over everyone else (since the source of AI risk comes from other people launching unaligned TAI). Adding transparency can entirely undermine safety.
For quantilized debate, adding transparency opens us to an attack vector where the AI manipulates public opinion. This significantly lowers the optimization pressure bar for manipulation, compared to manipulating the (carefully selected) judges, which might undermine the key assumption that effective dishonest strategies are harder to find than effective honest strategies.
This can be formalized by literally having the AI consider the possibility of optimizing for some unaligned utility function. This is a weird and risky approach, but it works to a first approximation.
Bargaining assumes we can access the utility function. In reality, even if we solve the value learning problem in the single user case, once you go to the multi-user case it becomes a mechanism design problem: users have incentives to lie / misrepresent their utility functions. A perfect solution might be impossible, but I proposed mitigating this by assigning each user a virtual “AI lawyer” that provides optimal input on their behalf into the bargaining system. In this case they at least have no incentive to lie to the lawyer, and the outcome will not be skewed in favor of users who are better in this game, but we don’t get the optimal bargaining solution either.
Assuming each lawyer has the same incentive to lie as its client, it has an incentive to misrepresent that some preferable-to-death outcomes are “worse-than-death” (in order to force those outcomes out of the set of “feasible agreements” in hope of getting a more preferred outcome as the actual outcome), and this at equilibrium is balanced by the marginal increase in the probability of getting “everyone dies” as the outcome (due to feasible agreements becoming a null set) caused by the lie. So the probability of “everyone dies” in this game has to be non-zero.
(It’s the same kind of problem as in the AI race or tragedy of commons: people not taking into account the full social costs of their actions as they reach for private benefits.)
Of course in actuality everyone dying may not be a realistic consequence of failure to reach agreement, but if the real consequence is better than that, and the AI lawyers know this, they would be more willing to lie since the perceived downside of lying would be smaller, so you end up with a higher chance of no agreement.
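The incentive to misreport can be illustrated with a two-player Nash bargain over splitting a unit surplus (the setup and numbers are mine, purely for illustration):

```python
def nash_split(d1, d2, grid=1000):
    """Nash bargaining over splitting a unit surplus, given the *reported*
    disagreement payoffs d1, d2. Returns the split (s, 1-s), or None if
    the reports leave no feasible agreement."""
    feasible = [i / grid for i in range(grid + 1)
                if i / grid >= d1 and 1 - i / grid >= d2]
    if not feasible:
        return None  # breakdown: both fall back to the *true* disagreement point
    s = max(feasible, key=lambda s: (s - d1) * (1 - s - d2))
    return s, 1 - s

print(nash_split(0.0, 0.0))   # honest reports: (0.5, 0.5)
print(nash_split(0.4, 0.0))   # player 1 inflates their "worse-than-death" line: gets 0.7
print(nash_split(0.6, 0.6))   # both inflate: None, i.e. no agreement
```

Unilaterally inflating the reported disagreement payoff shifts the solution in the liar's favor, but if both sides inflate far enough the feasible set is empty and the outcome is the true disagreement point, which is the non-zero breakdown probability at equilibrium described above.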
Yes, it’s not a very satisfactory solution. Some alternative/complementary solutions:
Somehow use non-transformative AI to do my mind uploading, and then have the TAI learn by inspecting the uploads. Would be great for single-user alignment as well.
Somehow use non-transformative AI to create perfect lie detectors, and use this to enforce honesty in the mechanism. (But, is it possible to detect self-deception?)
Have the TAI learn from past data which wasn’t affected by the incentives created by the TAI. (But, is there enough information there?)
Shape the TAI’s prior about human values in order to rule out at least the most blatant lies.
Some clever mechanism design I haven’t thought of. The problem with this is, most mechanism designs rely on money, and money doesn’t seem applicable here; without money, there are many impossibility theorems.
In either case, we might need to do some kind of outlier filtering: if e.g. literally every person on Earth is a user, then maybe some of them are utterly insane in ways that cause the Pareto frontier to collapse.
This seems near guaranteed to me: a non-zero amount of people will be that crazy (in our terms), so filtering will be necessary.
Then I’m curious about how we draw the line on outlier filtering. What filtering rule do we use? I don’t yet see a good principled rule (e.g. if we want to throw out people who’d collapse agreement to the disagreement point, there’s more than one way to do that).
Maybe crazy behaviour correlates with less intelligence
Depending what we mean by ‘crazy’ I think that’s unlikely—particularly when what we care about here are highly unusual moral stances. I’d see intelligence as a multiplier, rather than something which points you in the ‘right’ direction. Outliers will be at both extremes of intelligence—and I think you’ll get a much wider moral variety on the high end.
For instance, I don’t think you’ll find many low-intelligence antinatalists—and here I mean the stronger, non-obvious claim: not simply that most people calling themselves antinatalists, or advocating for antinatalism will have fairly high intelligence, but rather that most people with such a moral stance (perhaps not articulated) will have fairly high intelligence.
Generally, I think there are many weird moral stances you might think your way into that you’d be highly unlikely to find ‘naturally’ (through e.g. absorption of cultural norms). I’d also expect creativity to positively correlate with outlier moralities. Minds that habitually throw together seven disparate concepts will find crazier notions than those which don’t get beyond three.
First, I think we want to be thinking in terms of [personal morality we’d reflectively endorse] rather than [all the base, weird, conflicting… drivers of behaviour that happen to be in our heads].
There are things most of us would wish to change about ourselves if we could. There’s no sense in baking them in for all eternity (or bargaining on their behalf), just because they happen to form part of what drives us now. [though one does have to be a bit careful here, since it’s easy to miss the upside of qualities we regard as flaws]
With this in mind, reflectively endorsed antinatalism really is a problem: yes, some people will endorse sacrificing everything just to get to a world where there’s no suffering (because there are no people).
Note that the kinds of bargaining approach Vanessa is advocating are aimed at guaranteeing a lower bound for everyone (who’s not pre-filtered out) - so you only need to include one person with a particularly weird view to fail to reach a sensible bargain. [though her most recent version should avoid this]
In the standard RL formalism this is the space of action-observation sequences (A×O)ω.
From the expression “nosy preferences”, see e.g. here.
This is very interesting (and “denosing operator” is delightful).
Some thoughts:
If I understand correctly, I think there can still be a problem where user i wants an experience history such that part of the history is isomorphic to a simulation of user j suffering (i wants to fully experience j suffering in every detail).
Here a fixed xi may entail some fixed xj for (some copy of) some j.
It seems the above approach can’t then avoid leaving one of i or j badly off:
If i is permitted to freely determine the experience of the embedded j copy, the disagreement point in the second bargaining will bake this in: j may be horrified to see that i wants to experience its copy suffer, but will be powerless to stop it (if i won’t budge in the bargaining).
Conversely, if the embedded j is treated as a user which i will imagine is exactly to i’s liking, but who actually gets what j wants, then the selected μ0 will be horrible for i (e.g. perhaps i wants to fully experience Hitler suffering, and instead gets to fully experience Hitler’s wildest fantasies being realized).
I don’t think it’s possible to do anything like denosing to avoid this.
It may seem like this isn’t a practical problem, since we could reasonably disallow such embedding. However, I think that’s still tricky since there’s a less exotic version of the issue: my experiences likely already are a collection of subagents’ experiences. Presumably my maximisation over xjoe is permitted to determine all the xsubjoe.
It’s hard to see how you draw a principled line here: the ideal future for most people may easily be transhumanist to the point where today’s users are tomorrow’s subpersonalities (and beyond).
A case that may have to be ruled out separately is where i wants to become a suffering j. Depending on what I consider ‘me’, I might be entirely fine with it if ‘I’ wake up tomorrow as suffering j (if I’m done living and think j deserves to suffer).
Or perhaps I want to clone myself 1010 times, and then have all copies convert themselves to suffering js after a while. [in general, it seems there has to be some mechanism to distribute resources reasonably—but it’s not entirely clear what that should be]
I think that a rigorous treatment of such issues will require some variant of IB physicalism (in which the monotonicity problem has been solved, somehow). I am cautiously optimistic that a denosing operator exists there which dodges these problems. This operator will declare both the manifesting and evaluation of the source codes of other users to be “out of scope” for a given user. Hence, a preference of i to observe the suffering of j would be “satisfied” by observing nearly anything, since the maximization can interpret anything as a simulation of j.
The “subjoe” problem is different: it is irrelevant because “subjoe” is not a user, only Joe is a user. All the transhumanist magic that happens later doesn’t change this. Users are people living during the AI launch, and only them. The status of any future (trans/post)humans is determined entirely according to the utility functions of users. Why? For two reasons: (i) the AI can only have access and stable pointers to existing people (ii) we only need the buy-in of existing people to launch the AI. If existing people want future people to be treated well, then they have nothing to worry about since this preference is part of the existing people’s utility functions.
Ah—that’s cool if IB physicalism might address this kind of thing (still on my to-read list).
Agreed that the subjoe thing isn’t directly a problem. My worry is mainly whether it’s harder to rule out i experiencing a simulation of xsubj−suffering, since subj isn’t a user. However, if you can avoid the suffering js by limiting access to information, the same should presumably work for relevant sub-js.
This isn’t so clear (to me at least) if:
Most, but not all current users want future people to be treated well.
Part of being “treated well” includes being involved in an ongoing bargaining process which decides the AI’s/future’s trajectory.
For instance, suppose initially 90% of people would like to have an iterated bargaining process that includes future (trans/post)humans as users, once they exist. The other 10% are only willing to accept such a situation if they maintain their bargaining power in future iterations (by whatever mechanism).
If you iterate this process, the bargaining process ends up dominated by users who won’t relinquish any power to future users. 90% of initial users might prefer drift over lock-in, but we get lock-in regardless (the disagreement point also amounting to lock-in).
Unless I’m confusing myself, this kind of thing seems like a problem. (not in terms of reaching some non-terrible lower bound, but in terms of realising potential)
Wherever there’s this kind of asymmetry/degradation over bargaining iterations, I think there’s an argument for building in a way to avoid it from the start—since anything short of 100% just limits to 0 over time. [it’s by no means clear that we do want to make future people users on an equal footing to today’s people; it just seems to me that we have to do it at step zero or not at all]
I admit that at this stage it’s unclear because physicalism brings in the monotonicity principle that creates bigger problems than what we discuss here. But maybe some variant can work.
Roughly speaking, in this case the 10% preserve their 10% of the power forever. I think it’s fine because I want the buy-in of this 10% and the cost seems acceptable to me. I’m also not sure there is any viable alternative which doesn’t have even bigger problems.
Sure, I’m not sure there’s a viable alternative either. This kind of approach seems promising—but I want to better understand any downsides.
My worry wasn’t about the initial 10%, but about the possibility of the process being iterated such that you end up with almost all bargaining power in the hands of power-keepers.
In retrospect, this is probably silly: if there’s a designable-by-us mechanism that better achieves what we want, the first bargaining iteration should find it. If not, then what I’m gesturing at must either be incoherent, or not endorsed by the 10% - so hard-coding it into the initial mechanism wouldn’t get the buy-in of the 10% to the extent that they understood the mechanism.
In the end, I think my concern is that we won’t get buy-in from a large majority of users:
In order to accommodate some proportion with odd moral views it seems likely you’ll be throwing away huge amounts of expected value in others’ views—if I’m correctly interpreting your proposal (please correct me if I’m confused).
Is this where you’d want to apply amplified denosing?
So, rather than filtering out the undesirable i, for these i you use:
(Diu)(xi,x−i,y):=maxx′∈∏j≠iXj, y′∈Yu(xi,x′,y′) [i.e. ignoring y and imagining it’s optimal]
However, it’s not clear to me how we’d decide who gets strong denosing (clearly not everyone, or we don’t pick a y). E.g. if you strong-denose anyone who’s too willing to allow bargaining failure [everyone dies] you might end up filtering out altruists who worry about suffering risks.
Does that make sense?
I’m not sure what you mean here, but also the process is not iterated: the initial bargaining is deciding the outcome once and for all. At least that’s the mathematical ideal we’re approximating.
I don’t think so? The bargaining system does advantage large groups over small groups.
In practice, I think that for the most part people don’t care much about what happens “far” from them (for some definition of “far”, not physical distance) so giving them private utopias is close to optimal from each individual perspective. Although it’s true they might pretend to care more than they do for the usual reasons, if they’re thinking in “far-mode”.
I would certainly be very concerned about any system that gives even more power to majority views. For example, what if the majority of people are disgusted by gay sex and prefer it not the happen anywhere? I would rather accept things I disapprove of happening far away from me than allow other people to control my own life.
Ofc the system also mandates win-win exchanges. For example, if Alice’s and Bob’s private utopias each contain something strongly unpalatable to the other but not strongly important to the respective customer, the bargaining outcome will remove both unpalatable things.
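A toy version of such a win-win exchange, with invented payoffs:

```python
# Toy numbers (mine, for illustration): each private utopia contains a
# feature that is mildly valuable to its owner (+0.1) but strongly
# unpalatable to the other user (-0.8).
def u_alice(alice_feature, bob_feature):
    return 1.0 + (0.1 if alice_feature else 0.0) - (0.8 if bob_feature else 0.0)

def u_bob(alice_feature, bob_feature):
    return 1.0 + (0.1 if bob_feature else 0.0) - (0.8 if alice_feature else 0.0)

# Removing both features is a Pareto improvement over keeping both, so the
# bargaining outcome includes this win-win exchange:
assert u_alice(False, False) > u_alice(True, True)  # 1.0 > 0.3
assert u_bob(False, False) > u_bob(True, True)
```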
I’m fine with strong-denosing negative utilitarians who would truly stick to their guns about negative utilitarianism (but I also don’t think there are many).
Ah, I was just being an idiot on the bargaining system w.r.t. small numbers of people being able to hold it to ransom. Oops. Agreed that more majority power isn’t desirable.
[re iteration, I only meant that the bargaining could become iterated if the initial bargaining result were to decide upon iteration (to include more future users). I now don’t think this is particularly significant.]
I think my remaining uncertainty (/confusion) is all related to the issue I first mentioned (embedded copy experiences). It strikes me that something like this can also happen where minds grow/merge/overlap.
Does this avoid the problem if i’s preferences use indirection? It seems to me that a robust pointer to j may be enough: that with a robust pointer it may be possible to implicitly require something like source-code-access without explicitly referencing it. E.g. where i has a preference to “experience j suffering in circumstances where there’s strong evidence it’s actually j suffering, given that these circumstances were the outcome of this bargaining process”.
If i can’t robustly specify things like this, then I’d guess there’d be significant trouble in specifying quite a few (mutually) desirable situations involving other users too. IIUC, this would only be a problem for the denosed bargaining used to find a good d1: for the second bargaining on the true utility functions there’s no need to put anything “out of scope” (right?), so win-wins are easily achieved.
I’m imagining cooperative bargaining between all users, where the disagreement point is everyone dying[1][2] (this is a natural choice assuming that if we don’t build aligned TAI we get paperclips). This guarantees that every user will receive an outcome that’s at least not worse than death.
With Nash bargaining, we can still get issues for (in)famous people that millions of people want to do unpleasant things to. Their outcome will be better than death, but maybe worse than in my claimed “lower bound”.
With Kalai-Smorodinsky bargaining things look better, since essentially we’re maximizing a minimum over all users. This should admit my lower bound, unless it is somehow disrupted by enormous asymmetries in the maximal payoffs of different users.
In either case, we might need to do some kind of outlier filtering: if e.g. literally every person on Earth is a user, then maybe some of them are utterly insane in ways that cause the Pareto frontier to collapse.
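To make the Nash vs. Kalai-Smorodinsky contrast concrete, here is a toy computation over a finite set of candidate outcomes (the payoff numbers are invented, and this is a discrete stand-in for the real solutions, which are defined over distributions on Z):

```python
import numpy as np

# Toy sketch (payoff numbers are mine, not from the post): two users, a
# finite set of candidate outcomes with payoff pairs, and a disagreement
# point d = (0, 0) ("everyone dies", normalized to zero for both users).
outcomes = np.array([
    [0.9, 0.2],  # great for user 1, mediocre for user 2
    [0.2, 0.9],  # the reverse
    [0.6, 0.6],  # compromise
])
d = np.array([0.0, 0.0])

# Nash bargaining: maximize the product of gains over the disagreement point.
nash_idx = int(np.argmax(np.prod(outcomes - d, axis=1)))

# Kalai-Smorodinsky (discrete approximation): maximize the minimum of each
# user's gain normalized by their best feasible payoff -- a maximin flavor,
# which is why it behaves better for users the majority wants to mistreat.
ideal = outcomes.max(axis=0)
ks_idx = int(np.argmax(((outcomes - d) / (ideal - d)).min(axis=1)))

assert nash_idx == ks_idx == 2  # both solutions pick the compromise here
```

In this symmetric example the two solutions coincide; the comment's point is that they come apart when the feasible set is lopsided against a few users, where the maximin character of Kalai-Smorodinsky gives the stronger lower bound.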
[EDIT: see improved solution]
Bargaining assumes we can access the utility function. In reality, even if we solve the value learning problem in the single-user case, once you go to the multi-user case it becomes a mechanism design problem: users have incentives to lie / misrepresent their utility functions. A perfect solution might be impossible, but I proposed mitigating this by assigning each user a virtual “AI lawyer” that provides optimal input on their behalf into the bargaining system. In this case they at least have no incentive to lie to the lawyer, and the outcome will not be skewed in favor of users who are better at this game, but we don’t get the optimal bargaining solution either.
All of this assumes the TAI is based on some kind of value learning. If the first-stage TAI is based on something else, the problem might become easier or harder. Easier because the first-stage TAI will produce better solutions to the multi-user problem for the second-stage TAI. Harder because it can allow the small group of people controlling it to impose their own preferences.
For IDA-of-imitation, democratization seems like a hard problem because the mechanism by which IDA-of-imitation solves AI risk is precisely by empowering a small group of people over everyone else (since AI risk comes from other people launching unaligned TAI). Adding transparency can entirely undermine safety.
For quantilized debate, adding transparency opens us to an attack vector where the AI manipulates public opinion. This significantly lowers the optimization pressure bar for manipulation, compared to manipulating the (carefully selected) judges, which might undermine the key assumption that effective dishonest strategies are harder to find than effective honest strategies.
This can be formalized by literally having the AI consider the possibility of optimizing for some unaligned utility function. This is a weird and risky approach but it works to 1st approximation.
An alternative choice of disagreement point is maximizing the utility of a randomly chosen user. This has advantages and disadvantages.
Assuming each lawyer has the same incentive to lie as its client, it has an incentive to misrepresent some preferable-to-death outcomes as “worse than death”, in order to force those outcomes out of the set of feasible agreements in the hope of getting a more preferred outcome as the actual outcome. At equilibrium, this is balanced by the marginal increase in the probability of getting “everyone dies” as the outcome (due to the set of feasible agreements becoming null) caused by the lie. So the probability of “everyone dies” in this game has to be non-zero.
(It’s the same kind of problem as in the AI race or tragedy of commons: people not taking into account the full social costs of their actions as they reach for private benefits.)
Of course in actuality everyone dying may not be a realistic consequence of failure to reach agreement, but if the real consequence is better than that, and the AI lawyers know this, they would be more willing to lie since the perceived downside of lying would be smaller, so you end up with a higher chance of no agreement.
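The incentive calculation in the comment above can be sketched with made-up numbers:

```python
# Toy numbers (mine, for illustration): a lawyer weighs declaring some
# preferable-to-death outcome "worse than death" to shrink the feasible
# set, against the risk that the feasible set becomes empty.
u_honest = 0.5     # client's payoff from reporting truthfully
u_lie = 0.7        # payoff if the lie succeeds in forcing a better outcome
u_breakdown = 0.0  # payoff if no feasible agreement remains ("everyone dies")

def expected_value_of_lying(p_breakdown):
    """Expected payoff of lying, given the breakdown probability it induces."""
    return (1 - p_breakdown) * u_lie + p_breakdown * u_breakdown

# Lying pays exactly when the induced breakdown risk is small enough, so at
# equilibrium the breakdown probability cannot be zero:
assert expected_value_of_lying(0.1) > u_honest  # lie is worth the risk
assert expected_value_of_lying(0.4) < u_honest  # lie no longer pays
```

This also shows why a milder real-world consequence of disagreement raises the chance of no agreement: raising `u_breakdown` above 0 makes lying profitable at higher breakdown probabilities.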
Yes, it’s not a very satisfactory solution. Some alternative/complementary solutions:
Somehow use non-transformative AI to do my mind uploading, and then have the TAI learn by inspecting the uploads. Would be great for single-user alignment as well.
Somehow use non-transformative AI to create perfect lie detectors, and use this to enforce honesty in the mechanism. (But, is it possible to detect self-deception?)
Have the TAI learn from past data which wasn’t affected by the incentives created by the TAI. (But, is there enough information there?)
Shape the TAI’s prior about human values in order to rule out at least the most blatant lies.
Some clever mechanism design I haven’t thought of. The problem with this is, most mechanism designs rely on money, and money doesn’t seem applicable here, whereas without money there are many impossibility theorems.
This seems near-guaranteed to me: a non-zero number of people will be that crazy (in our terms), so filtering will be necessary.
Then I’m curious about how we draw the line on outlier filtering. What filtering rule do we use? I don’t yet see a good principled rule (e.g. if we want to throw out people who’d collapse agreement to the disagreement point, there’s more than one way to do that).
Depending on what we mean by ‘crazy’ I think that’s unlikely—particularly when what we care about here are highly unusual moral stances. I’d see intelligence as a multiplier, rather than something which points you in the ‘right’ direction. Outliers will be at both extremes of intelligence—and I think you’ll get a much wider moral variety on the high end.
For instance, I don’t think you’ll find many low-intelligence antinatalists—and here I mean the stronger, non-obvious claim: not simply that most people calling themselves antinatalists, or advocating for antinatalism will have fairly high intelligence, but rather that most people with such a moral stance (perhaps not articulated) will have fairly high intelligence.
Generally, I think there are many weird moral stances you might think your way into that you’d be highly unlikely to find ‘naturally’ (through e.g. absorption of cultural norms).
I’d also expect creativity to positively correlate with outlier moralities. Minds that habitually throw together seven disparate concepts will find crazier notions than those which don’t get beyond three.
First, I think we want to be thinking in terms of [personal morality we’d reflectively endorse] rather than [all the base, weird, conflicting… drivers of behaviour that happen to be in our heads].
There are things most of us would wish to change about ourselves if we could. There’s no sense in baking them in for all eternity (or bargaining on their behalf), just because they happen to form part of what drives us now. [though one does have to be a bit careful here, since it’s easy to miss the upside of qualities we regard as flaws]
With this in mind, reflectively endorsed antinatalism really is a problem: yes, some people will endorse sacrificing everything just to get to a world where there’s no suffering (because there are no people).
Note that the kinds of bargaining approach Vanessa is advocating are aimed at guaranteeing a lower bound for everyone (who’s not pre-filtered out) - so you only need to include one person with a particularly weird view to fail to reach a sensible bargain. [though her most recent version should avoid this]