I got a good way through the setup to your first link. It took a while. If you’d be so kind, could you summarize why you think that rather dense set of posts is so relevant here? What I read didn’t match your link text (“The most efficient negotiation outcome will line up with preference utilitarianism.”) closely enough for this purpose. In some cases, I can get more of my preferences with you eliminated; no negotiation necessary :).
The setup for that post was a single decision, with the failure to cooperate being pretty bad for both parties. The problem here is that that isn’t necessarily the case; the winner can almost take all, depending on their preferences. They can get their desired future in the long run, sacrificing only the short run, which is tiny if you’re really a longtermist. And the post doesn’t seem to address the iterated case; how do you know whether someone’s going to renege after some previous version of them has agreed to “fairly” split the future?
So I don’t understand how the posts you link resolve that concern. Sure with sufficient intelligence you can get “chaa” (from your linked post: “fair”, proportional to power/ability to take what you want), but what if “chaa” is everyone but the first actor dead?
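To make the worry concrete, here is a minimal sketch (my own toy numbers, not from the linked posts) of asymmetric Nash bargaining, the standard way "fair proportional to power" gets formalized: each party's share of a unit pie maximizes (u1 − d1)^a · (u2 − d2)^(1−a), where d_i is the disagreement payoff and a encodes relative power. With zero disagreement payoffs and linear utilities, the solution is just share = power, so as one party's power approaches 1, the "fair" outcome approaches everything for it and nothing for the other:

```python
def asymmetric_nash_shares(power: float) -> tuple[float, float]:
    """Shares of a unit pie under asymmetric Nash bargaining.

    Assumes transferable utility, linear utilities, and zero
    disagreement payoffs; maximizing x**a * (1 - x)**(1 - a)
    over x in [0, 1] gives x = a exactly.
    """
    return power, 1.0 - power

# As the stronger party's power goes to 1, the "chaa" split
# degenerates: the weak party's share goes to zero.
for a in (0.5, 0.9, 0.99, 0.999):
    strong, weak = asymmetric_nash_shares(a)
    print(f"power={a}: strong gets {strong}, weak gets {weak}")
```

Nothing in the fairness notion itself rules out the degenerate endpoint; it only describes the split conditional on the power distribution.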
If the solution is “sharing source code” as in earlier work I don’t think that’s at all applicable to network-based AGI; the three body problem of prediction applies in spades.
Hmm well I’d say it gets into that immediately, but it does so in a fairly abstract way. I’d recommend the whole lot though. It’s generally about what looks like a tendency in the math towards the unity of various bargaining systems.
“The setup for that post was a single decision”
A single decision can be something like “who to be, how to live, from now on”. There isn’t a strict distinction between single decision and all decisions from then on when acts of self-modification are possible, as self-modification changes all future decisions.
On reflection, I’m not sure bargaining theory undermines the point you were making; I do think it’s possible that one party or another would dominate the merger, depending on what the technologies of superintelligent war turn out to be and how much the participants’ utility functions (the Us) care about near-term strife.
But the feasibility of converging towards merger seems like a relevant aspect of all of this.
Transparency aids (or suffices for) negotiation, but there won’t be much of a negotiation if, say, having nukes turns out to be a very weak bargaining chip and the power distribution is just about who gets nanotech[1] first, or if it turns out that human utility functions don’t care as much about loss of life in the near term as they do about owning the entirety of the future. I think the latter is very unlikely and the former is debatable.
I don’t exactly believe in “nanotech”. I think materials science advances continuously and practical molecularly precise manufacturing will tend to look like various iterations of synthetic biology (you need a whole lot of little printer heads in order to make a large enough quantity of stuff to matter). There may be a threshold here, though, which we could call “DNA 2.0” or something, a form of life that uses stronger things than amino acids.