Hmm, well, I’d say it gets into that immediately, but it does so in a fairly abstract way. I’d recommend the whole lot, though. It’s generally about what looks like a tendency in the math towards the unity of various bargaining systems.
The setup for that post was a single decision.
A single decision can be something like “who to be, how to live, from now on”. There isn’t a strict distinction between a single decision and all decisions from then on when acts of self-modification are possible, since self-modification changes all future decisions.
On reflection, I’m not sure bargaining theory undermines the point you were making. I do think it’s possible that one party or another would dominate the merger, depending on what the technologies of superintelligent war turn out to be and how much the utility functions of the participants care about near-term strife.
But the feasibility of converging towards merger seems like a relevant aspect of all of this.
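To make that dependence concrete, here is a minimal toy sketch of my own (not anything from the post): a two-party Nash bargaining model in which each party’s disagreement point is its expected payoff from fighting, so the merger split depends on both war prospects and on how much each party’s utility function penalises near-term strife. All the function names, parameters, and numbers below are made-up assumptions for illustration.

```python
# Toy Nash bargaining model of a "merger" between two agents.
# Hypothetical parameters:
#   p_win       - chance of prevailing in a conflict (a stand-in for
#                 whatever the technologies of superintelligent war favour)
#   strife_cost - how much the agent's utility function penalises
#                 near-term strife
# Utilities are linear in the share x of the future that agent 1 controls.

def disagreement_point(p_win: float, strife_cost: float) -> float:
    """Expected payoff of fighting instead of merging."""
    return p_win - strife_cost

def nash_merger_share(d1: float, d2: float) -> float:
    """Maximise (x - d1) * ((1 - x) - d2) over x in [0, 1].

    With linear utilities the Nash bargaining solution has a closed form,
    x* = (1 + d1 - d2) / 2, clipped to the feasible interval.
    """
    x = (1.0 + d1 - d2) / 2.0
    return min(max(x, 0.0), 1.0)

if __name__ == "__main__":
    # Symmetric war prospects, both sides dislike strife: an even merger.
    d1 = disagreement_point(p_win=0.5, strife_cost=0.2)
    d2 = disagreement_point(p_win=0.5, strife_cost=0.2)
    print(nash_merger_share(d1, d2))  # 0.5

    # Agent 1 expects to win a war and barely cares about near-term strife:
    # it dominates the merger without anyone having to fight.
    d1 = disagreement_point(p_win=0.8, strife_cost=0.1)
    d2 = disagreement_point(p_win=0.2, strife_cost=0.2)
    print(nash_merger_share(d1, d2))  # 0.85
```

The point of the sketch is just that the “weights” in a merged agent fall out of the disagreement points: better war technology raises your disagreement point, and caring a lot about near-term strife lowers it, so either can hand one party most of the future without a war actually happening.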
Transparency aids (or even suffices for) negotiation, but there won’t be much of a negotiation if, say, having nukes turns out to be a very weak bargaining chip and the power distribution is just about who gets nanotech[1] first or whatever, or if it turns out that human utility functions don’t care as much about loss of life in the near term as they do about owning the entirety of the future. I think the latter is very unlikely and the former is debatable.
[1] I don’t exactly believe in “nanotech”. I think materials science advances continuously, and that practical molecularly precise manufacturing will tend to look like various iterations of synthetic biology (you need a whole lot of little printer heads in order to make a large enough quantity of stuff to matter). There may be a threshold here, though, which we could call “DNA 2.0” or something: a form of life that uses stronger building blocks than amino acids.