[ epistemic status: I don’t agree with all the premises and some of the modeling, or the conclusions. But it’s hard to find one single crux. If this comment isn’t helpful, I’ll back off—feel free to rebut or disagree, but I may not comment further. ]
This seems to be mostly about voting, which is an extremely tiny part of group decision-making. It’s not used for anything really important (or if it is, the voting options are limited to a tiny subset of the potential behavior space). Even on that narrow topic, it switches from a fairly zoomed-out and incomplete causal story (the impulse for fairness) to a normative “this should be” stance, without much support for why it should be that way.
It’s also assuming a LOT more rationality and ability to change among humans, without an acknowledgement of the variance in ability and interest in doing so among current and historical populations. “Eventually, we’ll learn from the lapses” seems laughable. Humans do learn, both individually over short/medium terms, and culturally over generations. But we don’t learn fast enough for this to happen.
Voting is one example. Who gets “human rights” is another. A third is “who is included, with what weight, in the sum over well being in a utility function”. A fourth is “we’re learning human values to optimize them: who or what counts as human”? A fifth is economic fairness,

I listed all of these examples to try to point out that (as far as I can tell) pretty much any ethical system you build has some sort of similar definition problem of who or what counts, and how much. (Even paperclip maximizing has a similar problem of defining what does and doesn’t count as a paperclip.) I’m trying to discuss that problem as a general feature of designing ethical systems around human values, without being too specific about the details of the particular ethical system in question. So if I somehow gave the impression that this was just about who gets a vote, then no, that was intended as shorthand for this more general problem of defining a set or a summation.
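To make that “set or summation” framing concrete (a minimal sketch; the symbols are only illustrative, not something defined in the post): a welfare-aggregating utility function looks something like

$$W = \sum_{i \in S} w_i \, u_i$$

where $S$ is the set of beings that count, $w_i$ is the weight given to being $i$, and $u_i$ is its well-being. Every example above is a dispute about how $S$ and the $w_i$ get drawn, not about the aggregation itself.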
As for the level of rationality, for the most part, I’m discussing high-tech future societies that include not just humans but also AIs, some of them superhuman. So yes, I’m assuming more rationality than typical for current purely-human societies. And yes, I’m also trying to apply the methods of rationality, or at least engineering design, to an area that has generally been dominated by politics, idealism, religion, and status-seeking. Less Wrong seemed like a reasonable place to attempt that.
Voting is one example. Who gets “human rights” is another. A third is “who is included, with what weight, in the sum over well being in a utility function”. A fourth is “we’re learning human values to optimize them: who or what counts as human”? A fifth is economic fairness,
I think voting is the only one with fairly simple observable implementations. The others (well, and voting, too) are all messy enough that it’s pretty tenuous to draw conclusions from them, especially without noting all the exceptions and historical violence that led to the current state (which may or may not be an equilibrium, and it may or may not be possible to list the opposing forces that create that equilibrium).
I think the biggest piece missing from these predictions/analyses/recommendations is an acknowledgement of the misalignment and variance in capabilities of existing humans. All current social systems are in tension—people struggling and striving in both cooperation and competition. The latter component is brutal and real, and it gets somewhat sublimated with wealth, but doesn’t go away.
I make that point at length in Part 3 of the sequence.