Voting is one example. Who gets “human rights” is another. A third is “who is included, with what weight, in the sum over well-being in a utility function”. A fourth is “we’re learning human values to optimize them: who or what counts as human”? A fifth is economic fairness. I listed all of these examples to try to point out that (as far as I can tell) pretty much any ethical system you build has some sort of similar definition problem of who or what counts, and how much. (Even paperclip maximizing has a similar problem of defining what does and doesn’t count as a paperclip.) I’m trying to discuss that problem as a general feature of designing ethical systems around human values, without being too specific about the details of the particular ethical system in question. So if I somehow gave the impression that this was just about who gets a vote, then no: that was intended as shorthand for this more general problem of defining a set or a summation.
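To make the “defining a set or a summation” point concrete, here is a minimal sketch in my own notation (not drawn from any particular ethical theory): a welfare sum is only fully specified once you say who is in it and with what weight,

$$U = \sum_{i \in S} w_i \, u_i,$$

where $S$ is the set of who or what counts, $w_i$ is the weight given to member $i$, and $u_i$ is $i$’s well-being. Every example above is, in this form, a dispute over the choice of $S$ and the $w_i$, not over the arithmetic.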
As for the level of rationality: for the most part, I’m discussing high-tech future societies that include not just humans but also AIs, some of them superhuman. So yes, I’m assuming more rationality than is typical of current purely-human societies. And yes, I’m also trying to apply the methods of rationality, or at least of engineering design, to an area that has generally been dominated by politics, idealism, religion, and status-seeking. Less Wrong seemed like a reasonable place to attempt that.
Voting is one example. Who gets “human rights” is another. A third is “who is included, with what weight, in the sum over well-being in a utility function”. A fourth is “we’re learning human values to optimize them: who or what counts as human”? A fifth is economic fairness.
I think voting is the only one with fairly simple observable implementations. The others (well, and voting too) are all messy enough that it’s pretty tenuous to draw conclusions from them, especially without noting all the exceptions and historical violence that led to the current state (which may or may not be an equilibrium, and it may or may not be possible to list the opposing forces that create that equilibrium).
I think the biggest piece missing from these predictions/analyses/recommendations is an acknowledgement of the misalignment and variance in capabilities among existing humans. All current social systems are in tension: people struggle and strive in both cooperation and competition. The competitive component is brutal and real; it gets somewhat sublimated by wealth, but it doesn’t go away.
I make that point at length in Part 3 of the sequence.