Well put. If you really do consist of different parts, each wanting different things, then your values should derive from a multi-agent consensus among your parts, not just an argmax over the values of the different parts.
In other words, this:
Something with a utility function, if it values an apple 1% more than an orange, if offered a million apple-or-orange choices, will choose a million apples and zero oranges. The division within most people into selfish and unselfish components is not like that, you cannot feed it all with unselfish choices whatever the ratio. Not unless you are a Keeper, maybe, who has made yourself sharper and more coherent
seems like a very limited way of looking at “coherence”. In the context of multi-agent negotiations, becoming “sharper and more coherent” should equate to having an internal consensus protocol that comes closer to the Pareto frontier of possible multi-agent equilibria.
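To make the quoted contrast concrete, here is a minimal sketch (the option names, the 1% preference, and the proportional rule are all illustrative assumptions, not anything from the quoted text): a single utility maximizer takes the argmax on every one of a million independent choices, while even a crude consensus rule that weights options by how much they are valued never starves either option entirely.

```python
# A utility maximizer: picks argmax(utility) on every one of
# n_choices independent apple-or-orange choices.
def argmax_chooser(utilities, n_choices):
    best = max(utilities, key=utilities.get)
    return {best: n_choices}

# A toy consensus rule: choices are split in proportion to how
# much each option is valued, so no option gets zero.
def proportional_chooser(weights, n_choices):
    total = sum(weights.values())
    return {opt: round(n_choices * w / total) for opt, w in weights.items()}

utilities = {"apple": 1.01, "orange": 1.00}  # a 1% preference
print(argmax_chooser(utilities, 1_000_000))        # a million apples, zero oranges
print(proportional_chooser(utilities, 1_000_000))  # roughly half and half
```

The point is that under pure argmax, the size of the preference gap is irrelevant: 1% ahead wins every single choice.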
Technically, “allocate all resources to a single agent” is a Pareto optimal distribution, but it’s only possible if a single agent has an enormously outsized influence on the decision-making process. A person for whom that is true would, I think, be incredibly deranged and obsessive. None of my parts aspire to create such a twisted internal landscape.
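A minimal sketch of that Pareto point, under an assumed toy setup (two parts splitting 10 units of influence, each part’s utility equal to its share): the “everything to one part” allocation really is on the Pareto frontier, yet a bargaining criterion like the Nash product picks a balanced point instead.

```python
# All ways to split 10 units of influence between two parts.
splits = [(x, 10 - x) for x in range(11)]

def pareto_optimal(point, points):
    """True if no other point is at least as good for both parts
    and strictly better for at least one."""
    return not any(
        q[0] >= point[0] and q[1] >= point[1] and q != point
        for q in points
    )

# "Everything to one part" sits on the Pareto frontier...
assert pareto_optimal((10, 0), splits)

# ...but the Nash bargaining solution (maximize the product of
# the parts' utilities) selects the balanced split instead.
nash = max(splits, key=lambda p: p[0] * p[1])
print(nash)  # (5, 5)
```

So “Pareto optimal” alone doesn’t distinguish a dictatorship from a fair bargain; the bargaining criterion does.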
I instead aspire to be the sort of person whose actions both reflect a broad consensus among my individual parts and effectively implement that consensus in the real world. Think results along the lines of the equilibrium that emerges from superrational agents exchanging influence, rather than some sort of “internal dictatorship” where one part infinitely dominates over all others.
So policy debates should appear one-sided? Wouldn’t a consensus protocol be “duller” in that it takes fewer actions than one that didn’t abide by a consensus?
The result about superrational agents was only demonstrated for superrational agents. That means agents which implement, essentially, the best of all possible decision theories. So they cooperate with each other and have all the non-contradictory properties that we want out of the best possible decision theory, even if we don’t currently know how to specify such a decision theory.
It’s a goal to aspire to, not a reality that’s already been achieved.
To simplify even further: the answer “Don’t have internal conflict” to “How do I deal with internal conflict?” is pretty much correct, and unhelpful.
I thought there was a line favoring consensus that approaches conflict situations along the lines of: “How do you deal with internal conflict?” “In conflict you are outside the Pareto frontier, so you should do nothing, as there is nothing mutually agreeable to be found,” or “cooperate to the extent that mutual agreement exists, and then do nothing past the point where true disagreement starts.”