A remark that seems sufficiently distinct to deserve its own comment. At this moment we are only thinking about delegates with “fixed personalities”. Should the “personality” of a delegate be “recalculated”[1] after each new agreement/trade[2]? The changes would be temporary, applying only within the context of a given set of bills; the delegates would revert to their original “personalities” after the vote. Maybe this could give results vaguely analogous to smoothing a function? It would allow us to have a kind of “persuasion”.
In the context of my comment above, this could enable taking utility differences into account, and not just their signs, on the assumption that large differences in utility would usually require large changes (and therefore, usually more than one change) in “personality” to invert the sign. I admit that this is very handwavy.
[1] I do not know what interpolation algorithm should be used.
[2] A second remark. Maybe delegates should trade changes in each other’s “personality” rather than the votes themselves, i.e. instead of promising to vote on bills in accordance with some binding agreement, they would promise to perform the minimal possible non-ad-hoc change[3] to their personalities that would make them vote that way? However, this could create slippery slopes, similar to those mentioned here.
[3] This is probably a hard problem.
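To make this slightly less handwavy, here is a minimal Python sketch of the kind of negotiation round I have in mind, assuming a delegate’s “personality” is just a mapping from world-states to utilities. All names are my own placeholders, and blend() is an arbitrary stand-in for the unknown interpolation from [1]:

```python
# Toy sketch of one negotiation round with temporary "personality" changes.
# Each delegate starts from the utility function given by its moral theory,
# gets nudged after every agreement/trade, and reverts after the vote.
from copy import deepcopy

def blend(utility, target, alpha=0.1):
    # Placeholder interpolation: move a utility function a small step toward
    # another one. Per [1], I don't know what the right scheme actually is.
    return {state: (1 - alpha) * u + alpha * target[state]
            for state, u in utility.items()}

def negotiation_round(delegates, trades, run_vote):
    originals = deepcopy(delegates)        # remember the fixed personalities
    for name_a, name_b in trades:          # each new agreement/trade...
        delegates[name_a] = blend(delegates[name_a], delegates[name_b])
        delegates[name_b] = blend(delegates[name_b], delegates[name_a])
    outcome = run_vote(delegates)          # vote with the modified personalities
    delegates.update(originals)            # ...then revert after the vote
    return outcome
```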
It seems to me that the less personal the MPs are, and the fewer opportunities we allow for anthropomorphic persuasion between them (through appeals such as issue framing, pleading, signaling loyalty to a coalition, ingratiation, defamation, challenges to an MP’s status, or deceit (e.g. unreliable statements by MPs about their private information relevant to the probable consequences of acts resulting from the passage of bills)), the more fully we encapsulate away the hard problems of moral reasoning within the MPs.
Even persuasive mechanisms more amenable to formalization—like agreements between MPs to reallocate their computational resources, or like risk-sharing agreements between MPs based on their expectations that they might lose future influence in the parliament if the agent changes its assignment of probabilities to the MPs’ moral correctness based on its observation of decision consequences—even these sound to me, in the absence of reasons why they should appear in a theory of how to act given a distribution over self-contained moral theories, like complications that will impede crisp mathematical reasoning, introduced mainly for their similarity to the mechanisms that function in real human parliaments.
Or am I off base, and your scare quotes around “personality” mean that you’re talking about something else? Because what I’m picturing is basically someone building cognitive machinery for emotions, concepts, habits and styles of thinking, et cetera, on top of moral theories.
Well, I agree that I chose my words badly and then didn’t explain the intended meaning, continuing to speak in metaphors (my writing skills are seriously lacking). What I called the “personality” of a delegate was a function that assigns a utility score to any given state of the world (at the beginning these functions are determined by the moral theories). In my first post I thought of these utility functions as constants that stayed fixed throughout the negotiation process (it was my impression that ESRogs’ 3rd assumption implicitly says basically the same thing), with delegates perhaps accepting some binding agreements if they help to increase expected utility (these agreements are not treated as part of the utility function; they are ad hoc).
On the other hand, what if we drop the assumption that these utility functions stay constant? What if, e.g., when two delegates meet, instead of exchanging binding agreements to vote in a specific way, they exchanged agreements to self-modify in a way that corresponds to those agreements? I.e. suppose a delegate M_1 strongly prefers an option O_1,1 to an option O_1,2 on an issue B_1 and slightly prefers O_2,1 to O_2,2 on an issue B_2, whereas a delegate M_2 strongly prefers O_2,2 to O_2,1 on B_2 and slightly prefers O_1,2 to O_1,1 on B_1. Now, M_1 could agree to vote (O_1,1; O_2,2) in exchange for a promise that M_2 would vote the same way, and sign a binding agreement. Alternatively, M_1 could agree to self-modify to slightly prefer O_2,2 to O_2,1 in exchange for a promise that M_2 would self-modify to slightly prefer O_1,1 to O_1,2. (Both want to self-modify as little as possible, though any modification that is not ad hoc would probably affect the utility function at more than one point. Self-modification here is restricted (only the utility function is modified), so maybe it wouldn’t require heavy machinery, though I am not sure; besides, all of these utility functions ultimately belong to the same person.) These self-modifications are not binding agreements; delegates are allowed to further self-modify their “personalities” (i.e. utility functions) in later exchanges.
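As a sanity check on the example above, here is a small numerical sketch. The utility numbers and the size of the modification are made up; the only point is that flipping the sign of a weak preference requires a smaller change than flipping a strong one:

```python
# Two delegates, two issues, two options each, with made-up utility margins.
# u[issue] = U(option 1) - U(option 2); a positive margin means the delegate
# prefers option 1 on that issue.
m1 = {"B1": +10.0, "B2": +1.0}   # M_1: strong preference on B1, weak on B2
m2 = {"B1": -1.0,  "B2": -10.0}  # M_2: weak on B1, strong on B2

def vote(margins):
    """Return the option (1 or 2) a delegate votes for on each issue."""
    return {issue: 1 if margin > 0 else 2 for issue, margin in margins.items()}

# Option A: a binding vote trade. Both simply promise to vote (O_1,1; O_2,2),
# regardless of what their utility functions say about B2 and B1 respectively.

# Option B: minimal self-modification. Each delegate shifts its utility on the
# issue it cares about weakly, just enough to flip the sign of that margin.
delta = 1.5   # made-up size of the smallest admissible non-ad-hoc change
m1_mod = {**m1, "B2": m1["B2"] - delta}   # M_1 now slightly prefers O_2,2
m2_mod = {**m2, "B1": m2["B1"] + delta}   # M_2 now slightly prefers O_1,1

assert vote(m1_mod) == vote(m2_mod) == {"B1": 1, "B2": 2}
# Both now vote (O_1,1; O_2,2) out of their modified utility functions, and
# remain free to modify further in later exchanges.
```

The hard part, as in [3], is defining what counts as the minimal non-ad-hoc change to the full utility function that produces these sign flips.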
Now, this idea vaguely reminds me of smoothing over the space of all possible utility functions. Metaphorically, it looks as if delegates were “persuaded” to change their “personalities”, their “opinions about things” (i.e. utility functions), by an “argument” (i.e. an exchange).
I would guess these self-modifying delegates should be used as dummy variables during a finite negotiation process. After the vote, delegates would revert to their original utility functions.