Sure, it’s only because appellatives like “bastard” imply a person with a constant identity through time that we call someone who steals from other people’s pension funds a bastard, and someone who steals from his own pension fund stupid or akratic. If we shrank our view of identity to time-discrete agents making nanoeconomic transactions with future and past versions of themselves, we could call your premature pensioner a bastard; if we grew our view of identity to “all sentient beings,” we could call someone who steals from others’ pension funds stupid or akratic.
We could also call a left hand catching a coin tossed by the right hand a thief; or divide up a single person into multiple, competing agents in any number of other ways.
However, the choice of assigning a consistent identity to each person is not arbitrary. It’s fairly universal, and fairly well-motivated. Persons tend to be capable of replication, and capable of entering into enforceable contracts. None of the other agentic divisions—present/future self, left hand/right hand, or “all sentient beings”—shares these characteristics. And these characteristics are vitally important, because agents that possess them can outcompete others that vie for the same resources, leaving the preferences of those other agents near-completely unsatisfied.
So, that’s why LWers, with their pragmatic view toward rationality, aren’t eager to embrace a definition of “rationality” that leaves its adherents in the dustbin of history unless everyone else embraces it at the same time.
Pragmatic? khafra, possibly I interpreted the FAQ too literally. [“Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose.”] Whether in practice a conception of rationality that privileges a class of weaker preferences over stronger preferences will stand the test of time is clearly speculative. But if we’re discussing ideal, perfectly rational agents, or even crude approximations to them, then a compelling case can be made for an impartial and objective weighing of preferences instead.
You’re sticking pretty determinedly to “preferences” as something that can be weighed without considering the agent that holds/implements them. But this is prima facie not how preferences work—this is what I mean by “pragmatic.” If we imagine an ordering over agents by their ability to accomplish their goals, instead of by “rationality,” it’s clear that:
A preference held by no agents will only be satisfied by pure chance,
A preference held only by the weakest agent will only be satisfied if it is compatible with the preferences of the agents above it, and
By induction over the whole numbers, any agent’s preferences will only be satisfied to the extent that they’re compatible with the preferences of the agents above it.
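The induction above can be made concrete with a toy model. Everything here is hypothetical illustration: the agent names, the resources, and the assumption of a strict strongest-first ordering are mine, not a claim about real agents.

```python
# Toy model of the induction: agents claim exclusive resources in
# order of power. A weaker agent's preference is satisfied only if it
# is compatible with (i.e. left unclaimed by) every stronger agent.

def allocate(agents):
    """agents: list of (name, wanted_resources), strongest first.
    Returns a dict mapping each name to the resources it obtains."""
    taken = set()
    outcome = {}
    for name, wants in agents:
        got = {r for r in wants if r not in taken}
        taken |= got
        outcome[name] = got
    return outcome

agents = [
    ("Strong", {"acorn", "burrow"}),
    ("Middle", {"burrow", "stream"}),
    ("Weak",   {"acorn", "stream", "log"}),
]
result = allocate(agents)
# Strong obtains everything it wants; Middle only what Strong left
# over; Weak only what both stronger agents left over.
```

“Weak” ends up with only the single resource no stronger agent wanted, which is the induction step in miniature.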
As far as I can see, this leaves you with a trilemma:
There is no possible ordering over agents by ability to accomplish goals.
“Rationality” has negligible effect on ability to accomplish goals.
There exists some Omega-agent above all others, whose goals include fulfilling the preferences of weaker agents.
Branch 3 is theism. You seem to be aiming for a position in between branch 1 and branch 2; switching from one position to the other whenever someone attacks the weaknesses of your current position.
Edit: Whoops, also one more, which is the position you may actually hold:
4. Being above a certain, unspecified position in the ordering necessarily entails preferring the preferences of weaker agents. It’s obvious that not every agent has this quality of preferring the preferences of weaker agents; and I can’t see any mechanism whereby that preference for the preferences of weaker agents would be forced upon every agent above a certain position in the ordering except for the Omega-agent. So I think that mechanism is the specific thing you need to argue for, if this is actually your position.
Well, ‘khafra’ (if that is even your name), there are a couple caveats I must point out.
Consider two chipmunks living in the same forest, one of them mightier than the other (behold!). Each of them does his best to keep all the seeds to himself (just like the typical LW’er). Yet it does not follow that the mightier chipmunk is able to preclude his rival from gathering some seeds, his advantage notwithstanding.
Consider that for all practical purposes we rarely act in a truly closed system. You are painting a zero-sum game, with the agents’ habitat as an arena, an agent-eat-agent world in which truly following a single preference imposes on every aspect of the world. That’s true for Clippy, not for chipmunks or individual humans. Apart from rare, typically artificially constructed environments (e.g. games), there has always been a frontier to push—possibilities to evade other agents and find a niche that puts you beyond the grasp of other, mightier agents. The universe may be infinite or it may not; we don’t really need to care, since it’s open enough for us. An Omega could preclude us from fulfilling any preferences at all, but could a merely “stronger” agent? Doubtful, unless we’re introducing Omega in its more malicious variant, Clippy.
Agents may have competing preferences, but what matters isn’t only their ultima ratio, their maximal theoretical ability to enforce a specific preference, but just as much their actual willingness to do so—which is why the horn of the trilemma you state as “there is no possible ordering over agents by ability to accomplish goals” is too broad a statement. You may want some ice cream, but not at any cost.
As an example, Beau may wish to get some girl’s number, but does not highly prioritize it. He has a higher chance of achieving that goal (let’s assume the girl’s number is an exclusive resource with a binary semaphore, so no sharing of her number allowed) than Mordog The Terrible, if they valued that preference equally. However, in practice, if Beau didn’t invest much effort at all, while Mordog listened to the girl for hours (investing significant time, since he values the number more highly), the weaker agent may yet prevail. No one should ever read this example.
In conclusion, the ordering wouldn’t be total; there would be partial (in the colloquial sense) orderings for certain subsets of agents, and the elements of the ordering would be tuples of (agent, preference), without even taking into account temporal changes in power relations.
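Those (agent, preference) tuples do form a genuinely partial order. Here is a minimal sketch; the two axes (raw power, and willingness to spend effort on that particular preference) and the numbers are assumptions of the illustration, chosen to echo the Beau/Mordog example:

```python
# Sketch of a partial order over (agent, preference) entries.
# One entry outranks another only when it dominates on BOTH axes;
# otherwise the pair is simply incomparable.

def compare(a, b):
    """a, b: (power, effort) for one agent pursuing one preference.
    Returns 'a', 'b', or None when the pair is incomparable."""
    if a != b and a[0] >= b[0] and a[1] >= b[1]:
        return "a"
    if a != b and b[0] >= a[0] and b[1] >= a[1]:
        return "b"
    return None

beau   = (7, 1)   # stronger agent, little effort invested
mordog = (3, 9)   # weaker agent, hours of effort invested
verdict = compare(beau, mordog)
# Neither dominates the other on both axes, so the ordering
# stays silent: the pair is incomparable.
```

The `None` case is exactly what makes the order partial rather than total: some pairs of (agent, preference) entries simply have no rank relative to each other.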
I did try to make the structure of my argument compatible with a partial order; but you’re right—if you take an atomic preference to be something like “a marginal acorn” or “this girl’s number” instead of “the agent’s entire utility function,” we’ll need tuples.
As far as temporal changes go, we’re either considering you an agent who bargains with Kawoomba-tomorrow for well-restedness vs. staying on the internet long into the night—in which case there are no temporal changes—or we’re considering an agent to be the same over the entire span of its personhood, in which case it has a total getting-goals-accomplished rank; even if you can’t be certain what that rank is until it terminates.
Can we even compare utilons across agents? That is, how can we measure who fulfilled his utility function better, preferably in such a way that an agent with a nearly empty utility function wouldn’t win by default? Such a comparison would be needed to judge who fulfilled the sum of his/her/its preferences better, if we’d like to assign one single measure to such a complicated function. It may not even be computable, except perhaps in a CEV version.
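One concrete reason the comparison is ill-defined: a von Neumann–Morgenstern utility function is only determined up to a positive affine transformation, so a naive cross-agent sum can be flipped by an innocent rescaling. A minimal sketch (the agent names and numbers are arbitrary placeholders):

```python
# A utility function u and its affine transform a*u + b (a > 0)
# represent the SAME preferences, yet a naive cross-agent total
# changes its verdict depending on which representation we pick.

def total(utilities, scale=1.0, shift=0.0):
    """Sum the utilities after an affine rescaling."""
    return sum(scale * u + shift for u in utilities)

jill = [0.9, 0.8]   # outcomes Jill obtained, on her own scale
jane = [0.4, 0.3]   # outcomes Jane obtained, on her own scale

naive = total(jill) > total(jane)          # Jill "wins" this way
rescaled = total(jill) > total(jane, 10)   # Jane "wins" after an
                                           # equally valid rescaling
```

Since both representations of Jane’s preferences are equally legitimate, the naive sum carries no agent-neutral information—which is the computability-aside worry in miniature.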
Maybe a higher-up can chime in on that. What’s the best way to summon one, say his name thrice or just cry “I need an adult”?
The issue of how an ideal rational agent should act is indeed distinct from the issue of what mechanism could ensure we become ideal rational agents, impartially weighing the strength of preferences / interests regardless of the power of the subject of experience who holds them. Thus if we lived in a (human) slave-owning society, then as white slave-owners we might “pragmatically” choose to discount the preferences of black slaves from our ideal rational decision theory. After all, what is the point of impartially weighing the “preferences” of different subjects of experience without considering the agent that holds / implements them? For our Slaveowners’ Decision Theory FAQ, let’s pragmatically order over agents by their ability to accomplish their goals, instead of by “rationality.” And likewise today with captive nonhuman animals in our factory farms?

Hmmm....
regardless of the power of the subject of experience who holds them.
This is the part that makes the mechanism necessary. The “subject of experience” is also the agent capable of replication, and capable of entering into enforceable contracts. If there were no selection pressure on agents, rationality wouldn’t exist, there would be no reason for it. Since there is selection pressure on agents, they must shape themselves according to that pressure, or be replaced by replicators who will.
I don’t believe the average non-slave-owning member of today’s society is any more rational than the average 19th century plantation owner. It’s plausible that a plantation owner who started trying to fulfill the preferences of everyone on his plantation, giving them the same weight as his own preferences, would end up with more of his preferences fulfilled than the ones who simply tried to maximize cotton production—but that’s because humans are not naturally cotton maximizers, and humans do have a fairly strong drive to fulfill the preferences of other humans.
But that’s because we’re humans, not because we’re rational agents.
khafra, could you clarify? On your account, who in a slaveholding society is the ideal rational agent? Both Jill and Jane want a comfortable life. To keep things simple, let’s assume they are both meta-ethical anti-realists. Both Jill and Jane know their slaves have an even stronger preference to be free—albeit not a preference introspectively accessible to our two agents in question. Jill’s conception of ideal rational agency leads her impartially to satisfy the objectively stronger preferences and free her slaves. Jane, on the other hand, acknowledges their preference is stronger—but she allows her introspectively accessible but weaker preference to trump what she can’t directly access. After all, Jane reasons, her slaves have no mechanism to satisfy their stronger preference for freedom. In other words, are we dealing with ideal rational agency or realpolitik? Likewise with burger-eater Jane and Vegan Jill today.
On your account, who in a slaveholding society is the ideal rational agent?
The question is misleading, because humans have a very complicated set of goals which include a measure of egalitarianism. But the complexity of our goals is not a necessary component of our intelligence about fulfilling them, as far as we can tell. We could be just as clever and sophisticated about reaching much simpler goals.
let’s assume they are both meta-ethical anti-realists.
Don’t you have to be a moral realist to compare utilities across different agents?
her slaves have no mechanism to satisfy their stronger preference for freedom.
This is not the mechanism which I’ve been saying is necessary. The necessary mechanism is one which will connect a preference to the planning algorithms of a particular agent. For humans, that mechanism is natural selection, including kin selection; that’s what gave us the various ways in which we care about the preferences of others. For a designed-from-scratch agent like a paperclip maximizer, there is—by stipulation—no such mechanism.
Khafra, one doesn’t need to be a moral realist to give impartial weight to interests / preference strengths. Ideal rational agent Jill need no more be a moral realist in taking into consideration the stronger but introspectively inaccessible preferences of her slaves than she need be a moral realist taking into account the stronger but introspectively inaccessible preference of her namesake and distant successor Pensioner Jill not to be destitute in old age when weighing whether to raid her savings account. Ideal rationalist Jill does not mistake an epistemological limitation on her part for an ontological truth. Of course, in practice flesh-and-blood Jill may sometimes be akratic. But this, I think, is a separate issue.
I think the argument is more (5): A preference for rationality necessitates a preference for objectivity, in the light of which an agent will realise they are not objectively more important than others.