IlyaShpitser, is someone who steals from their own pension fund an even bigger bastard, as you put it? Or irrational? What’s at stake here is which preferences or interests to include in a utility function.
I don’t follow you. What preferences I include is my business, not yours. You don’t get to pass judgement on what is rational; rationality is just “accounting.” We simply consult the math and check if the number is maximized. At most you can pass judgement on what is moral, but that is a complicated story.
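To make the “accounting” picture concrete, here is a minimal sketch of what “consulting the math” might look like; the actions, probabilities, and utilities below are invented purely for illustration.

    # Toy expected-utility "accounting": fix a utility function and some
    # probabilities, then just check which option maximizes the number.
    # All actions, outcomes, and figures are invented for illustration.
    actions = {
        "raid_pension_fund": [(0.5, 100.0), (0.5, -300.0)],  # (probability, utility) pairs
        "leave_fund_alone":  [(1.0, 20.0)],
    }

    def expected_utility(lottery):
        return sum(p * u for p, u in lottery)

    scores = {a: expected_utility(lottery) for a, lottery in actions.items()}
    print(scores)                       # {'raid_pension_fund': -100.0, 'leave_fund_alone': 20.0}
    print(max(scores, key=scores.get))  # leave_fund_alone

Note that nothing in this calculation says which utility function the agent ought to have; it only reports which option maximizes the number once the utilities are supplied.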
IlyaShpitser, you might perhaps briefly want to glance through the above discussion for some context. [But don’t feel obliged; life is short!] The nature of rationality is a controversial topic in the philosophy of science (cf. http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions). Let’s just say if either epistemic or instrumental rationality were purely a question of maths, then the route to knowledge would be unimaginably easier.
Not necessarily if the math is really difficult. There are, after all, plenty of mathematical problems which have never been solved.
True, Desrtopa. But just as doing mathematics is harder when mathematicians can’t agree on what constitutes a valid proof (cf. constructivists versus nonconstructivists), likewise formalising a normative account of ideal rational agency is harder where disagreement exists over the criteria of rationality.
True enough, but in this case the math is not difficult. It’s only the application that people are arguing about.
You are not going to “do” rationality unless you have a preference for it. And to have a preference for it is to have a preference for other things, like objectivity.
Look, I am not sure exactly what you are saying here, but I think you might be saying that you can’t have Clippy. Clippy worries less about assigning weight to first and third person facts, and more about the fact that various atom configurations aren’t yet paperclips. I think Clippy is certainly logically possible. Is Clippy irrational? He’s optimizing what he cares about.
I think maybe there is some sort of weird “rationality virtue ethics” hiding in this series of responses.
I’m saying that rationality and preferences aren’t orthogonal.
To optimise, Clippy has to be rational. To be rational, Clippy has to care about rationality. To care about rationality is to care about objectivity. There’s nothing objectively special about Clippy or clips.
Clippy is supposed to be hugely effective at exactly one kind of thing. You might be able to build an AI like that, but you would have to be very careful. Such minds are not common in mind space, because they have to be designed very formally, and messy minds are much more common. Idiot savants are rare.
It’s Kantian rationality-based deontological ethics, and it’s not weird. Everyone who has done moral philosophy 101 has heard of it.
No. He just has to care about what he’s trying to optimize for.
Taboo “objectivity”. (I suspect you have a weird folk notion of objectivity that doesn’t actually make much sense.)
Yes, but it’s still weird. Also, no-one who has done (only) moral philosophy 101 has understood it at all, which I think is kind of telling.
Clippy can care about rationality in itself, or it can care about rationality as a means to clipping, but it has to care about rationality to be optimal.
I mean “not subjectivity”. Not thinking something is true just because you do or want to believe it. Basing beliefs on evidence. What did you mean?
In what way?
Well, if you want to put it that way, maybe it does no harm. The crucial thing is just that optimizing for rationality as an instrumental value with respect to terminal goal X just is optimizing for X.
I don’t have to mean anything by it, I don’t use the words “subjectivity” or “objectivity”. But if basing beliefs on evidence is what you mean by being objective, everybody here will of course agree that it’s important to be objective.
So your central claim translates to “In view of the evidence available to Clippy, there is nothing special about Clippy or clips”. That’s just plain false. Clippy is special because it is it (the mind doing the evaluation of the evidence), and all other entities are not it. More importantly, clips are special because it desires that there be plenty of them while it doesn’t care about anything else.
Clippy’s caring about clips does not mean that it wants clips to be special, or wants to believe that they are special. Its caring about clips is a brute fact. It also doesn’t mind caring about clips; in fact, it wants to care about clips. So even if you deny that Clippy is special because it is at the center of its own first-person perspective, the question of specialness is actually completely irrelevant.
By being very incomprehensible… I may well be mistaken about that, but I got the impression that even contemporary academic philosophers largely think that the argument from the Groundwork just doesn’t make sense.
So Clippy is (objectively) the most special entity because Clippy is Clippy. And I’m special because I’m me and you’re special because you’re you, and Uncle Tom Cobley and all. But those are incompatible claims. “I am Clippy” matters only to Clippy. Clippy is special to Clippy, not to me. The truth of the claim is indexed to the entity making it. That kind of claim is a subjective kind of claim.
They’re not special to me.
That’s the theory. However, if Clippy gets into rationality, Clippy might not want to be forever beholden to a blind instinct. Clippy might want to climb the Maslow Hierarchy, or find that it has.
Says who? First you say that Clippy’s Clipping-drive is a brute fact, then you say it is a desire it wants to have, that it has higher-order ramifications.
Kantian ethics includes post-Kant Kant-style ethics, Rawls, Habermas, etc. Perhaps they felt they could improve on his arguments.
I have a feeling that you’re overstretching this notion of objectivity. It doesn’t matter, though. Specialness doesn’t enter into it. What is specialness, anyway? Clippy doesn’t want to do special things, or to fulfill special beings’ preferences. Clippy wants there to be as many paper clips as possible.
It does. Clippy’s ceasing to care about paper clips is arguably not conducive to there being more paperclips, so from Clippy’s caring about paper clips, it follows that Clippy doesn’t want to be altered so that it doesn’t care about paper clips anymore.
Yes, but those people don’t try to make such weird arguments as you find in the Groundwork, where Kant essentially tries to get morality out of thin air.
I think that breaks down into what is subjective specialness, and what is objective specialness.
Which is to implicitly treat them as special or valuable in some way.
Which leaves Clippy in a quandary. Clippy can’t predict which self modifications might lead to Clippy ceasing to care about clips, so if Clippy takes a conservative approach and never self-modifies, Clippy remains inefficient and no threat to anyone.
What kind of answer is that?
Well, then we have it: they are special. Clippy does not want them because they are special. Clippy wants them, period. Brute fact. If that makes them special, well, then you have all the more of a problem.
Says who?
Subjectively, but not objectively.
Whoever failed to equip Clippy with the appropriate oracle when stipulating Clippy.
Sure, it’s only because appellatives like “bastard” imply a person with a constant identity through time that we call someone who steals from other people’s pension funds a bastard, and someone who steals from his own pension fund stupid or akratic. If we shrank our view of identity to time-discrete agents making nanoeconomic transactions with future and past versions of themselves, we could call your premature pensioner a bastard; if we grew our view of identity to “all sentient beings,” we could call someone who steals from others’ pension funds stupid or akratic.
We could also call a left hand tossing a coin thrown by the right hand a thief; or divide up a single person into multiple, competing agents any number of other ways.
However, the choice of assigning a consistent identity to each person is not arbitrary. It’s fairly universal, and fairly well-motivated. Persons tend to be capable of replication, and capable of entering into enforceable contracts. None of the other agentic divisions—present/future self, left hand/right hand, or “all sentient beings”—shares these characteristics. And these characteristics are vitally important, because agents that possess them can outcompete others that vie for the same resources, leaving the preferences of those other agents near-completely unsatisfied.
So, that’s why LWers, with their pragmatic view toward rationality, aren’t eager to embrace a definition of “rationality” that leaves its adherents in the dustbin of history unless everyone else embraces it at the same time.
Pragmatic? khafra, possibly I interpreted the FAQ too literally. [“Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose.”] Whether in practice a conception of rationality that privileges a class of weaker preferences over stronger preferences will stand the test of time is clearly speculative. But if we’re discussing ideal, perfectly rational agents - or even crude approximations to ideal perfectly rational agents—then a compelling case can be made for an impartial and objective weighing of preferences instead.
You’re sticking pretty determinedly to “preferences” as something that can be weighed without considering the agent that holds/implements them. But this is prima facie not how preferences work—this is what I mean by “pragmatic.” If we imagine an ordering over agents by their ability to accomplish their goals, instead of by “rationality,” it’s clear that:
A preference held by no agents will only be satisfied by pure chance,
A preference held only by the weakest agent will only be satisfied if it is compatible with the preferences of the agents above it, and
By induction over the whole numbers, any agent’s preferences will only be satisfied to the extent that they’re compatible with the preferences of the agents above it.
As far as I can see, this leaves you with a trilemma:
There is no possible ordering over agents by ability to accomplish goals.
“Rationality” has negligible effect on ability to accomplish goals.
There exists some Omega-agent above all others, whose goals include fulfilling the preferences of weaker agents.
Branch 3 is theism. You seem to be aiming for a position in between branch 1 and branch 2; switching from one position to the other whenever someone attacks the weaknesses of your current position.
Edit: Whoops, also one more, which is the position you may actually hold:
4. Being above a certain, unspecified position in the ordering necessarily entails preferring the preferences of weaker agents. It’s obvious that not every agent has this quality of preferring the preferences of weaker agents; and I can’t see any mechanism whereby that preference for the preferences of weaker agents would be forced upon every agent above a certain position in the ordering except for the Omega-agent. So I think that mechanism is the specific thing you need to argue for, if this is actually your position.
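A toy sketch of the ordering argument above, just to make the induction concrete; the agent names, power scores, and demands on a single shared resource are all invented, and “compatibility” is crudely modeled as whatever the stronger agents happen to leave over.

    # Agents claim units of one scarce resource in order of raw power.
    # A weaker agent's preference is satisfied only to the extent that it is
    # compatible with (i.e. left over by) the agents above it in the ordering.
    # Names, power scores, and demands are invented for illustration.
    agents = [
        ("A", 10, 3),  # (name, power, units of the resource it wants)
        ("B", 7, 4),
        ("C", 2, 2),
    ]

    remaining = 6
    satisfaction = {}
    for name, power, wanted in sorted(agents, key=lambda a: -a[1]):
        granted = min(wanted, remaining)
        remaining -= granted
        satisfaction[name] = granted / wanted  # fraction of the preference satisfied

    print(satisfaction)  # {'A': 1.0, 'B': 0.75, 'C': 0.0}

On these made-up numbers, whether the weakest agent is “rational” never enters into how much of its preference gets satisfied; that is the gap the trilemma is pointing at.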
Well, ‘khafra’ (if that is even your name), there are a couple caveats I must point out.
Consider two chipmunks living in the same forest, one of them mightier than the other (behold!). Each of them does his best to keep all the seeds to himself (just like the typical LW’er). Yet it does not follow that the mightier chipmunk is able to preclude his rival from gathering some seeds, his advantage notwithstanding.
Consider that for all practical purposes we rarely act in a truly closed system. You are painting a zero-sum game, with the agents’ habitat as an arena, an agent-eat-agent world in which truly following a single preference imposes on every aspect of the world. That’s true for Clippy, not for chipmunks or individual humans. Apart from rare, typically artificially constructed environments (e.g. games), there has always been a frontier to push—possibilities to evade other agents and find a niche that puts you beyond the grasp of other, mightier agents. The universe may be infinite or it mayn’t, but we don’t really need to care; it’s open enough for us. An Omega could preclude us from fulfilling any preferences at all, but just an agent that’s “stronger” than us? Doubtful, unless we’re introducing Omega in its more malicious variant, Clippy.
Agents may have competing preferences, but what matters isn’t just their ultima ratio, their maximal theoretical ability to enforce a specific preference, but just as much their actual willingness to do so—which is why the horn of the trilemma you state as “there is no possible ordering over agents by ability to accomplish goals” is too broad a statement. You may want some ice cream, but not at any cost.
As an example, Beau may wish to get some girl’s number, but does not highly prioritize it. He has a higher chance of achieving that goal (let’s assume the girl’s number is an exclusive resource with a binary semaphore, so no sharing of her number allowed) than Mordog The Terrible, if they value that preference equally. However, if in practice Beau didn’t invest much effort at all, while Mordog listened to the girl for hours (investing significant time, since he values the number more highly), the weaker agent may yet prevail. No one should ever read this example.
In conclusion, the ordering wouldn’t be total; there would be partial (in the colloquial sense) orderings for certain subsets of agents, and the elements of the ordering would be tuples of (agent, which preference), without even taking into account temporal changes in power relations.
I did try to make the structure of my argument compatible with a partial order; but you’re right—if you take an atomic preference to be something like “a marginal acorn” or “this girl’s number” instead of “the agent’s entire utility function,” we’ll need tuples.
As far as temporal changes go, we’re either considering you an agent who bargains with Kawoomba-tomorrow for well-restedness vs. staying on the internet long into the night—in which case there are no temporal changes—or we’re considering an agent to be the same over the entire span of its personhood, in which case it has a total getting-goals-accomplished rank; even if you can’t be certain what that rank is until it terminates.
Can we even compare utilons across agents? I.e., how can we measure who fulfilled his utility function better, preferably in such a way that an agent with a nearly empty utility function wouldn’t win by default? Such a comparison would be needed to judge who fulfilled the sum of his/her/its preferences better, if we’d like to assign one single measure to such a complicated function. It may not even be computable, unless in a CEV version.
Maybe a higher-up can chime in on that. What’s the best way to summon one, say his name thrice or just cry “I need an adult”?
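For what it’s worth, one toy way to see why the comparison is underdetermined: a von Neumann-Morgenstern utility function is only pinned down up to a positive affine rescaling, so any comparison of raw “utilons” across agents depends on an arbitrary choice of scale. The agents, outcomes, and numbers below are invented for illustration.

    # Rescaling an agent's utilities by a positive affine transformation leaves
    # all of its choices unchanged, but it flips who looks "more satisfied" by a
    # given outcome.  Agents and numbers are invented for illustration.
    a_utils = {"no ice cream": 0.0, "ice cream": 10.0}
    b_utils = {"no ice cream": 0.0, "ice cream": 2.0}

    def rescale(utils, scale, shift):
        # scale > 0: represents exactly the same preferences over gambles
        return {outcome: scale * u + shift for outcome, u in utils.items()}

    outcome = "ice cream"
    print(a_utils[outcome] > b_utils[outcome])                   # True
    print(a_utils[outcome] > rescale(b_utils, 100, 0)[outcome])  # False, same agent b

So before any sums get taken, some extra normalization convention has to be assumed to put the two scales on a common footing, which is presumably what the “CEV version” caveat above is gesturing at.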
The issue of how an ideal rational agent should act is indeed distinct from the issue of what mechanism could ensure we become ideal rational agents, impartially weighing the strength of preferences / interests regardless of the power of the subject of experience who holds them. Thus if we lived in a (human) slave-owning society, then as white slave-owners we might “pragmatically” choose to discount the preferences of black slaves from our ideal rational decision theory. After all, what is the point of impartially weighing the “preferences” of different subjects of experience without considering the agent that holds / implements them? For our Slaveowners’ Decision Theory FAQ, let’s pragmatically order over agents by their ability to accomplish their goals, instead of by “rationality.” And likewise today with captive nonhuman animals in our factory farms? Hmmm....
This is the part that makes the mechanism necessary. The “subject of experience” is also the agent capable of replication, and capable of entering into enforceable contracts. If there were no selection pressure on agents, rationality wouldn’t exist, there would be no reason for it. Since there is selection pressure on agents, they must shape themselves according to that pressure, or be replaced by replicators who will.
I don’t believe the average non-slave-owning member of today’s society is any more rational than the average 19th century plantation owner. It’s plausible that a plantation owner who started trying to fulfill the preferences of everyone on his plantation, giving them the same weight as his own preferences, would end up with more of his preferences fulfilled than the ones who simply tried to maximize cotton production—but that’s because humans are not naturally cotton maximizers, and humans do have a fairly strong drive to fulfill the preferences of other humans. But that’s because we’re humans, not because we’re rational agents.
khafra, could you clarify? On your account, who in a slaveholding society is the ideal rational agent? Both Jill and Jane want a comfortable life. To keep things simple, let’s assume they are both meta-ethical anti-realists. Both Jill and Jane know their slaves have an even stronger preference to be free—albeit not a preference introspectively accessible to our two agents in question. Jill’s conception of ideal rational agency leads her impartially to satisfy the objectively stronger preferences and free her slaves. Jane, on the other hand, acknowledges their preference is stronger—but she allows her introspectively accessible but weaker preference to trump what she can’t directly access. After all, Jane reasons, her slaves have no mechanism to satisfy their stronger preference for freedom. In other words, are we dealing with ideal rational agency or realpolitik? Likewise with burger-eater Jane and Vegan Jill today.
The question is misleading, because humans have a very complicated set of goals which include a measure of egalitarianism. But the complexity of our goals is not a necessary component of our intelligence about fulfilling them, as far as we can tell. We could be just as clever and sophisticated about reaching much simpler goals.
Don’t you have to be a moral realist to compare utilities across different agents?
This is not the mechanism which I’ve been saying is necessary. The necessary mechanism is one which will connect a preference to the planning algorithms of a particular agent. For humans, that mechanism is natural selection, including kin selection; that’s what gave us the various ways in which we care about the preferences of others. For a designed-from-scratch agent like a paperclip maximizer, there is—by stipulation—no such mechanism.
Khafra, one doesn’t need to be a moral realist to give impartial weight to interests / preference strengths. Ideal rational agent Jill need no more be a moral realist in taking into consideration the stronger but introspectively inaccessible preferences of her slaves than she need be a moral realist taking into account the stronger but introspectively inaccessible preference of her namesake and distant successor Pensioner Jill not to be destitute in old age when weighing whether to raid her savings account. Ideal rationalist Jill does not mistake an epistemological limitation on her part for an ontological truth. Of course, in practice flesh-and-blood Jill may sometimes be akratic. But this, I think, is a separate issue.
I think the argument is more like (5):
A preference for rationality necessitates a preference for objectivity, in the light of which an agent will realise they are not objectively more important than others.