Tim, an ideally rational embodied agent may prefer no suffering to exist outside her cosmological horizon; but she is not rationally constrained to take such suffering—or the notional preferences of sentients in other Hubble volumes—into consideration before acting. This is because nothing she does as an embodied agent will affect such beings. By contrast, the interests and preferences of local sentients fall within the scope of embodied agency. Jane must decide whether the vividness and immediacy of her preference for a burger, when compared to the stronger but dimly grasped preference of a terrified cow not to have her throat slit, disclose some deep ontological truth about the world or a mere epistemological limitation. If she’s an ideal rational agent, she’ll recognise the latter and act accordingly.
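A toy sketch may make Jane’s predicament concrete. The option names, salience scores and preference strengths below are stipulated purely for illustration; nothing in the exchange fixes the actual numbers.

```python
# Hypothetical rendering of Jane's choice. An egocentric chooser ranks options
# by how vividly *she* feels each preference; an impartial chooser weights
# every affected preference by its actual strength.

options = [
    {"name": "eat the burger",
     "own_salience": 8,                    # vivid, immediate craving
     "preference_strengths": [8, -100]},   # Jane: +8; the terrified cow: -100
    {"name": "abstain",
     "own_salience": 2,                    # pallid and easily ignored
     "preference_strengths": [2, 0]},      # Jane: +2; the cow: unaffected
]

def egocentric_choice(opts):
    """Pick whatever feels most vivid from the agent's own point of view."""
    return max(opts, key=lambda o: o["own_salience"])

def impartial_choice(opts):
    """Weight every affected preference by its strength, whoever holds it."""
    return max(opts, key=lambda o: sum(o["preference_strengths"]))

print(egocentric_choice(options)["name"])  # -> eat the burger
print(impartial_choice(options)["name"])   # -> abstain
```

The two rules diverge exactly where salience and strength come apart, which is the “mere epistemological limitation” at issue.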
The issue isn’t just about things beyond cosmological horizons; distance matters at every scale. I can help my neighbour more easily than I can help someone half-way around the world, because distance imposes costs on sensory and motor signal propagation. For example, I can give my neighbour 10 bucks and be pretty sure that they will receive it (see the sketch after this comment).
Of course, there are also other, more important reasons why real agents don’t respect the preferences of others: egocentricity owes more to evolution than to simple physics.
Lastly, I still don’t think you can hope to use the term “rational” in this way. It sounds to me as though you’re talking about some kind of supermorality. “Rationality” already means something quite different.
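For what it’s worth, Tim’s $10 point can be cast as a toy expected-value calculation. The delivery probabilities and transfer overheads below are invented; only the structure of the comparison matters.

```python
# Hypothetical cost-of-distance model: the same $10 gift, discounted by the
# chance it actually arrives and by the overhead of getting it there.

def expected_benefit(amount, p_delivery, overhead):
    """Expected sum received, net of transfer costs and delivery risk."""
    return p_delivery * (amount - overhead)

print(expected_benefit(10.0, p_delivery=0.99, overhead=0.00))  # neighbour: 9.90
print(expected_benefit(10.0, p_delivery=0.90, overhead=2.50))  # far away:  6.75
```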
Tim, all the above is indeed relevant to the decisions taken by an idealised rational agent. I just think a solipsistic conception of rational choice is irrational and unscientific. Yes, as you say, natural selection goes a long way to explaining our egocentricity. But just because evolution has hardwired a fitness-enhancing illusion doesn’t mean we should endorse the egocentric conception of rational decision-making that illusion promotes. Adoption of a God’s-eye-view does entail a different conception of rational choice.
“I just think a solipsistic conception of rational choice is irrational and unscientific.”
Surely that grossly mischaracterises the position you are arguing against. Egoists don’t deny that other agents have minds; they just care more about themselves than about others.
“But just because evolution has hardwired a fitness-enhancing illusion doesn’t mean we should endorse the egocentric conception of rational decision-making that illusion promotes.”
Again, this seems like very prejudicial wording. Egoists aren’t under “a fitness-enhancing illusion”. Illusions involve distortion of the contents of the senses during perception. Nothing like that is involved in egoism.
There are indeed all sorts of specific illusions, for example mirages. But natural selection has engineered a generic illusion that maximised the inclusive fitness of our genes in the ancestral environment. This illusion is that one is located at the centre of the universe. I live in a DP-centred virtual world focused on one particular body-image, just as you live in a TT-centred virtual world focused on a different body-image. I can’t think of any better way to describe this design feature of our minds than as an illusion. No doubt an impartial view from nowhere, stripped of distortions of perspective, would be genetically maladaptive on the African savanna. But this doesn’t mean we need to retain the primitive conception of rational agency that such systematic bias naturally promotes.
Notice that a first-person perspective doesn’t necessarily have much to do with adaptations or evolution. If you build a robot, it too is at the centre of its world, simply because that’s where its sensors and actuators are. This makes maximising inclusive fitness seem like a bit of a side issue.
Calling what is essentially a product of locality an “illusion” still seems very odd to me. We really are at the centre of our own perspectives on the world. That isn’t an illusion; it’s simply a fact.
There’s a huge difference between the descriptive centre-of-the-world and the evaluative centre-of-the-world. The most altruistic person still literally sees everything from their own geometrical perspective.
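One way to see the distinction is to extend Tim’s robot in code. In this hypothetical sketch, perception is unavoidably indexed to the robot’s location (the descriptive centre), while the value function weights everyone’s welfare equally (no evaluative centre); all names and numbers are invented.

```python
import math

class Robot:
    def __init__(self, x, y):
        self.x, self.y = x, y  # the descriptive centre: where the sensors sit

    def distance_to(self, agent):
        """Perception is indexed to the robot's own location..."""
        return math.hypot(agent["x"] - self.x, agent["y"] - self.y)

    def salience(self, agent):
        """...so nearby interests loom larger in its world-model."""
        return agent["welfare_at_stake"] / (1.0 + self.distance_to(agent))

    @staticmethod
    def impartial_value(agent):
        """But the value function need not privilege the centre at all."""
        return agent["welfare_at_stake"]

robot = Robot(0.0, 0.0)
neighbour = {"x": 1.0, "y": 0.0, "welfare_at_stake": 5}
stranger = {"x": 1000.0, "y": 0.0, "welfare_at_stake": 50}

for agent in (neighbour, stranger):
    print(robot.salience(agent), Robot.impartial_value(agent))
# The neighbour dominates the salience ranking; the stranger dominates the
# impartial one. Same descriptive centre, no evaluative centre.
```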
Surely, that’s not really the topic. For instance, I am not fooled by my perspective into thinking that I am literally at the centre of the universe; nor are most educated humans. My observations are compatible with my being at many locations in the universe, relative to any edges that future astronomers might conceivably discover. I don’t see much of an illusion there.
It’s true that early humans often believed that the earth was at the center of the universe. However, that seems a bit different.
Tim, for sure, outright messianic delusions are uncommon (cf. http://www.slate.com/articles/health_and_science/science/2010/05/jesus_jesus_jesus.html). But I wonder if you may underestimate just how pervasive, albeit normally implicit, the bias imparted by the fact that the whole world seems centred on one’s body-image really is. An egocentric world-simulation lends a sense that one is in some way special: that this here-and-now is privileged, and that sentients in other times and places and Everett branches (etc.) have only a second-rate ontological status. Yes, of course, if set out explicitly, such egocentric bias is absurd. But our behaviour towards other sentients suggests to me that this distortion of perspective is all too real.
I’m still uncomfortable with the proposed “illusion” terminology. If all perceptions are illusions, we would need some other terminology in order to distinguish relatively accurate perceptions from misleading ones. However, it seems to me that that’s what the word “illusion” is for in the first place.
I’d prefer to describe a camera’s representation of the world as “incomplete” or “limited”, rather than as an “illusion”.
Tim, when dreaming, one has a generic delusion, i.e. the background assumption that one is awake, and a specific delusion, i.e. the particular content of one’s dream. But given that we’re constructing a FAQ about ideal rational agency, no such radical scepticism about perception is at stake; the aim is merely to eliminate a source of systematic bias that is generic to cognitive agents evolved under pressure of natural selection. For sure, there may be some deluded folk who don’t recognise it’s a bias, and who believe instead that they really are the centre of the universe, so that their interests and preferences carry special ontological weight. But Luke’s FAQ is expressly about normative decision theory; it explicitly contrasts itself with descriptive decision theory, which “studies how non-ideal agents (e.g. humans) actually choose.”
But what DP was talking about is thinking you are more important than others, not merely being located at the centre of your own perceptual field.
Rationality doesn’t have to mean morality to have implications for morality: since you can reason about just about anything, rationality has implications for just about everything.