This is (one of the reasons) why I’m not a total utilitarian (of any brand). For future versions of myself, my preferences align pretty well with average utilitarianism (albeit with some caveats), but I haven’t yet found or devised a formalization which captures the complexities of my moral intuitions when applied to others.
A proper theory of population ethics should be complex, as our population intuitions are complex...
Are they? They certainly look complex, but that could be because we haven’t found the proper way to describe them. For example, the Mandelbrot set looks complex, but it can be defined in a single line.
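To make the Mandelbrot point concrete, here is a minimal sketch (my own illustration, using the standard membership rule rather than anything from this thread): a point c belongs to the set exactly when iterating z → z² + c from z = 0 never escapes to infinity.

```python
# Minimal sketch of the Mandelbrot set's one-line rule:
# c is in the set iff iterating z -> z*z + c from z = 0 stays bounded.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| > 2 the orbit is guaranteed to escape
            return False
    return True

print(in_mandelbrot(-1 + 0j))  # True: 0 -> -1 -> 0 -> -1 ... stays bounded
print(in_mandelbrot(1 + 0j))   # False: 0 -> 1 -> 2 -> 5 -> ... escapes
```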
Also, “complex” is ambiguous, so perhaps it needs to be defined. I used it in the sense that something is complex if it cannot be quickly defined for a smart and reasonably knowledgeable (in the relevant domain) human, since this seems to be the relevant sense here.
There’s no particular reason why we should expect highly abstract aspects of our random-walk psychological presets to be elegant or simply defined. As such, it’s practically guaranteed that they won’t be.
I’m not saying that our population intuitions are simple; I’m saying that we can’t rule out the possibility. For example, a priori I wouldn’t have expected physics to turn out to be simple. However (at least to the level that I took it), physics seems to be remarkably simple, particularly in comparison to the universe it describes. This leads me to conclude that there is some mechanism by which things turn out to be simpler than I would expect.
To give an example, my best guess (besides “something I haven’t thought of”) for this mechanism is that mathematical expressions are fairly evenly distributed over the patterns which occur in reality, and that one should hence expect there to be a fairly simple piece of mathematics which comes very close to describing physics. A similar thing might happen with our population intuitions.
Wouldn’t highly abstract aspects of our psychology be more recent, and as such simpler?
This depends on your priors. If you assign comparable probabilities to simple and complex hypotheses, it follows that our intuitions are all but guaranteed to be complex. If you assign higher probabilities to simple hypotheses than to complex ones, it doesn’t.
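As a toy illustration of that dependence (my own numbers, not anything from the thread): model hypotheses as binary strings, so there are 2^k of each length k, and compare a flat prior with one that penalizes length.

```python
# Toy sketch: how the choice of prior decides whether "the truth is simple" is likely.
# Hypotheses are modelled as binary strings; there are 2**k of length k.
def prob_simple(weight, max_len=30, simple_cutoff=10):
    """P(true hypothesis has length <= simple_cutoff) under per-hypothesis prior weight(k)."""
    total = sum(2**k * weight(k) for k in range(1, max_len + 1))
    simple = sum(2**k * weight(k) for k in range(1, simple_cutoff + 1))
    return simple / total

print(prob_simple(lambda k: 1.0))      # flat prior: ~1e-6, complexity dominates
print(prob_simple(lambda k: 4.0**-k))  # length-penalizing prior: ~0.999, simplicity dominates
```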
If you flip 1000 fair coins, the resulting output is more likely to be a mishmash of meaningless clumps than it is to be something like “HHTTHHTTHHTTHHTT...” or another very simple repeating pattern. Similarly, a chaotic[1] process like the evolution of our ethical intuitions is more likely to produce an arbitrary mishmash of conflicting emotional drives than it is to produce some coherent system which can easily be extrapolated into an elegant theory of population ethics. All of this is perfectly consistent with any reasonable formalization of Occam’s Razor.
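To put rough numbers on the coin-flip picture (a back-of-the-envelope sketch of my own, taking “very simple repeating pattern” to mean a repeating block of at most 8 flips): simple sequences are determined by their short repeating block, so they make up an astronomically small fraction of the 2^1000 possible outcomes.

```python
# Rough upper bound on how many 1000-flip sequences repeat a block of at most
# 8 flips, compared with the total number of possible sequences.
n_flips = 1000
max_period = 8
# A sequence with period d is fixed by its first d flips, so there are at most
# 2**d of them; summing over d <= max_period overcounts, which is fine for an upper bound.
simple_sequences = sum(2**d for d in range(1, max_period + 1))
all_sequences = 2**n_flips
print(simple_sequences / all_sequences)  # ~5e-299: a fair-coin process essentially never produces one
```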
EDIT: The new definition of “complex” that you added above is a reasonable one in general, but in this case it might lead to some dangerous circularity—it seems okay right now, but defining complexity in terms of human intuition while we’re discussing the complexity of human intuition seems like a risky maneuver.
The abstract aspects in question are abstractions and extrapolations of much older empathy patterns, or are trying to be. So, no.
[1] In the colloquial sense of “lots and lots and lots of difficult-to-untangle significant contributing factors”
Maybe a better phrasing would be that we don’t a priori expect them to be simple...
Could you explain? Those sound like awfully big caveats. If I consider the population of “future versions of myself” as unchangeable, then average utilitarianism and total utilitarianism are equivalent. If I consider that population as changeable, then average utilitarianism seems to suggest changing it by removing the ones with lowest utility: e.g. putting my retirement savings on the roulette wheel and finding some means of painless suicide if I lose.
Death is a major source of negative utility even if one accepts average utilitarianism.
Yes, but this is the “consider my population unchangeable” case I mentioned, wherein “average” and “total” cease being distinct. Certainly if we calculate average utility by summing 1 won-at-roulette future with 37 killed-myself futures and dividing by 38, then we get a lousy result, but it (as well as the result of any other hypothetical future plans) is isomorphic to what we’d have gotten if we’d obtained total utility by summing those futures and then not dividing. To distinguish average utility from total utility we have to be able to make plans which affect the denominator of that average.
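A toy version of that isomorphism point (my own numbers, not from the comment): with a fixed set of 38 futures, average utility is just total utility divided by the same 38 for every plan, so the two orderings always agree; they only come apart once a plan can change the denominator.

```python
# Toy numbers illustrating why average and total utilitarianism agree whenever the
# population of futures is fixed, and disagree once a plan can change its size.
def total(utils):
    return sum(utils)

def average(utils):
    return sum(utils) / len(utils)

roulette = [100] + [-50] * 37   # 1 won-at-roulette future, 37 ruined ones
keep_savings = [10] * 38        # 38 modestly comfortable futures

print(total(roulette), average(roulette))          # -1750, ~-46: worse under both measures...
print(total(keep_savings), average(keep_savings))  # 380, 10: ...than this, so the rankings agree

cull = [100]                    # a plan that removes the 37 low-utility futures
print(total(cull), average(cull))  # 100, 100: total ranks it below keep_savings, average ranks it above
```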
Not for hedonistic utilitarianism: there, only the fear of death is bad (or the death of people who don’t get replaced by others of equal or greater happiness).