Although it’s not technically possible yet, measuring the intensity of the positive and negative components of an experience sounds like something that ought to be at least possible in principle.
I don’t see how having a quantitative, empirical measure which is appropriate for one individual helps you with comparisons across individuals. Do we really want to make people utility monsters because their neural currents devoted to measuring happiness have a higher amperage?
I was assuming that the measure would be valid across individuals. I wouldn’t expect the neural basis of suffering or pleasure to vary so much that you couldn’t automatically adapt it to the brains in question.
Do we really want to make people utility monsters because their neural currents devoted to measuring happiness have a higher amperage?
Well yes, hedonistic utilitarianism does make it possible in principle that Felix ends up screwing us over, but that’s an objection to hedonistic utilitarianism rather than the measure.
I was assuming that the measure would be valid across individuals.
I mean, the measure is going to be something like an EEG or an MRI, where we determine the amount of activity in some brain region. But even if measuring the electrical properties of that region is just an engineering problem, the units are the same from person to person, and maybe even the range is the same from person to person, that doesn’t establish the ethical principle that all people deserve equal consideration (or, where ranges or variances differ, that neural activity determines how much consideration one deserves).
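To make the “engineering problem vs. ethical principle” distinction concrete, here is a minimal Python sketch. Everything in it is made up (the “pleasure signal”, the numbers, the two people), and it is not a claim about how any real EEG or MRI pipeline works; it only illustrates that the same objective per-person readings support different interpersonal comparisons, depending on a normalization convention that the measurement itself does not supply.

```python
# Toy illustration with invented numbers for a hypothetical "pleasure signal":
# the measurement can be perfectly objective for each person, yet the
# interpersonal comparison still depends on a convention we have to choose.

readings = {
    # Baseline and post-intervention activity in some hypothetical region,
    # recorded in the same physical units for everyone.
    "alice": {"baseline": 2.0, "after": 3.0, "observed_range": (0.0, 5.0)},
    "bob":   {"baseline": 20.0, "after": 22.0, "observed_range": (0.0, 100.0)},
}

def raw_gain(person):
    """Convention A: compare raw changes in the shared physical units."""
    r = readings[person]
    return r["after"] - r["baseline"]

def normalized_gain(person):
    """Convention B: rescale each person's change by their own observed range."""
    r = readings[person]
    lo, hi = r["observed_range"]
    return (r["after"] - r["baseline"]) / (hi - lo)

for person in readings:
    print(person, raw_gain(person), round(normalized_gain(person), 3))

# Convention A says Bob's gain (2.0) outweighs Alice's (1.0);
# Convention B says Alice's gain (0.2) outweighs Bob's (0.02).
# Nothing in the readings themselves says which convention implements
# "equal consideration" -- that premise has to come from the ethics, not the scanner.
```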
Well yes, hedonistic utilitarianism does make it possible in principle that Felix ends up screwing us over, but that’s an objection to hedonistic utilitarianism rather than the measure.
It’s not obvious to me that all agents deserve the same level of moral consideration (i.e. I am open to the possibility of utility monsters), but it is obvious to me that some ways of determining who should be the utility monsters are bad (generally because they’re easily hacked or provide unproductive incentives).
Well it’s not like people would go around maximizing the amount of this particular pattern of neural activity in the world: they would go around maximizing pleasure in the-kinds-of-agents-they-care-about, where the pattern is just a way of measuring and establishing what kinds of interventions actually do increase pleasure. (We are talking about humans, not FAI design, right?) If there are ways of hacking the pattern or producing it in ways that don’t actually correlate with pleasure (of the kind that we care about), then those can be identified and ignored.
Well it’s not like people would go around maximizing the amount of this particular pattern of neural activity in the world
Depending on your view of human psychology, this doesn’t seem like that bad a description, so long as we’re talking about people only maximizing their own circuitry. (“Maximizing” is probably the wrong word, though; keeping it within some reference range is more like it.)
We are talking about humans, not FAI design, right?
That’s what I had in mind, yeah.
My core objection, which I think lines up with SaidAchmiz’s, is that even if there’s the ability to measure people’s satisfaction objectively (so that we can count the transparency problem as solved), that doesn’t tell us how to make satisfaction tradeoffs between individuals.
even if there’s the ability to measure people’s satisfaction objectively (so that we can count the transparency problem as solved), that doesn’t tell us how to make satisfaction tradeoffs between individuals.
I agree with this. I was originally only objecting to the argument that aggregating utility between individuals would be impossible or incoherent, but I do not have an objection to the argument that the mapping from subjective states to math is underspecified. (Though I don’t see this as a serious problem for utilitarianism: it only means that different people will have different mappings rather than there being a single unique one.)
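As a toy illustration of what “the mapping from subjective states to math is underspecified” can mean in practice (all numbers invented, and both mappings are just stand-ins for the kind of choice that has to be made somewhere): the same measured intensities, run through two defensible intensity-to-utility mappings, can rank the same pair of hypothetical interventions differently once aggregated across people.

```python
import math

# Toy sketch with invented numbers: identical measured intensities, two
# defensible mappings from "measured intensity" to "utility", different
# verdicts about which intervention does more aggregate good.

# Hypothetical post-intervention intensities for two people.
intervention_X = {"alice": 9.0, "bob": 1.0}
intervention_Y = {"alice": 5.0, "bob": 4.0}

def total(intervention, mapping):
    """Aggregate utility under a given intensity-to-utility mapping."""
    return sum(mapping(v) for v in intervention.values())

def linear(v):
    return v  # mapping 1: utility proportional to measured intensity

def diminishing(v):
    return math.log1p(v)  # mapping 2: diminishing returns to intensity

for name, mapping in [("linear", linear), ("diminishing", diminishing)]:
    x, y = total(intervention_X, mapping), total(intervention_Y, mapping)
    print(f"{name}: X={x:.2f}, Y={y:.2f} -> prefers {'X' if x > y else 'Y'}")

# With the linear mapping X wins (10.00 vs 9.00); with the diminishing one
# Y wins (3.00 vs 3.40). The readings never changed -- only the mapping did,
# which is the sense in which different people can end up with different mappings.
```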
I was originally only objecting to the argument that aggregating utility between individuals would be impossible or incoherent
Er, hang on. If this is your objection, I’m not sure that you’ve actually said what’s wrong with said argument. Or do you mean that you were objecting to the applicability of said argument to hedonistic utilitarianism, which is how I read your comments?
Yes.

To add to my “yes”: I agree with the claim that aggregating utility between individuals may well be incoherent in the context of preference utilitarianism. Indeed, if we define utility in terms of preferences, I’m even somewhat skeptical of the feasibility of optimizing the utility of a single individual over their lifetime: see this comment.
Kaj, is there somewhere you lay out your ethical views in more detail?

Ditto for Vaniver and Said.

I approve of virtuous acts, and disapprove of vicious ones.
In terms of labels, I think I give consequentialist answers to the standard ethical questions, but I think most character improvement comes from thinking deontologically, because of the tremendous amount of influence our identities have on our actions. If one thinks of oneself as humble, there are many known ways in which that changes how one acts, whereas one’s abstract, far-mode views are likely to change only one’s speech, not one’s behavior. Thus, I don’t put all that much effort into theories of ethics, and try to put effort instead into acting virtuously.
Interestingly, it seems our views are complementary, not contradictory. I would (I think) be willing to endorse what you said as a recipe for implementing the views I describe.
There is no such centralized place, no; I’ve alluded to my views in comments here and there over the past year or so, but haven’t laid them out fully. (Then again, I’m a member of no movements that depend heavily on any ethical positions. ;)
Truth be told — and I haven’t disguised this — my ethical views are not anywhere near completely fleshed-out. I know the general shape, I suppose, but beyond that I’m more sure about what I don’t believe — what objections and criticisms I have to other people’s views — than about what I do believe. But here’s a brief sketch.
I think that consequentialism, as a foundational idea, a basic approach, is the only one that makes sense. Deontology seems to me to be completely nonsensical as a grounding for ethics. Every seemingly-intelligent deontologist to whom I’ve spoken (which, admittedly, is a small number — a handful of people here in LessWrong) has appeared to be spouting utter nonsense. Deontology has its uses (see Bostrom’s “An Infinitarian Challenge to Aggregative Ethics”, and this post by Eliezer, for examples), but there it’s deployed for consequentialist reasons: we think it’ll give better results. I’ve seen the view expressed that virtue ethics is descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind once you’ve decided on your object-level moral views, and that seems like a more-or-less reasonable stance to take. As an actual philosophical grounding for morality, virtue ethics is nonsense, but perhaps that’s fine, given the above. Consequentialism actually makes sense. Consequences are the only things that matter? Well, yes. What else could there be?
As far as varieties of consequentialism go… I think intended and foreseeable consequences matter when evaluating the moral rightness of an act, not actual consequences; judging based on actual consequences seems utterly useless, because then you can’t even apply decision theory to the problem of deciding how to act. Judging on actual consequences also utterly fails to accord with my moral intuitions, while judging on intended and foreseeable consequences fits quite well.
I tend toward rule consequentialism rather than act consequentialism; I ask not “what would be the consequences of such an act?”, but “what sort of world would it be like, where [a suitably generalized class of] people acted in this [suitably generalized] way? Would I want to live in such a world?”, or something along those lines. I find act consequentialism to be too often short-sighted, and open to all sorts of dilemmas to which rule consequentialism simply does not fall prey.
I take seriously the complexity of value, and think that hedonistic utilitarianism utterly fails to capture that complexity. I would not want to live in a world ruled by hedonistic utilitarians. I wouldn’t want to hand them control of the future. I generally think that preferences are what’s important, and ought to be satisfied — I don’t think there’s any such thing as intrinsically immoral preferences (not even the preference to torture children), although of course one might have uninformed preferences (no, Mr. Example doesn’t really want to drink that glass of acid; what he wants is a glass of beer, and his apparent preference for acid would dissolve immediately, were he apprised of the facts); and satisfying certain preferences might introduce difficult conflicts (the fellow who wants to torture children — well, if satisfying his preferences would result in actual children being actually tortured, then I’m afraid we couldn’t have that). “I prefer to kill myself because I am depressed” is genuinely problematic, however. That’s an issue that I think about often.
All that seems like it might make me a preference utilitarian, or something like it, but as I’ve said, I’m highly skeptical about the possibility or even coherence of aggregating utility across individuals, not to mention the fact that I don’t think my own preferences adhere to the VNM axioms, and so it may not even be possible to construct a utility function for all individuals. (The last person with whom I was discussing this stopped commenting on LessWrong before I could get hold of my copy of Rational Choice in an Uncertain World, but now I’ve got it, and I’m willing to discuss the matter, if anyone likes.)
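A minimal, purely illustrative check of the VNM worry (hypothetical options A, B, C; this only touches the transitivity/ordering requirement, not the full axioms with lotteries): if someone’s pairwise preferences contain a cycle, no assignment of real-valued utilities can reproduce them, which is the sense in which constructing a utility function for an individual can simply fail.

```python
from itertools import permutations

# Hypothetical strict pairwise preferences containing a cycle:
# A over B, B over C, and C over A.
cyclic = {("A", "B"), ("B", "C"), ("C", "A")}
transitive = {("A", "B"), ("B", "C"), ("A", "C")}
options = ["A", "B", "C"]

def some_ranking_fits(prefers, options):
    """Brute-force check: is there any ranking of the options whose induced
    pairwise comparisons agree with every stated strict preference?"""
    for ranking in permutations(options):
        rank = {opt: i for i, opt in enumerate(ranking)}  # lower index = better
        if all(rank[a] < rank[b] for (a, b) in prefers):
            return True
    return False

print(some_ranking_fits(cyclic, options))      # False: no utility function fits a cycle
print(some_ranking_fits(transitive, options))  # True: e.g. u(A) > u(B) > u(C) works
```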
I don’t think it’s obvious that all beings that matter, matter equally. I don’t see anything wrong with valuing my mother much more than I value a randomly selected stranger in Mongolia. It’s not just that I do, in fact, value my mother more; I think it’s right that I should. My family and friends more than strangers; members of my culture (whatever that means, which isn’t necessarily “nation” or “country” or any such thing, though these things may be related) more than members of other cultures… this seems correct to me. (This seems to violate both the “equal consideration” and “agent-neutrality” aspects of classical utilitarianism, to again tie back to the SEP breakdown.)
As far as who matters — to a first approximation, I’d say it’s something like “beings intelligent and self-aware enough to consciously think about themselves”. Human-level intelligence and subjective consciousness, in other words. I don’t think animals matter. I don’t think unborn children matter, nor do infants (though there are nonetheless good reasons for not killing them, having to do with bright lines and so forth; similar considerations may protect the severely mentally disabled, though this is a matter which requires much further thought).
Do these thoughts add up to a coherent ethical system? Unlikely. They’re what I’ve got so far, though. Hopefully you find them at least somewhat useful, and of course feel free to ask me to elaborate, if you like.
Out of curiosity, what was your reason for asking about my ethical views in detail? I did somewhat enjoy writing out that comment, but I’m curious as to whether you were planning to go somewhere with this.
I’m glad you enjoyed it; you’re right that I didn’t go anywhere with it, as I got distracted by other things. But it was partly a sort of straw poll to supplement the survey, and partly connected to these concerns: http://lesswrong.com/lw/k60/2014_survey_of_effective_altruists/aw1p

No big systematic overview, though several comments and posts of mine touch upon different parts of my views. Is there anything in particular that you’re interested in?
If I could ask two quick questions, it’d be whether you’re a realist and whether you’re a cognitivist. The preponderance of those views within EA is what I’ve heard debated most often. (This is different from what first made me ask, but I’ll drop that.)
I know Jacy Anthis—thebestwecan on LessWrong—has an argument that moral realism, combined with the moral beliefs about future generations typical among EAs, suggests that smarter people in the future will work out a more correct ethics, and that this should significantly affect our actions now. He rejects realism, and thinks this is a bad consequence. I think it actually doesn’t depend on realism, but rather on most forms of cognitivism, for instance ones on which our coherent extrapolated view is correct. He plans to write about this.
Definitely not a realist. I haven’t looked at the exact definitions of these terms very much, but judging from the Wikipedia and SEP articles that I’ve skimmed, I’d call myself an ethical subjectivist (which apparently does fall under cognitivism).
I believe the prevalence of moral realism within EA is risky and bad for EA goals, for several reasons. One is that moral realists tend to believe in the inevitability of a positive far future (since smart minds will converge on the “right” morality), which tends to make them focus on ensuring the existence of the far future at the cost of other things.
If smart minds will converge on the “right” morality, this makes sense, but I severely doubt that’s true. It could be, but that possibility certainly isn’t worth sacrificing other improvement goals for.
And I think trying to figure out the “right” morality is a waste of resources for similar reasons. CEA has expressed the views I argue against here, which has other EAs and me concerned.