My parents are both vegetarian, and have been since I was born. They brought me up to be a vegetarian. I’m still a vegetarian. Clearly I’m on shaky ground, since my beliefs weren’t formed from evidence, but purely from nurture.
Interestingly, my parents became vegetarian because they perceived the way animals were farmed to be cruel (although they also stopped eating non-farmed animals such as fish); however, my rationalisation for not eating meat is that it is the killing of animals that is wrong (generalising from the belief that killing humans is worse than mistreating them). Since eating meat is not necessary to live, it must therefore be as bad as hunting for fun, which is much more widely disapproved of. (I’m not a vegan, and I often eat sweets containing gelatine; if asked to explain this, I would rationalise that eating these things causes the deaths of far fewer animals than eating, say, steak.)
But having read all of Eliezer’s posts, I now realise that I could have come up with that rationalisation even if eating meat were not wrong, and that I’m now in just as bad a position as a religious believer. I want a crisis of faith, but I have a problem…
I don’t know where to go back to. There’s no objective basis for morality. I don’t know what kind of evidence I should condition on (I don’t know what would be different about the world if eating meat were good instead of bad). If a religious person realises they have no evidence, they should go back to their priors. Because god has a tiny prior, they should immediately stop believing. I don’t know exactly what the prior on “killing animals is wrong” is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this. What should I do now?
Footnote: I probably don’t have to say this, but I don’t want arguments for or against vegetarianism, simply advice on how one should challenge one’s own moral beliefs. I’ve used “eating meat” and “killing animals” interchangeably in my post, because I think that they are morally equivalent due to supply and demand.
I hope this isn’t a vegetarianism argument, but remember that you have to rehabilitate both killing and cruelty to justify eating most meat, even if killing alone has held you back so far.
That’s an excellent point, and one I may not have spotted otherwise. Thank you.
Do you want to eat meat?
Or do you just want to have a good reason for not wanting to eat meat?
It’s… y’know… food. I don’t have an ethical objection to peppermint but I don’t eat it because I don’t want to.
Is a state of nonexistence (death) truly a negative, or is it the most neutral of all states?
If Omega told me that the rest of my life would be more painful than it was pleasant I would still choose to live. I think most others here would choose similarly (except in cases of extreme pain like torture).
On grounds of utility, I believe that choosing to live is irrational.
Even if my life would be painful on net, there are still projects I want to finish and work I want to do for others that would prevent me from choosing death. Valuing things such as these is no more irrational than valuing your own pleasure.
Perhaps our disagreement is over the connection between pain/pleasure and utility. I would prefer a world in which I was in pain but am able to complete certain projects to one in which I was in pleasure but unable to complete certain projects. In the economic sense of utility (rank in an ordinal preference function), my utility would be higher in the former world than the latter world (even though the former is more painful).
I think your disagreement is over time preference. Which path you choose now depends on how much you discount future pain versus present moral guilt or empathy considerations.
I would prefer a world in which I was in pain but am able to complete certain projects to one in which I was in pleasure but unable to complete certain projects.
In other words, you would make that choice now because that would make you feel best now. Of course (you project that) you would make the same choice at time T, for all T occurring between now and the completion of your projects.
This is known as having a high time preference. It might seem like a quintessential example of low time preference, because you get a big payoff if you can persist through to completing those projects. However, the initial assumption was that “the rest of my life would be more painful than it was pleasant,” so ex hypothesi the payoff cannot possibly be big enough to balance out the pain.
Pleasure and pain have little to do with it.
Thanks, I read the article, and I think everything in it is actually answered by my post above. For instance:
I wouldn’t put myself into a holodeck even if I could take a pill to forget the fact afterward. That’s simply not where I’m trying to steer the future.
He’s confused about time structure here. He doesn’t want to take the pill now, because that would have a dreadful effect on his happiness now. Whether we call it pleasure/pain, happiness/unhappiness or something else, there’s no escaping it.
So my values are not strictly reducible to happiness: There are properties I value about the future that aren’t reducible to activation levels in anyone’s pleasure center;
Eliezer says his values are not reducible to happiness. Yet how unhappy (or painful) would it be for him right now to watch the happy-all-the-time pill slowly being inched toward his mouth, knowing he’ll be made to swallow it? I suspect those would be the worst few moments of his life.
It’s not that values are not reducible to happiness, it’s that happiness has a time structure that our language usually ignores.
What if you sneak up on him while he’s sleeping and give him the happy-all-the-time injection before he knows you’ve done it? Then he wouldn’t have that moment of unhappiness.
Yes, and he would never care about it as long as he never entertained the prospect. I don’t think there is a definition of “value” that does everything he needs it to while at the same time not referring to happiness/unhappiness or similar. Charity requires that I continue to await such a definition, but I am skeptical.
Preference satisfaction != happiness. How many times do I have to make this point?
Which would you prefer to have happen, without any forewarning:
1) I wirehead you, and destroy the rest of the world, or 2) I torture you for a while, and leave the rest of the world alone.
If you don’t pick 2, you’re an asshole. :P
Preference satisfaction != happiness. How many times do I have to make this point?
Indeed, we cannot say categorically that “preference satisfaction = happiness,” but my point has been that such a statement is not very elucidating unless it takes time structure into account:
Satisfaction of my current preferences right now = happiness right now (this is tautological, unless we are including dopamine-based “wanting but not liking” in the definition of preferences—I can account for the case if you’d like, but it will make my response a lot longer)
Knowledge that my current preferences will be satisfied later = happiness right now (but yes, this does not necessarily equal happiness later)
ETA: In case it’s not clear, you still haven’t shown that values are not just another way of looking at happiness/unhappiness. “Preference” is just another word for value—or if not, please define. I didn’t answer your other question simply because neither answer reveals anything about either of our positions.
However, your implying that someone who chooses option 1 should feel like an asshole underscores my point: if someone chose 2 over 1, it’d be because choosing 1 would be painful (insofar as seeing oneself as an asshole is painful ;-). (By the way, you can’t get around this by adding “without warning”, because the fact that you can make a choice about what is coming implies you believe it’s coming, even if you don’t know when; and if you don’t believe it’s coming, then it’s a meaningless hypothetical.)
Disclosure: I didn’t wait for Shadow (and I felt like an asshole, although that was afterward).
Satisfaction of my current preferences right now = happiness right now (this is tautological, unless we are including dopamine-based “wanting but not liking” in the definition of preferences—I can account for the case if you’d like, but it will make my response a lot longer)
It isn’t tautological. In fact, it’s been my experience that this is simply not true. There seem to be times that I prefer to wallow in self-pity rather than feel happiness. Anger also seems to preclude happiness in the moment, but there are also times that I prefer to be angry. I could probably also make you happy and leave many of your preferences unsatisfied by injecting you with heroin.
Somehow I think we’re just talking past each other...
There seem to be times that I prefer to wallow in self-pity rather than feel happiness.
I’ve been there too, but I will try to show below that that is just having an (extremely) high time preference. Don’t you get some tiny instantaneous satisfaction from choosing at any given moment to continue wallowing in self-pity? I do. It’s similar to the guilty pleasure of horking down a tub of Ben&Jerry’s, but on an even shorter timescale.
Here are my assumptions:
If humans had no foresight, they would simply choose what gave them immediate pleasure or relief from pain. The fact that we have instincts doesn’t complicate this, because an instinct is simply an urge, which is just another way of saying, “going against it is more painful than going with it.”
But we do have foresight, and our minds are (almost) constantly hounding us to consider the effects of present decisions on future expected pleasure. This is of course vital to our continued survival. However, we can push those future-oriented thoughts out of our minds to some extent (some people excel at this). Certain states—anger probably foremost among them—can effectively shut off or decrease the hounding about the future as well. Probably no one has a time preference of “IMMEDIATELY” all the time, and having a low (long) time preference is usually associated with good mental health and self-actualization.
(Note that this “emotional time preference” is relative: perhaps we weight the pleasure experienced in the next second very highly versus the coming year, or perhaps the reverse; or perhaps it’s the next hour versus the next few days, etc.)
So what we call values are generally things we are willing to defer to our brain’s “future hounding” about.
Example: A man chances upon some ice cream, but he is lactose intolerant. Let’s say he believes the pain of the forthcoming upset stomach will exceed the pleasure of the eating experience. If his mind is hounding him hard enough about the pain of an upset stomach (which will occur hours later), it will override the prospect of a few minutes’ pleasure starting now. If he is able to push those thoughts out of his mind, he can enjoy the ice cream now, against his “better judgment” (this is akrasia in a nutshell).
Now I think we have the groundwork to explain in an elucidating way why the question you posed can be answered in terms of pleasure/pain only.
1) I wirehead you, and destroy the rest of the world
I’m going to make the decision right now, and I contend that I am going to make that decision purely based on pleasure vs. pain felt right now. Factors include: empathy, guilt, and future prospects of happiness (let’s just assume being wireheaded really is purely pleasurable/happy).
If I reject (1), it will be because
I believe I would immediately feel very sorry for the people (because of “future hounding”), and/or
I believe I would immediately feel very guilty for my decision, and/or
I relatively discount the future prospects of pleasure.
If I accept (1), it will be because of a different weighting of the above factors, but that weighting is happening right now, not at some future time. Now let me repeat what I wrote above: The fact that we have instincts (empathy, guilt, even the instinct to create and adhere to “values”) doesn’t complicate this, because an instinct is simply an urge, which is just another way of saying, “going against it is more painful than going with it.”
When do you think suicide would be the rational option?
When doing so causes a sufficiently large benefit for others (i.e., a ‘suicide mission’, as opposed to mere suicide). Or when you have already experienced enough danger (that is, situations likely to have killed you) to overcome your prior and conclude that you have quantum immortality with high enough confidence.
I don’t know exactly what the prior on “killing animals is wrong” is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this.
Is it meaningful to put a probability on ‘killing animals is wrong’ and absolute moral statements like that? Feels like trying to put a probability on ‘abortion is wrong’ or ‘gun control is wrong’ or ‘(insert your pet issue here) is wrong/right’ or...
No, it’s not meaningful to put a prior probability on it, unless you seriously think something like absolute morality exists. Having said that, the prior for “killing animals is wrong” is still higher than the prior for the God of Abraham existing.
Note that Bayesian probability is not absolute, so it’s not appropriate to demand absolute morality in order to put probabilities on moral claims. You just need a meaningful (subjective) concept of morality. This holds for any concept one can consider: any statement can be assigned a subjective probability, and morality isn’t an exceptional special case.
If morality is a fixed computation, you can place probabilities on possible outputs of that computation (or more concretely, on possible outputs of an extrapolation of your or humanity’s volition).
I find this paper to be a good resource to think about this subject: https://motherjones.com/files/emotional_dog_and_rational_tail.pdf
See this discussion of my own meat-eating. My conclusion was that there is not much of a rational basis for deciding one way or the other—my attempts to use rationality broke down.
I think you should go out and get yourself something deliciously meaty, while still being mostly vegetarian. “Fair-weather vegetarianism”. Unless you don’t actually like the taste of meat. That’s OK. There’s also an issue of convenience. You could begin the slippery slope of drinking chicken broth soup and eating Thai food with lots of fish sauce.
We exist in an immoral system and there isn’t much to do about it. Being a vegetarian for reasons of animal suffering is symbolic. If we truly cared about the holocaust of animal suffering, we would be waging a guerrilla war against factory farms.
In this case, other people seem to have concluded that the value of not eating a piece of an animal is, in the long run, equal to that much animal not suffering/dying. So I know the difference one person could make, and it seems too small to be worth the hassle of not eating meat that other people prepare for me, and not worth the inconvenience of not getting the most delicious item on the menu at restaurants.