Lots of reasons. It’s pretty much built into the human brain that being nice to your friends and neighbours is helpful to long-term survival, so most people get pleasant feelings from doing something they consider ‘good’, and feel guilty after doing something they consider ‘bad’. You don’t need the Commandments themselves.
...Oh and the whole idea that it’s better to live in a society where everyone follows laws like “don’t murder”...even if you personally could benefit from murdering the people who you didn’t like, you don’t want everyone else murdering people too, and so it makes sense, as a society, to teach children that ‘murder is bad’.
It’s pretty much built into the human brain that being nice to your friends and neighbours is helpful to long-term survival, so most people get pleasant feelings from doing something they consider ‘good’, and feel guilty after doing something they consider ‘bad’.
Are these reasons to not kill people or steal? Can I propose a test? Suppose that it were built into the human brain that being cruel to your friends and neighbors is helpful to long-term survival (bear with me on the evolutionary implausibility of this), and so most people get pleasant feelings from doing things they consider cruel, and feel guilty after doing nice things.
Suppose all that were true: would you then have good reasons to be cruel? If not, then how are they reasons to be nice?
We might want to distinguish here between reasons to do something and reasons why one does something. So imagine we discover that the color green makes people want to compromise, so we paint a boardroom green. During a meeting, the chairperson decides to compromise. Even if the chairperson knows about the study, and is being affected by the green walls in a decisive way (such that the greenness of the walls is the reason why he or she compromises), could the chairperson take the greenness of the walls as a reason to compromise?
A reasonable distinction, but I don’t think it quite maps onto the issue at hand. You said to suppose “people get pleasant feelings from doing things they consider cruel, and feel guilty after doing nice things”. If one has a goal to feel pleasant feelings, and is structured in that manner, then that is reason to be cruel, not just reason why they would be cruel.
If one has a goal to feel pleasant feelings, and is structured in that manner, then that is reason to be cruel, not just reason why they would be cruel.
Agreed, but so much is packed into that ‘if’. We all seek pleasure, but not one of us believes it is an unqualified good. The implication of Swimmer’s post was that atheists have reasons to obey the ten commandments (well, 4 or 5 of them) comparable in formal terms to the reasons Christians have (God’ll burn me if I don’t, or whatever). That is, the claim seems to be that atheists can justify their actions. Now, if someone does something nice for me, and I ask her why she did that, she can reply with some facts about evolutionary biology. This might explain her behavior, but it doesn’t justify it.
If we imagine someone committing a murder and then telling us something about her (perhaps defective) neurobiology, we might take this to explain her behavior, but never to justify it. We would never say “Yeah, I guess now that you make those observations about your brain, it was reasonable of you to kill that guy.” The point is that the murderer hasn’t just given us a bad reason, she hasn’t given us a reason at all. We cannot call her rational if this is all she has.
The implication of Swimmer’s post was that atheists have reasons to obey the ten commandments (well, 4 or 5 of them) comparable in formal terms to the reasons Christians have (God’ll burn me if I don’t, or whatever).
I didn’t claim that, and if I implied it, it was by accident. (Although I do think that a lot of atheists have just as strong if not stronger reasons to obey certain moral rules, the examples I gave weren’t those examples.) I was trying to point out that if someone decides one day to stop believing in God, and realizes that this means God won’t smite them if they break one of the Ten Commandments, that doesn’t mean they’ll go out and murder someone. Their moral instincts, and the positive/negative reinforcement to obey them (i.e. pleasure or guilt), keep existing regardless of external laws.
The point is that the murderer hasn’t just given us a bad reason, she hasn’t given us a reason at all. We cannot call her rational if this is all she has.
So we ask her why, and she says “oh, he took the seat that I wanted on the bus three weeks in a row, and his humming is annoying, and he always copies my exams.” Which might not be a good reason to murder someone according to you, with your normal neurobiology–you would content yourself with fuming and making rude comments about him to your friends–but she considers it a good reason, because her mental ‘brakes’ are off.
Their moral instincts, and the positive/negative reinforcement to obey them (i.e. pleasure or guilt), keep existing regardless of external laws.
Right, we agree on that. But if the apostate thereafter has no reason to regard themselves as morally responsible, then their moral behavior is no longer fully rational. They’re sort of going through the motions.
Which might not be a good reason to murder someone according to you, with your normal neurobiology–you would content yourself with fuming and making rude comments about him to your friends–but she considers it a good reason, because her mental ‘brakes’ are off.
The question here isn’t about good vs. bad reasons, but about admissible vs. inadmissible reasons. Hearsay is often a bad reason to believe that Peter shot Paul, but it is a reason. It counts as evidence. If that’s all you have, then you’re not reasoning well, but you are reasoning. The number of planets orbiting the star furthest from the sun is not a reason to believe Peter shot Paul. It’s not that it’s a bad reason. It’s just totally inadmissible. If that’s all you have, then you’re not reasoning badly, you’re just not reasoning at all.
Suppose all that were true: would you then have good reasons to be cruel?
It’s a hard world to visualize, but if cruelty-tendencies evolved because people survived better by being cruel, then cruelty works in that world, and society would be dysfunctional if there were rules against it (imagine our world having rules against being nice, ever!), and to me, something being useful is a good reason to do it.
If we ever came across that species, no doubt we’d be appalled, but the universe isn’t appalled. Not unless you believe that morality exists in itself, independently of brains...which I don’t.
Suppose that it were built into the human brain that being cruel to your friends and neighbors is helpful to long-term survival (bear with me on the evolutionary implausibility of this), and so most people get pleasant feelings from doing things they consider cruel, and feel guilty after doing nice things.
If there were an entire society built out of people like this, then probably quite a lot of minor day-to-day cruelty would go on, and there would be rationalized Laws, like the Ten Commandments, justifying why being cruel was so important, and there would be social customs and structures and etiquette involved in making sure the right kind of cruelty happened at the right times…
I’m not saying that our brain’s evolutionary capacity for empathy is the ultimate perfect moral theory. But I do think that all those moral theories, perfect or ultimate or not, exist because our brains evolved to have the little voice of empathy. Which means that if you take away the Ten Commandments, most people won’t stop being nice to people they care about.
(Being nice to strangers or members of an outgroup is a completely different matter...there seems to be a mechanism for turning off empathy towards groups of strangers, and plenty of societies have produced people who were very nice to their friends and neighbors, and barbaric towards everyone else.)
Most atheists don’t accept deontological moral theories–i.e. any theory that talks about a set of a priori rules of what’s right versus wrong. But morality doesn’t go away. If you reason it out starting from what our brains already tell us, you end up with utilitarian theories (“I like being happy, and I’m capable of empathy, so I think other people must like being happy too, and since my perfect world would be one where I was happy all the time, the perfect world for everyone would be one with maximum happiness.”)
Alternately you end up with Kantian theories (“I like being treated as an end, not a means, and empathy tells me other people are similar to me, so we should treat everyone as an end in themselves and not as a means… Oh, and Action X will make me happy, but if everyone else did Action X too, it would make me unhappy, and empathy tells me everyone else is about like me, so they wouldn’t want me to do X, so the best society is one in which no one does X.”) Etc.
If you don’t reason it out, you get “well, it made me happy when I helped Susan with her homework, and it made me feel bad when I said something mean to Rachel and she cried, so I should help people more and not be mean as much.” These feelings aren’t perfect, and there are lots of conflicting feelings, so people aren’t nice all the time...but the innate brain mechanisms are there even when there aren’t any laws, and the fact that they’re there is probably the reason why there are laws at all.
These feelings aren’t perfect, and there are lots of conflicting feelings, so people aren’t nice all the time...but the innate brain mechanisms are there even when there aren’t any laws, and the fact that they’re there is probably the reason why there are laws at all.
So we agree that one might have a reason to do something because it’s recommended by moral theories. What I’m questioning is whether you can have a reason to do something, or a reason to adopt a moral theory, on the basis of brain mechanisms. And I don’t mean ‘good’ reasons, I mean admissible reasons.
Imagine someone thinking to themselves: ‘Well, my brain is structured in such and such a way as a result of evolution, so I think I’ll kill this completely innocent guy over here.’ Is he thinking rationally?
And concerning the adoption of a moral theory:
(“I like being happy, and I’m capable of empathy, so I think other people must like being happy too, and since my perfect world would be one where I was happy all the time, the perfect world for everyone would be one with maximum happiness.”)
There’s a missing inference here from wanting to be happy to wanting other people to be happy. Can you explain how you think this argument gets filled out? As it stands, it’s not valid.
Likewise:
“I like being treated as an end, not a means, and empathy tells me other people are similar to me, so we should treat everyone as an end in themselves and not as a means...
Why should the fact that other people want something motivate me? It doesn’t follow from the fact that my wanting something motivates me, that another person’s wanting that thing should motivate me. In both these arguments there’s a missing step which, I think, is pertinent to the problem above: the fact that I am motivated to X doesn’t even give me reason to X, much less a reason to pursue the desires of other people.
Well, my brain is structured in such and such a way as a result of evolution, so I think I’ll kill this completely innocent guy over here.
Beliefs don’t feel like beliefs, they feel like the way the world is. Likewise with brain structures. If someone is a sociopath (in short, their brain mechanism for empathy is broken) and they decide they want to kill someone for reasons X and Y, are they being any more irrational than someone who volunteers at a soup kitchen because seeing people smile when he hands them their food makes him feel fulfilled?
(“I like being happy, and I’m capable of empathy, so I think other people must like being happy too, and since my perfect world would be one where I was happy all the time, the perfect world for everyone would be one with maximum happiness.”)
There’s a missing inference here from wanting to be happy to wanting other people to be happy. Can you explain how you think this argument gets filled out? As it stands, it’s not valid.
Sorry for not being clear. The inference is that “empathy”, the ability to step into someone else’s shoes and imagine being them, is an innate ability that most humans have, and it leads you to think that other people are like you...when they feel pleasure, it’s like your pleasure, and when they feel pain, it’s like your pain, and there’s a hypothetical world where you could have been them. I don’t think this hypothetical is something that’s taught by moral theories, because I remember reasoning with it as a child when I’d had basically no exposure to formal moral theories, only the standard “that wasn’t nice, you should apologize.” If you could have been them, you want the same things for them that you’d want for yourself.
I think this is immediately obvious for family members and friends...do you want your mother to be happy? Your children?
Beliefs don’t feel like beliefs, they feel like the way the world is.
Perhaps on some level this is right, but the fact that I can assess the truth of my beliefs means that they don’t feel like the way the world is in an important respect. They feel like things that are true or false. The way the world is has no truth value. Very small children have trouble with this distinction, but so far as I can tell almost all healthy adults do not believe that their beliefs are identical with the world. ETA: That sounded jerky. I didn’t intend any covert meanness, and please forgive any appearance of that.
If someone is a sociopath (in short, their brain mechanism for empathy is broken) and they decide they want to kill someone for reasons X and Y, are they being any more irrational than someone who volunteers at a soup kitchen because seeing people smile when he hands them their food makes him feel fulfilled?
I think I really don’t understand your question. Could you explain the idea behind this a little better? My objection was that there are reasons to do things, and reasons why we do things, and while all reasons to do things are also reasons why, there are reasons why that are not reasons to do things. For example, having a micro-stroke might be the reason why I drive my car over an embankment, but it’s not a reason to drive one’s car over an embankment. No rational person could say to themselves “Huh, I just had a micro-stroke. I guess that means I should drive over this embankment.”
I think this is immediately obvious for family members and friends...do you want your mother to be happy? Your children?
Sure, but I take myself to have moral reasons for this. I may feel this way because of my biology, but my biology is never itself a reason for me to do anything.
That post is in need of some serious editing: I genuinely couldn’t tell if it was on the whole agreeing with what I was saying or not.
I have a puzzle for you: suppose we lived in a universe which is entirely deterministic. From the present state of the universe, all future states could be computed. Would that mean that deliberation in which we try to come to a decision about what to do is meaningless, impossible, or somehow undermined? Or would this make no difference?
That post is in need of some serious editing: I genuinely couldn’t tell if it was on the whole agreeing with what I was saying or not.
That post didn’t have a conclusion, because EY wanted to get much further into his Metaethics sequence before offering one.
I have a puzzle for you: suppose we lived in a universe which is entirely deterministic. From the present state of the universe, all future states could be computed. Would that mean that deliberation in which we try to come to a decision about what to do is meaningless, impossible, or somehow undermined? Or would this make no difference?
It makes no difference. In fact, many-worlds is a deterministic universe; it just so happens there are different versions of future-you who experience/do different things, so it’s not “deterministic from your viewpoint”.
So I’d like to argue that it makes at least a little difference. When we engage in practical deliberation, when we think about what to do, we are thinking about what is possible and about ourselves as sources of what is possible. No one deliberates about the necessary, or about anything over which we have no control: we don’t deliberate about what the size of the sun should be, or whether or not modus tollens should be valid.
If we realize that the universe is deterministic, then we may still decide that we can deliberate, but we do now qualify this as a matter of ‘viewpoints’ or something like that. So the little difference this makes is in the way we qualify the idea of deliberation.
So do you agree that there is at least this little difference? Perhaps it is inconsequential, but it does mean that we learn something about what it means to deliberate when we learn we are living in a deterministic universe as opposed to one with a bunch of spontaneous free causes running around.
It all adds up to normality. Everything you do when making a decision is something a deterministic agent can do, and a deterministic agent that deliberates well will (on average) experience higher expected value than deterministic agents that deliberate poorly.
You’re getting closer to the sequence of posts that covers this in more detail, so I’ll just say that I endorse what’s said in this sequence.
It all adds up to normality. Everything you do when making a decision is something a deterministic agent can do, and a deterministic agent that deliberates well will (on average) experience higher expected value than deterministic agents that deliberate poorly.
What is normality exactly? It’s not the ideas and intuitions I came to the table with, unless the theory actually proposes to teach me nothing. My question is this: “what do I learn when I learn that the universe is deterministic?” Do I learn anything that has to do with deliberation? One reasonable answer (and one way to explain the normality point) would just be ‘no, it has nothing to do with action.’ But this would strike many people as odd, since we recognize in our deliberation a distinction between future events we can bring about or prevent, and future states we cannot bring about or prevent.
You’re getting closer to the sequence of posts that covers this in more detail, so I’ll just say that I endorse what’s said in this sequence.
I find I have an extremely hard time understanding some of the arguments in that sequence, after several attempts. I would dearly love to have some of it explained in response to my questions. I find this argument in particular to be very confusing:
But have you ever seen the future change from one time to another? Have you wandered by a lamp at exactly 7:02am, and seen that it is OFF; then, a bit later, looked in again on the “the lamp at exactly 7:02am”, and discovered that it is now ON?
Naturally, we often feel like we are “changing the future”. Logging on to your online bank account, you discover that your credit card bill comes due tomorrow, and, for some reason, has not been paid automatically. Imagining the future-by-default—extrapolating out the world as it would be without any further actions—you see the bill not being paid, and interest charges accruing on your credit card. So you pay the bill online. And now, imagining tomorrow, it seems to you that the interest charges will not occur. So at 1:00pm, you imagined a future in which your credit card accrued interest charges, and at 1:02pm, you imagined a future in which it did not. And so your imagination of the future changed, from one time to another.
This argument (which reappears in the ‘timeless control’ article) seems to hang on a very weird idea of ‘changing the future’. No one I have ever talked to believes that they can literally change a future moment from having one property to having another, and that this change is distinct from a change that takes place over an extent of time. I certainly don’t see how anyone could take this as a way to treat the world as undetermined. This seems like very much a strawman view, born from an equivocation on the word ‘change’.
But I expect I am missing something (perhaps something revealed later on in the more technical stage of the article). Can you help me?
I meant that learning the universe is deterministic should not turn one into a fatalist who doesn’t care about making good decisions (which is the intuition that many people have about determinism), because goals and choices mean something even in a deterministic universe. As an analogy, note that all of the agents in my decision theory sequence are deterministic (with one kind-of exception: they can make a deterministic choice to adopt a mixed strategy), but some of them characteristically do better than others.
Regarding the “changing the future” idea, let’s think of what it means in the context of two deterministic computer programs playing chess. It is a fact that only one game actually gets played, but many alternate moves are explored in hypotheticals (within the programs) along the way. When one program decides to make a particular move, it’s not that “the future changed” (since someone with a faster computer could have predicted in advance what moves the programs make, the future is in that sense fixed), but rather that of all the hypothetical moves it explored, the program chose one according to a particular set of criteria. Other programs would have chosen other moves in those circumstances, which would have led to different games in the end.
When you or I are deciding what to do, the different hypothetical options all feel like they’re on an equal basis, because we haven’t figured out what to choose. That doesn’t mean that different possible futures are all real, and that all but one vanish when we make our decision. The hypothetical futures exist on our map, not in the territory; it may be that no version of you anywhere chooses option X, even though you considered it.
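The chess-program analogy can be sketched in a few lines of code. This is a toy illustration with names of my own invention, not anything from the original discussion: a fully deterministic procedure that still “deliberates”, in the sense that it scores hypothetical options on its internal map before committing to the single move that actually happens in the territory.

```python
# Toy sketch (illustrative names): a deterministic agent that deliberates.
# Every hypothetical option is explored inside the agent; only the
# returned choice ever "happens", and a faster observer could have
# predicted it in advance.

def deliberate(options, score):
    """Evaluate every hypothetical option and return the best one."""
    best = None
    best_score = float("-inf")
    for option in options:
        s = score(option)  # hypothetical evaluation, no side effects
        if s > best_score:
            best, best_score = option, s
    return best

# Three candidate opening moves, with a fixed (deterministic) evaluation.
moves = ["e4", "d4", "c4"]
choice = deliberate(moves, score=lambda m: {"e4": 3, "d4": 2, "c4": 1}[m])
print(choice)  # -> e4
```

The point of the sketch is that nothing about determinism removes the deliberation step: the agent’s choice is fixed in advance, but it is fixed *via* the exploration of hypotheticals, not instead of it.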
but some of them characteristically do better than others.
A fair point, though I would be interested to hear how the algorithms described in DT relate to action (it can’t be that they describe action, since we needn’t act on the output of a DT, especially given that we’re often akratic). When the metaethics sequence, for all the trouble I have with its arguments, gets into an account of free will, I don’t generally find myself in disagreement. I’ve been looking over that and the physics sequences in the last couple of days, and I think I’ve found the point where I need to do some more reading: I think I just don’t believe either that the universe is timeless, or that it’s a block universe. So I should read Barbour’s book.
Thanks, by the way, for posting that DT series, and for answering my questions. Both have been very helpful.
Does that make more sense?
It does, but I find myself, as I said, unable to grant the premise that statements about the future have truth value. I think I do just need to read up on this view of time.
Thanks, by the way, for posting that DT series, and for answering my questions. Both have been very helpful.
You’re welcome!
I would be interested to hear how the algorithms described in DT relate to action (it can’t be that they describe action, since we needn’t act on the output of a DT, especially given that we’re often akratic).
Yeah, a human who consciously endorses a particular decision theory is not the same sort of agent as a simple algorithm that runs that decision theory. But that has more to do with the messy psychology of human beings than with decision theory in its abstract mathematical form.
Beliefs don’t feel like beliefs, they feel like the way the world is.
Perhaps on some level this is right, but the fact that I can assess the truth of my beliefs means that they don’t feel like the way the world is in an important respect.
OK, let me give you a better example. When you look at something, a lot of very complex hardware packed into your retina, optic nerve, and visual cortex, a lot of hard-won complexity optimized over millions of years, is going all out analyzing the data and presenting you with comprehensible shapes, colour, and movement, as well as helpfully recognizing objects for you. When you look at something, are you aware of all that happening? Or do you just see it?
(Disclaimer: if you’ve read a lot about neuroscience, it’s quite possible that sometimes you do think about your visual processing centres while you’re looking at something. But the average person wouldn’t, and the average person probably doesn’t think ‘well, there go my empathy centres again’ when they see an old lady having trouble with her grocery bag and feel a desire to help her.)
I think I really don’t understand your question. Could you explain the idea behind this a little better? My objection was that there are reasons to do things, and reasons why we do things, and while all reasons to do things are also reasons why, there are reasons why that are not reasons to do things.
Okay, let’s try to unpack this. In my example, we have a sociopath who wants to murder someone. The reason why he wants to murder someone, when most people don’t, is that a centre in his brain is broken, so he has never learned to see the world from another’s perspective, and thus hasn’t internalized any social morality because it doesn’t make sense to him...basically, people are objects to him, so why not kill them? His reason to murder someone is, let’s say, that they’re dating a girl he wants to date. Most non-sociopaths wouldn’t consider that a reason to murder anyone, but the reason why they wouldn’t is that they have an innate understanding that other people feel pain, and of the concept of fairness, etc., and were thus capable of learning more complex moral rules as well.
Sure, but I take myself to have moral reasons for this. I may feel this way because of my biology, but my biology is never itself a reason for me to do anything.
The way I see it, the biology aspect is both necessary and sufficient for this kind of behaviour. Someone without the requisite biology wouldn’t be a good parent or friend because they’d see no reason to make an effort (unless they were deliberately “faking it” to benefit from that person). And an ordinary human being raised with no exposure to moral rules, who isn’t taught anything about it explicitly, will still want to make their friends happy and do the best they can raising children. They may not be very good at it, but unless they’re downright abused/severely neglected, they won’t be evil.
When you look at something, are you aware of all that happening? Or do you just see it?
I just see it. I’m aware on some abstract level, but I never think about this when I see things, and I don’t take it into account when I confidently believe what I see.
“His reason to murder someone is because, let’s say, they’re dating a girl he wants to date. Most non-sociopaths wouldn’t consider that a reason to murder anyone”
I guess I’d disagree with the second claim, or at least I’d want to qualify it. Having a broken brain center is an inadmissible reason to kill someone. If that’s the only explanation someone could give (or that we could supply for them), then we wouldn’t even hold them responsible for their actions. But the fact that he’s dating your beloved really is a reason to kill someone. It’s a very bad reason, all things considered, but it is a reason. In this case, the killer would be held responsible.
“The way I see it, the biology aspect is both necessary and sufficient for this kind of behaviour. ”
Necessary, we agree. Sufficient is, I think, too much, especially if we’re relying on evolutionary explanations, which should never stand in without qualification for psychological, much less rational explanations. After all, I could come to hate my family if our relationship soured. This happens to many, many people who are not significantly different from me in this biological respect.
An ordinary human being raised with no exposure to moral rules is an extremely strange counterfactual: no person I have ever met, or ever heard of, is like this. I would probably say that there’s not really any sense in which they were ‘raised’ at all. Could they have friends? Is that so morally neutral an idea that one could learn it while learning nothing of loyalty? I really don’t think I can imagine a rational, language-using human adult who hasn’t been exposed to moral rules.
So the ‘necessity’ case is granted. We agree there. The ‘sufficiency’ case is very problematic. I don’t think you could even have learned a first language without being exposed to moral rules, and if you never learn any language, then you’re just not really a rational agent.
An ordinary human being raised with no exposure to moral rules is an extremely strange counterfactual: no person I have ever met, or ever heard of, is like this.
A weak example of this: someone from a society that doesn’t have any explicit moral rules, e.g. the ‘Ten Commandments.’ They might follow laws, but the laws aren’t explained as ‘A is the right thing to do’ or ‘B is wrong’. Strong version: someone whose parents never told them ‘don’t do that, that’s wrong/mean/bad/etc’ or ‘you should do this, because it’s the right thing/what good people do/etc.’ Someone raised in that context would probably be strange, and kind of undisciplined, and probably pretty thoughtless about the consequences of actions, and might include only a small number of people in their ‘circle of empathy’...but I don’t think they’d be incapable of having friends/being nice.
A weak example of this: someone from a society that doesn’t have any explicit moral rules, e.g. the ‘Ten Commandments.’ They might follow laws, but the laws aren’t explained as ‘A is the right thing to do’ or ‘B is wrong’.
I can see a case like this, but morality is a much broader idea than can be captured by a list of divine commands and similar such things. Even Christians, Jews, and Muslims would say that the ten commandments are just a sort of beginning, and not all on their own sufficient for morality.
Someone raised in that context would probably be strange, and kind of undisciplined, and probably pretty thoughtless about the consequences of actions, and might include only a small number of people in their ‘circle of empathy’...but I don’t think they’d be incapable of having friends/being nice.
Huh, we have pretty different intuitions about this: I have a hard time imagining how you’d even get a human being out of that situation. I mean, animals, even really crappy ones like rats, can be empathetic toward one another. But there’s no morality in a rat, and we would never think to praise or blame one for its behavior. Empathy itself is necessary for morality, but far from sufficient.
Lots of reasons. It’s pretty much built into the human brain that being nice to your friends and neighbours is helpful to long-term survival, so most people get pleasant feelings from doing something they consider ‘good’, and feel guilty after doing something they consider ‘bad’. You don’t need the Commandments themselves.
...Oh and the whole idea that it’s better to live in a society where everyone follows laws like “don’t murder”...even if you personally could benefit from murdering the people who you didn’t like, you don’t want everyone else murdering people too, and so it makes sense, as a society, to teach children that ‘murder is bad’.
Are these reasons to not kill people or steal? Can I propose a test? Suppose that it were built into the human brain that being cruel to your friends and neighbors is helpful to long-term survival (bear with me on the evolutionary implausibility of this), and so most people get pleasant feelings from doing things they consider cruel, and feel guilty after doing nice things.
Suppose all that were true: would you then have good reasons to be cruel? If not, then how are they reasons to be nice?
You would clearly have reasons; whether they are good reasons depends how you’re measuring “good”.
We might want to distinguish here between reasons to do something and reasons why one does something. So imagine we discover that the color green makes people want to compromise, so we paint a boardroom green. During a meeting, the chairperson decides to compromise. Even if the chairperson knows about the study, and is being affected by the green walls in a decisive way (such that the greenness of the walls is the reason why he or she compromises), could the chairperson take the greenness of the walls as a reason to compromise?
A reasonable distinction, but I don’t think it quite maps onto the issue at hand. You said to suppose “people get pleasant feelings from doing things they consider cruel, and feel guilty after doing nice things”. If one has a goal to feel pleasant feelings, and is structured in that manner, then that is reason to be cruel, not just reason why they would be cruel.
Agreed, but so much is packed into that ‘if’. We all seek pleasure, but not one of us believes it is an unqualified good. The implication of Swimmer’s post was that atheists have reasons to obey the ten commandments (well, 4 or 5 of them) comparable in formal terms to the reasons Christians have (God’ll burn me if I don’t, or whatever). That is, the claim seems to be that atheists can justify their actions. Now, if someone does something nice for me, and I ask her why she did that, she can reply with some facts about evolutionary biology. This might explain her behavior, but it doesn’t justify it.
If we imagine someone committing a murder and then telling us something about her (perhaps defective) neurobiology, we might take this to explain their behavior, but never to justify it. We would never say “Yeah, I guess now that you make those observations about your brain, it was reasonable of you to kill that guy.” The point is that the murderer hasn’t just given us a bad reason, she hasn’t given us a reason at all. We cannot call her rational if this is all she has.
I didn’t claim that, and if I implied it, it was by accident. (Although I do think that a lot of atheists have just as strong if not stronger reasons to obey certain moral rules, the examples I gave weren’t those examples.) I was trying to point out that if someone decides one day to stop believing in God, and realizes that this means God won’t smite them if they break one of the Ten Commandments, that doesn’t mean they’ll go out and murder someone. Their moral instincts, and the positive/negative reinforcement to obey them (i.e. pleasure or guilt), keep existing regardless of external laws.
So we ask her why, and she says “oh, he took the seat that I wanted on the bus three weeks in a row, and his humming is annoying, and he always copies my exams.” Which might not be a good reason to murder someone according to you, with your normal neurobiology–you would content yourself with fuming and making rude comments about him to your friends–but she considers it a good reason, because her mental ‘brakes’ are off.
Right, we agree on that. But if the apostate thereafter has no reason to regard themselves as morally responsible, then their moral behavior is no longer fully rational. They’re sort of going through the motions.
The question here isn’t about good vs. bad reasons, but about admissible vs. inadmissible reasons. Hearsay is often a bad reason to believe that Peter shot Paul, but it is a reason. It counts as evidence. If that’s all you have, then you’re not reasoning well, but you are reasoning. The number of planets orbiting the star furthest from the sun is not a reason to believe Peter shot Paul. It’s not that it’s a bad reason. It’s just totally inadmissible. If that’s all you have, then you’re not reasoning badly, you’re just not reasoning at all.
It’s a hard world to visualize, but if cruelty-tendencies evolved because people survived better by being cruel, then cruelty works in that world, and society would be dysfunctional if there were rules against it (imagine our world having rules against being nice, ever!), and to me, something being useful is a good reason to do it.
If we ever came across that species, no doubt we’d be appalled, but the universe isn’t appalled. Not unless you believe that morality exists in itself, independently of brains...which I don’t.
If there were an entire society built out of people like this, then probably quite a lot of minor day-to-day cruelty would go on, and there would be rationalized Laws, like the Ten Commandments, justifying why being cruel was so important, and there would be social customs and structures and etiquette involved in making sure the right kind of cruelty happened at the right times…
I’m not saying that our brain’s evolutionary capacity for empathy is the ultimate perfect moral theory. But I do think that all those moral theories, perfect or ultimate or not, exist because our brains evolved to have the little voice of empathy. Which means that if you take away the Ten Commandments, most people won’t stop being nice to people they care about.
(Being nice to strangers or members of an outgroup is a completely different matter...there seems to be a mechanism for turning off empathy towards groups of strangers, and plenty of societies have produced people who were very nice to their friends and neighbors, and barbaric towards everyone else.)
Most atheists don’t accept deontological moral theories–i.e. any theory that talks about a set of a priori rules of what’s right versus wrong. But morality doesn’t go away. If you reason it out starting from what our brains already tell us, you end up with utilitarian theories (“I like being happy, and I’m capable of empathy, so I think other people must like being happy too, and since my perfect world would be one where I was happy all the time, the perfect world for everyone would be one with maximum happiness.”)
Alternately you end up with Kantian theories (“I like being treated as an end, not a means, and empathy tells me other people are similar to me, so we should treat everyone as an end in themselves and never merely as a means… Oh, and Action X will make me happy, but if everyone else did Action X too, it would make me unhappy, and empathy tells me everyone else is about like me, so they wouldn’t want me to do X, so the best society is one in which no one does X.”) Etc.
If you don’t reason it out, you get “well, it made me happy when I helped Susan with her homework, and it made me feel bad when I said something mean to Rachel and she cried, so I should help people more and not be mean as much.” These feelings aren’t perfect, and there are lots of conflicting feelings, so people aren’t nice all the time...but the innate brain mechanisms are there even when there aren’t any laws, and the fact that they’re there is probably the reason why there are laws at all.
So we agree that one might have a reason to do something because it’s recommended by moral theories. What I’m questioning is whether or not you can have a reason to do something on the basis of brain mechanisms or if you can have reason to adopt a moral theory on the basis of brain mechanisms. And I don’t mean ‘good’ reasons, I mean admissible reasons.
Imagine someone thinking to themselves: ‘Well, my brain is structured in such and such a way as a result of evolution, so I think I’ll kill this completely innocent guy over here.’ Is he thinking rationally?
And concerning the adoption of a moral theory:
There’s a missing inference here from wanting to be happy to wanting other people to be happy. Can you explain how you think this argument gets filled out? As it stands, it’s not valid.
Likewise:
Why should the fact that other people want something motivate me? It doesn’t follow from the fact that my wanting something motivates me, that another person’s wanting that thing should motivate me. In both these arguments there’s a missing step which, I think, is pertinent to the problem above: the fact that I am motivated to X doesn’t even give me reason to X, much less a reason to pursue the desires of other people.
Beliefs don’t feel like beliefs, they feel like the way the world is. Likewise with brain structures. If someone is a sociopath (in short, their brain mechanism for empathy is broken) and they decide they want to kill someone for reasons X and Y, are they being any more irrational than someone who volunteers at a soup kitchen because seeing people smile when he hands them their food makes him feel fulfilled?
Sorry for not being clear. The inference is that “empathy”, the ability to step into someone else’s shoes and imagine being them, is an innate ability that most humans have, and it leads you to think that other people are like you...when they feel pleasure, it’s like your pleasure, and when they feel pain, it’s like your pain, and there’s a hypothetical world where you could have been them. I don’t think this hypothetical is something that’s taught by moral theories, because I remember reasoning with it as a child when I’d had basically no exposure to formal moral theories, only the standard “that wasn’t nice, you should apologize.” If you could have been them, you want the same things for them that you’d want for yourself.
I think this is immediately obvious for family members and friends...do you want your mother to be happy? Your children?
Perhaps on some level this is right, but the fact that I can assess the truth of my beliefs means that they don’t feel like the way the world is in an important respect. They feel like things that are true and false. The way the world is has no truth value. Very small children have problems with this distinction, but so far as I can tell almost all healthy adults do not believe that their beliefs are identical with the world. ETA: That sounded jerky. I didn’t intend any covert meanness, and please forgive any appearance of that.
I think I really don’t understand your question. Could you explain the idea behind this a little better? My objection was that there are reasons to do things, and reasons why we do things, and while all reasons to do things are also reasons why, there are reasons why that are not reasons to do things. For example, having a micro-stroke might be the reason why I drive my car over an embankment, but it’s not a reason to drive one’s car over an embankment. No rational person could say to themselves “Huh, I just had a micro-stroke. I guess that means I should drive over this embankment.”
Sure, but I take myself to have moral reasons for this. I may feel this way because of my biology, but my biology is never itself a reason for me to do anything.
Relevant LW post.
That post is in need of some serious editing: I genuinely couldn’t tell if it was on the whole agreeing with what I was saying or not.
I have a puzzle for you: suppose we lived in a universe which is entirely deterministic. From the present state of the universe, all future states could be computed. Would that mean that deliberation in which we try to come to a decision about what to do is meaningless, impossible, or somehow undermined? Or would this make no difference?
That post didn’t have a conclusion, because EY wanted to get much further into his Metaethics sequence before offering one.
It makes no difference. In fact, many-worlds is a deterministic universe; it just so happens there are different versions of future-you who experience/do different things, so it’s not “deterministic from your viewpoint”.
So I’d like to argue that it makes at least a little difference. When we engage in practical deliberation, when we think about what to do, we are thinking about what is possible and about ourselves as sources of what is possible. No one deliberates about the necessary, or about anything over which we have no control: we don’t deliberate about what the size of the sun should be, or whether or not modus tollens should be valid.
If we realize that the universe is deterministic, then we may still decide that we can deliberate, but we do now qualify this as a matter of ‘viewpoints’ or something like that. So the little difference this makes is in the way we qualify the idea of deliberation.
So do you agree that there is at least this little difference? Perhaps it is inconsequential, but it does mean that we learn something about what it means to deliberate when we learn we are living in a deterministic universe as opposed to one with a bunch of spontaneous free causes running around.
It all adds up to normality. Everything you do when making a decision is something a deterministic agent can do, and a deterministic agent that deliberates well will (on average) experience higher expected value than deterministic agents that deliberate poorly.
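To make that point concrete, here’s a minimal Python sketch (the gambles, payoffs, and agent names are invented for illustration): both agents below are fully deterministic, but the one that actually evaluates its options ends up with the higher expected value.

```python
# Each gamble is a list of (probability, payoff) pairs.
gambles = [
    [(0.5, 10), (0.5, -10)],  # fair coin flip, EV = 0
    [(0.9, 1), (0.1, 2)],     # near-certain small gain, EV = 1.1
    [(0.2, 50), (0.8, -5)],   # long shot, EV = 6
]

def expected_value(gamble):
    return sum(p * payoff for p, payoff in gamble)

def careful_agent(menu):
    # Deliberates: evaluates every option, picks the best.
    # Still perfectly deterministic -- same menu, same choice, every time.
    return max(menu, key=expected_value)

def careless_agent(menu):
    # Equally deterministic, but "deliberates poorly":
    # grabs the first option without evaluating anything.
    return menu[0]
```

Neither agent has anything like free will in the spooky sense, yet the difference in how they process the menu shows up directly in the value they can expect to walk away with.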
You’re getting closer to the sequence of posts that covers this in more detail, so I’ll just say that I endorse what’s said in this sequence.
What is normality exactly? It’s not the ideas and intuitions I came to the table with, unless the theory actually proposes to teach me nothing. My question is this: “what do I learn when I learn that the universe is deterministic?” Do I learn anything that has to do with deliberation? One reasonable answer (and one way to explain the normality point) would just be ‘no, it has nothing to do with action.’ But this would strike many people as odd, since we recognize in our deliberation a distinction between future events we can bring about or prevent, and future states we cannot bring about or prevent.
I find I have an extremely hard time understanding some of the arguments in that sequence, after several attempts. I would dearly love to have some of it explained in response to my questions. I find this argument in particular to be very confusing:
This argument (which reappears in the ‘timeless control’ article) seems to hang on a very weird idea of ‘changing the future’. No one I have ever talked to believes that they can literally change a future moment from having one property to having another, and that this change is distinct from a change that takes place over an extent of time. I certainly don’t see how anyone could take this as a way to treat the world as undetermined. This seems like very much a strawman view, born from an equivocation on the word ‘change’.
But I expect I am missing something (perhaps something revealed later on in the more technical stage of the article). Can you help me?
I meant that learning the universe is deterministic should not turn one into a fatalist who doesn’t care about making good decisions (which is the intuition that many people have about determinism), because goals and choices mean something even in a deterministic universe. As an analogy, note that all of the agents in my decision theory sequence are deterministic (with one kind-of exception: they can make a deterministic choice to adopt a mixed strategy), but some of them characteristically do better than others.
Regarding the “changing the future” idea, let’s think of what it means in the context of two deterministic computer programs playing chess. It is a fact that only one game actually gets played, but many alternate moves are explored in hypotheticals (within the programs) along the way. When one program decides to make a particular move, it’s not that “the future changed” (since someone with a faster computer could have predicted in advance what moves the programs make, the future is in that sense fixed), but rather that of all the hypothetical moves it explored, the program chose one according to a particular set of criteria. Other programs would have chosen other moves in those circumstances, which would have led to different games in the end.
When you or I are deciding what to do, the different hypothetical options all feel like they’re on an equal basis, because we haven’t figured out what to choose. That doesn’t mean that different possible futures are all real, and that all but one vanish when we make our decision. The hypothetical futures exist on our map, not in the territory; it may be that no version of you anywhere chooses option X, even though you considered it.
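The chess example can be sketched in a few lines of Python (the move names and scores are made up for illustration): the chooser is completely predictable from its code and inputs, yet it genuinely explores every hypothetical before committing to one, and different criteria lead different “programs” to different games.

```python
def choose(options, score):
    """Deterministic chooser: every option is evaluated as a hypothetical,
    but only one is ever taken. Someone running the same code on the same
    inputs could predict the choice in advance, so the future is fixed --
    yet the exploration is real computation, on the agent's map."""
    best, best_score = None, float("-inf")
    for option in options:
        s = score(option)  # a hypothetical future, evaluated but not realized
        if s > best_score:
            best, best_score = option, s
    return best

moves = ["advance pawn", "trade queens", "castle"]

# Two programs with different evaluation criteria, same position:
aggressive = choose(moves, score=lambda m:
                    {"advance pawn": 3, "trade queens": 1, "castle": 2}[m])
cautious = choose(moves, score=lambda m:
                  {"advance pawn": 1, "trade queens": 2, "castle": 3}[m])
```

The rejected moves never “vanish” from anywhere; they only ever existed inside the `score` calls.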
Does that make more sense?
A fair point, though I would be interested to hear how the algorithms described in DT relate to action (it can’t be that they describe action, since we needn’t act on the output of a DT, especially given that we’re often akratic). When the metaethics sequence, for all the trouble I have with its arguments, gets into an account of free will, I don’t generally find myself in disagreement. I’ve been looking over that and the physics sequences in the last couple of days, and I think I’ve found the point where I need to do some more reading: I think I just don’t believe either that the universe is timeless, or that it’s a block universe. So I should read Barbour’s book.
Thanks, by the way, for posting that DT series, and for answering my questions. Both have been very helpful.
It does, but I find myself, as I said, unable to grant the premise that statements about the future have truth value. I think I do just need to read up on this view of time.
You’re welcome!
Yeah, a human who consciously endorses a particular decision theory is not the same sort of agent as a simple algorithm that runs that decision theory. But that has more to do with the messy psychology of human beings than with decision theory in its abstract mathematical form.
OK, let me give you a better example. When you look at something, a lot of very complex hardware packed into your retina, optic nerve, and visual cortex, a lot of hard-won complexity optimized over millions of years, is going all out analyzing the data and presenting you with comprehensible shapes, colour, and movement, as well as helpfully recognizing objects for you. When you look at something, are you aware of all that happening? Or do you just see it?
(Disclaimer: if you’ve read a lot about neuroscience, it’s quite possible that sometimes you do think about your visual processing centres while you’re looking at something. But the average person wouldn’t, and the average person probably doesn’t think ‘well, there go my empathy centres again’ when they see an old lady having trouble with her grocery bag and feel a desire to help her.)
Okay, let’s try to unpack this. In my example, we have a sociopath who wants to murder someone. The reason why he wants to murder someone, when most people don’t, is that there’s a centre in his brain that’s broken, so he hasn’t learned to see the world from another’s perspective, and thus hasn’t internalized any social morality because it doesn’t make sense to him...basically, people are objects to him, so why not kill them. His reason to murder someone is that, let’s say, they’re dating a girl he wants to date. Most non-sociopaths wouldn’t consider that a reason to murder anyone, but the reason why they wouldn’t is because they have an innate understanding that other people feel pain, of the concept of fairness, etc, and were thus capable of learning more complex moral rules as well.
The way I see it, the biology aspect is both necessary and sufficient for this kind of behaviour. Someone without the requisite biology wouldn’t be a good parent or friend because they’d see no reason to make an effort (unless they were deliberately “faking it” to benefit from that person). And an ordinary human being raised with no exposure to moral rules, who isn’t taught anything about it explicitly, will still want to make their friends happy and do the best they can raising children. They may not be very good at it, but unless they’re downright abused/severely neglected, they won’t be evil.
I just see it. I’m aware on some abstract level, but I never think about this when I see things, and I don’t take it into account when I confidently believe what I see.
“His reason to murder someone is because, let’s say, they’re dating a girl he wants to date. Most non-sociopaths wouldn’t consider that a reason to murder anyone”
I guess I’d disagree with the second claim, or at least I’d want to qualify it. Having a broken brain centre is an inadmissible reason to kill someone. If that’s the only explanation someone could give (or that we could supply them) then we wouldn’t even hold them responsible for their actions. But the victim’s dating the girl you want really is a reason to kill him. It’s a very bad reason, all things considered, but it is a reason. In this case, the killer would be held responsible.
“The way I see it, the biology aspect is both necessary and sufficient for this kind of behaviour. ”
Necessary, we agree. Sufficient is, I think, too much, especially if we’re relying on evolutionary explanations, which should never stand in without qualification for psychological, much less rational explanations. After all, I could come to hate my family if our relationship soured. This happens to many, many people who are not significantly different from me in this biological respect.
An ordinary human being raised with no exposure to moral rules is an extremely strange counterfactual: no person I have ever met, or ever heard of, is like this. I would probably say that there’s not really any sense in which they were ‘raised’ at all. Could they have friends? Is that so morally neutral an idea that one could learn it while learning nothing of loyalty? I really don’t think I can imagine a rational, language-using human adult who hasn’t been exposed to moral rules.
So the ‘necessity’ case is granted. We agree there. The ‘sufficiency’ case is very problematic. I don’t think you could even have learned a first language without being exposed to moral rules, and if you never learn any language, then you’re just not really a rational agent.
A weak example of this: someone from a society that doesn’t have any explicit moral rules, i.e. ‘Ten Commandments.’ They might follow laws, but the laws aren’t explained ‘A is the right thing to do’ or ‘B is wrong’. Strong version: someone whose parents never told them ‘don’t do that, that’s wrong/mean/bad/etc’ or ‘you should do this, because it’s the right thing/what good people do/etc.’ Someone raised in that context would probably be strange, and kind of undisciplined, and probably pretty thoughtless about the consequences of actions, and might include only a small number of people in their ‘circle of empathy’...but I don’t think they’d be incapable of having friends/being nice.
I can see a case like this, but morality is a much broader idea than can be captured by a list of divine commands and similar such things. Even Christians, Jews, and Muslims would say that the ten commandments are just a sort of beginning, and not all on their own sufficient to be moral ideas.
Huh, we have pretty different intuitions about this: I have a hard time imagining how you’d even get a human being out of that situation. I mean, animals, even really crappy ones like rats, can be empathetic toward one another. But there’s no morality in a rat, and we would never think to praise or blame one for its behavior. Empathy itself is necessary for morality, but far from sufficient.