So are categorical reasons any worse off than categorical oughts?
utilitymonster
I can see that you might question the usefulness of the notion of a “reason for action” as something over and above the notion of “ought”, but I don’t see a better case for thinking that “reason for action” is confused.
The main worry here seems to have to do with categorical reasons for action. Diagnostic question: are these more troubling/confused than categorical “ought” statements? If so, why?
Perhaps I should note that philosophers talking this way make a distinction between “motivating reasons” and “normative reasons”. A normative reason to do A is a good reason to do A, something that would help explain why you ought to do A, or something that counts in favor of doing A. A motivating reason just helps explain why someone did, in fact, do A. One of my motivating reasons for killing my mother might be to prevent her from being happy. By saying this, I do not suggest that this is a normative reason to kill my mother. It could also be that R would be a normative reason for me to do A, but R does not motivate me to do A. (ata seems to assume otherwise, since ata is getting caught up with who these considerations would motivate. Whether reasons could work like this is a matter of philosophical controversy. Saying this more for others than for you, Luke.)
Back to the main point, I am puzzled largely because the most natural ways of getting categorical oughts can get you categorical reasons. Example: simple total utilitarianism. On this view, R is a reason to do A if R is the fact that doing A would cause someone’s well-being to increase. The strength of R is the extent to which that person’s well-being increases. One weighs one’s reasons by adding up all of their strengths. One then does the thing that one has most reason to do. (It’s pretty clear in this case that the notion of a reason plays an inessential role in the theory. We can get by just fine with well-being, ought, causal notions, and addition.)
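To make that concrete, here is a minimal sketch of the weighing procedure; the function names and the toy well-being numbers are my own illustrations, not part of the theory:

```python
# Toy sketch of simple total-utilitarian weighing.
# A reason to do A is a per-person well-being change that A would cause;
# its strength is the size of the change. Numbers are illustrative only.

def weigh(effects):
    # Weighing the reasons is just adding up their strengths.
    return sum(effects)

def ought(options):
    # One ought to do whatever one has most reason to do.
    return max(options, key=lambda action: weigh(options[action]))

# Hypothetical case: two actions, each affecting three people.
options = {
    "donate": [+5, +3, -1],
    "keep":   [+2,  0,  0],
}
print(ought(options))  # -> "donate"
```

As the parenthetical says, the “reasons” here are just the individual well-being changes; the sketch never needs them as a separate notion.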
Utilitarianism, as always, is a simple case. But it seems like many categorical oughts can be thought of as being determined by weighing factors that count in favor of, and against, the course of action in question. In these cases, we should be able to do something like what we did for utilitarianism (though sometimes the method of weighing the reasons will be different or more complicated; in some bad cases, this might make the detour through reasons pointless).
The reasons framework seems a bit more natural in non-consequentialist cases. Imagine I try to maximize aggregate well-being, but I hate lying to do it. I might count the fact that an action would involve lying as a reason not to do it, but not believe that my lying makes the world worse. To get oughts out of a utility function instead, you might model my utility function as the result of adding up aggregate well-being and subtracting a factor that scales with the number of lies I would have to tell if I took the action in question. Again, it’s pretty clear that you don’t HAVE to think about things this way, but it is far from clear that this is confused/incoherent.
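A minimal sketch of that second modeling move, just to show it is coherent; the penalty weight and the numbers are assumptions of mine, not anyone’s actual values:

```python
# The non-consequentialist case above, recast as a utility function:
# aggregate well-being minus a penalty that scales with the number of
# lies the action requires. LIE_PENALTY is an illustrative assumption.

LIE_PENALTY = 10.0

def utility(wellbeing, lies):
    return wellbeing - LIE_PENALTY * lies

def choose(options):
    # options maps each action to (aggregate well-being, lies required)
    return max(options, key=lambda a: utility(*options[a]))

# Hypothetical: lying buys a bit more total well-being, but not enough
# to outweigh how heavily I count lying against an action.
options = {
    "tell the truth": (100.0, 0),
    "lie":            (105.0, 1),
}
print(choose(options))  # -> "tell the truth"
```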
Perhaps the LW crowd is perplexed because people here take utility functions as primitive, whereas philosophers talking this way tend to take reasons as primitive and derive ought statements (and, on a very lucky day, utility functions) from them. This paper, which tries to help the reasons folks and the utility-function folks understand and communicate with each other, might be helpful for anyone who cares much about this. My impression is that we clearly need utility functions, but don’t necessarily need the reasons talk. The main advantage of getting up to speed on the reasons talk would be being able to understand philosophers who talk that way, if that’s important to you. (Much of the recent work in meta-ethics relies heavily on the notion of a normative reason, as I’m sure Luke knows.)
I’m sort of surprised by how people are taking the notion of “reason for action”. Isn’t this a familiar process when making a decision?
For all courses of action you’re thinking of taking, identify the features (consequences, if that’s how you think about things) that count in favor of taking that course of action and those that count against it.
Consider how those considerations weigh against each other. (Do the pros outweigh the cons, by how much, etc.)
Then choose the thing that does best in this weighing process.
The same thing can be a reason for action, a reason for inaction, a reason for belief and a reason for disbelief all at once, in different contexts depending on what consequences these things will have. This makes me think that “reason for action” does not carve reality, or morality, at the joints.
It is not a presupposition of the people talking this way that if R is a reason to do A in a context C, then R is a reason to do A in all contexts.
The people talking this way also understand that a single R might be both a reason to do A and a reason to believe X at the same time. You could also have R be a reason to believe X and a reason to cause yourself not to believe X. Why do you think these things make the discourse incoherent or non-perspicuous? This seems no more puzzling than the familiar fact that a certain belief could be epistemically irrational but prudentially rational to hold (or to cause yourself to hold).
Even if we grant that one’s meta-ethical position will determine one’s normative theory (which is very contentious), one would like some evidence that it would be easier to find the correct meta-ethical view than it would be to find the correct (or appropriate, or whatever) normative ethical view. Otherwise, why not just do normative ethics?
Yes, this is what I thought EY’s theory was. EY? Is this your view?
On the symbolic action point, you can try making the symbolic action into a public commitment. Research suggests this will increase the strength of the effect you’re talking about. Of course, this could also make you overcommit, so this strategy should be used carefully.
Especially if WBE comes late (so there is a big hardware overhang), you wouldn’t need a lot of time to spend loads of subjective years designing FAI. A small lead time could be enough. Of course, you’d have to be first and have significant influence on the project.
Edited for spelling.
Don’t forget about the ridiculous levels of teaching you’re responsible for in that situation. Lots worse than at an elite institution.
I thought this was really, really good.
Yep, good idea.
Enjoyed most of this, some worries about how far you’re getting with point 8 (on giving now rather than later).
Give now (rather than later) - I’ve seen fascinating arguments that it might be possible to do more good by investing your money in the stock market for a long period of time and then giving all the proceeds to charity later. It’s an interesting strategy but it has a number of limitations. To name just two: 1) Not contributing to charity each year prevents you from taking advantage of the best tax planning strategy available to you. That tax-break is free money. You should take free money.
If you are worried about this you could start a donor advised fund for yourself.
2) Non-profit organizations can have endowments and those endowments can invest in securities just like individuals. So if long term-investment in the stock market were really a superior strategy, the charity you’re intending to give your money to could do the exact same thing. They could tuck all your annual contributions away in a big fat, tax-free fund to earn market returns until they were ready to unleash a massive bundle of money just like you would have. If they aren’t doing this already, it’s probably because the problem they’re trying to solve is compounding faster than the stock market compounds interest.
These assumptions about the motivations of people running non-profits seem too rosy. Most organizations seem to have a heavy bias toward the near. Maybe the best don’t, but I’d like to see more evidence.
Diseases spread, poverty is passed down, existential risk increases.
There is a very relevant point here, but, unfortunately, we aren’t given enough evidence to decide whether this outweighs the reasons to wait.
Do we want x-risk explicitly mentioned without explanation if this is for the contest?
Giving What We Can does not accept donations. Just give it all to Deworm the World.
Would like to see it.
Some wisdom on warm fuzzies: http://www.pbfcomics.com/?cid=PBF162-Executive_Decision.jpg
[Not a quote, but doesn’t seem suitable for a discussion article.]
My reaction is that moral philosophy just isn’t science. Sure, if you’re a utilitarian you can use empirical evidence to figure out what maximizes aggregate welfare, relative to your account of well-being, but you can’t use science to discover that utilitarianism is true. This is because utilitarianism, like any other first-order normative theory and many meta-ethical theories, doesn’t lead you to expect any particular experiences over others.
Thanks for writing this, Carl. I’m going to post a link in the GWWC forum.
Here are some papers you should add to your bibliography, if you haven’t already:
What is the Probability Your Vote Will Make a Difference?
Voting as a Rational Choice
In the first paper, his probability estimate is 1 in 60 million on average for a voter in a US presidential election, 1 in 10 million in the best cases (New Mexico, Virginia, New Hampshire, and Colorado).
If you focused on the best cases, your vote’s chance of being decisive would be roughly six times higher than the average, close to an order of magnitude more expected impact.
On this point, it is noteworthy that international health aid eradicated smallpox. According to Toby Ord, it is estimated that this has prevented over 100 million deaths, which is more than the total number of people who died in all wars in the 20th century. Even if you assumed that all the rest of international health aid achieved nothing at all, this single effort would make the average number of dollars per DALY achieved by international health aid better than what the British Government achieves.
Still don’t get it. Let’s say cards are being put in front of my face, and all I’m getting is their color. I can reliably distinguish the colors here: http://www.webspresso.com/color.htm. How do I associate a sequence of cards with a string? It doesn’t seem like there is any canonical way of doing this. Maybe it won’t matter that much in the end, but are there better and worse ways of starting?
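To make the worry concrete, here is one arbitrary encoding among many; the color palette and its ordering are my own hypothetical choices, which is exactly the problem:

```python
# One arbitrary way to turn a sequence of card colors into a string:
# fix an ordering of the distinguishable colors and read each card as
# a digit in that base. A different palette or ordering would give a
# different, equally defensible encoding.

COLORS = ["red", "orange", "yellow", "green", "blue", "violet"]  # hypothetical palette

def cards_to_string(cards):
    digit = {color: str(i) for i, color in enumerate(COLORS)}
    return "".join(digit[c] for c in cards)

print(cards_to_string(["blue", "red", "green"]))  # -> "403"
```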
Ok, but how?
R is a categorical reason for S to do A iff R counts in favor of doing A for S, and would so count for other agents in a similar situation, regardless of their preferences. If it were true that we always have reasons to benefit others, regardless of what we care about, those would be categorical reasons. I don’t use the term “categorical reason” any differently than “external reason”.
S categorically ought to do A just when S ought to do A regardless of what S cares about, and it would still be true, in similar situations, that S ought to do A regardless of what S cares about. The rule “always maximize happiness” would, if true, ground a categorical ought.
I see very little reason to be more skeptical of categorical reasons than of categorical oughts, or vice versa.