You have a good encapsulation of what I’m trying to say, yes.
I’m not arguing against “all moral reasoning from scratch,” however; I would regard that as a strawman representation of rational ethics. (It was difficult to wholly avoid the appearance of arguing against morality from scratch while establishing that rationality is not always rational, and trying to establish the same in ethics, so I suspect I failed to some extent there, in particular in the bit about the reasons for adopting rational ethics.)
My focus, although it might not have been plain, was primarily on day-to-day decisions; most people might encounter one or two serious Moral Questions in their entire -lives-: whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions: don’t shoplift that candy bar, don’t drink yourself into a stupor, don’t cheat on your math test.
For most people, a rational ethics system costs far more than it provides in benefits. For a few people, it doesn’t: either because they (like me) enjoy the act of calculation itself, or because they (say, a priest, or a counselor) are in a position where they regularly encounter such Moral Questions and must be capable of answering them adequately. We are, in fact, a -part- of society; relying on society therefore doesn’t mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others and evaluating the results (listening to the arguments), a considerably cheaper operation.
most people might encounter one or two serious Moral Questions in their entire -lives-: whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions: don’t shoplift that candy bar, don’t drink yourself into a stupor, don’t cheat on your math test.
Agree.
For most people, a rational ethics system costs far more than it provides in benefits.
I don’t think this follows. Calculating every decision costs far more than it provides in benefits, sure. But having a moral system for when serious questions do arise is definitely worth it, and I think they arise more often than you realize (donating to effective/efficient charity, choosing a career, supporting/opposing gay marriage, abortion, or universal health care).
We are, in fact, a -part- of society; relying on society therefore doesn’t mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others and evaluating the results (listening to the arguments), a considerably cheaper operation.
So are you saying that you agree people ought to spend time considering arguments for various moral systems, but that they shouldn’t all bother with metaethics? Agreed. Or are you saying they shouldn’t bother with thinking about “morality” at all, and should just consider the arguments for and against (for example) abortion independent of a bigger system?
And one note: I think you’re misusing “rational”. Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You’re only getting the counterintuitive result “rationality is not always rational” because you’re treating “rational” as synonymous with “logical” or “optimized” or “thought-through”.
I think you could improve the post, and make your point clearer, by replacing “rational” with one of these words.
“And one note: I think you’re misusing “rational”. Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You’re only getting the counterintuitive result “rationality is not always rational” because you’re treating “rational” as synonymous with “logical” or “optimized” or “thought-through”.”
I think this encapsulates our disagreement.
First, I challenge you to define rationality while excluding those mechanisms. No, I don’t, really; just consider how you would do it.
Can we define rationality as “a good decision-making process”? (Borrowing from http://lesswrong.com/lw/20p/what_is_rationality/ )
I think the disconnect is in whether we consider the problem as one decision or as two discrete decisions. “A witch did it” is not a rational explanation for something, I hope we can agree, and I hope I established that one can rationally choose to believe this, even though it is an irrational belief.
The first decision is about which decision-making process to use. “Blame the witch” is not a good process; it’s not a process at all. But when the decision is unimportant, it may be better to use a bad decision-making process than a good one.
Given two decisions, the first about which decision-making process to use and the second being the actual decision, you can in fact use a good decision-making process to conclude (rationally) that a bad decision-making process (an irrational one) is sufficient for a particular task.
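(To make the cost-benefit structure of that concrete, here’s a toy sketch in Python; it’s purely my own illustration, and the function name and all the numbers are invented:)

    # Toy model of the two-level decision: a careful first decision about
    # whether the expensive, careful process is worth running on the
    # actual decision. All names and numbers are invented for illustration.

    def worth_deliberating(stakes, error_cheap, error_careful, deliberation_cost):
        # Expected value gained by deliberating carefully rather than
        # falling back on the cheap heuristic (e.g. societal ethics).
        expected_gain = stakes * (error_cheap - error_careful)
        return expected_gain > deliberation_cost

    # Shoplifting the candy bar: trivial stakes, the heuristic rarely errs.
    print(worth_deliberating(stakes=5, error_cheap=0.01,
                             error_careful=0.001, deliberation_cost=1))
    # -> False: rationally conclude the cheap heuristic is sufficient.

    # Grandma's life support: stakes high enough to justify the calculation.
    print(worth_deliberating(stakes=100000, error_cheap=0.3,
                             error_careful=0.05, deliberation_cost=100))
    # -> True

The point is only that the first-level comparison can be done cheaply, even when running the careful process on the second level would not be.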
For your examples, picking one to address specifically, I’d suggest that it is ultimately unimportant on an individual basis to most people whether or not to support universal health care; their individual support or lack thereof has almost no effect on whether or not it is implemented. Similarly with abortion and gay marriage.
For effective charities, this decision-making process can be outsourced pretty effectively to somebody who shares your values; most people are religious, and their preacher may make recommendations, for example.
I’m not certain I would consider career choice an ethical decision, per se; I regard it as a case where rationality has a high payoff in almost any circumstance, however, and so agree with you there, even if I disagree with its usefulness as a counterexample for the purposes of this debate.
Instrumental rationality is doing whatever has the best expected outcome. So spending a ton of time thinking about metaethics may or may not be instrumentally rational, but saying “thinking rationally about metaethics is not rational” is using the word in two different ways, and is the reason your post is so confusing to me.
On your example of a witch, I don’t actually see why believing that would be rational. But if you take a more straightforward example, say, “Not knowing that your boss is engaging in insider trading, and not looking, could be rational,” then I agree. You might rationally choose not to check whether a belief is false.
Why is it necessary to muddy the waters by saying “You might rationally have an irrational belief”?
you can in fact use a good decision-making process to conclude (rationally) that a bad decision-making process (an irrational one) is sufficient for a particular task.
Of course. You can decide that learning something has negative expected consequences, and choose not to learn it. Or decide that learning it would have positive expected consequences, but that the value of information is low. Why use the “rational” and “irrational” labels?
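(A toy version of that value-of-information comparison, again with invented numbers:)

    # Toy value-of-information calculation; the numbers are invented.
    p_wrong = 0.05           # chance your current belief is wrong
    loss_if_wrong = 10.0     # cost of acting on the wrong belief
    cost_of_checking = 2.0   # effort required to verify the belief

    # Checking is worth it only when the expected loss it prevents
    # exceeds its cost.
    value_of_information = p_wrong * loss_if_wrong   # 0.5
    print(value_of_information > cost_of_checking)   # False: don't check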
Something like half of women will consider an abortion; their support or lack thereof has an enormous impact on whether that particular abortion is implemented. And if you’re proposing this as a general policy, the relevant question is whether people adopting your heuristic is good overall, which makes the question of whether any given one of them can impact politics less relevant. If lots of people adopt your heuristic, it matters.
For effective charities, everyone who gives to the religious organization selected by their church is orders of magnitude less effective than they could be. Thinking for themselves would allow them to save hundreds of lives over their lifetime.