I am torn on virtue ethics.
On one level, it’s almost what a Bayesian calculation (taking “weird but harmless behaviour” as positive evidence of “weird and harmful”) would feel like from the inside, and in that respect I can see the value in virtue ethics (even though it strikes me as a mind projection fallacy: creating a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
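A toy version of the update I have in mind, with every number invented purely for illustration:

```python
# Toy Bayesian update: treating a "weird but harmless" act as evidence
# that someone is "weird and harmful". All numbers are invented for
# illustration only.

prior_harmful = 0.01          # P(harmful) before observing anything
p_weird_given_harmful = 0.50  # P(weird-but-harmless act | harmful)
p_weird_given_benign = 0.05   # P(weird-but-harmless act | benign)

# Bayes' rule: P(harmful | weird) = P(weird | harmful) P(harmful) / P(weird)
p_weird = (p_weird_given_harmful * prior_harmful
           + p_weird_given_benign * (1 - prior_harmful))
posterior_harmful = p_weird_given_harmful * prior_harmful / p_weird

print(f"P(harmful | weird act) = {posterior_harmful:.3f}")  # ~0.092, up from 0.010
```

The oddball act raises the estimate roughly ninefold even though the act itself was harmless; the felt sense of that update is, I suspect, what virtue-ethical suspicion of ‘weirdness’ feels like from the inside.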
But on another level, I can see it as a description of a sort of hard-coded irrationality that we have evolution to thank for. All things being equal, we prefer to associate with people who will never murder us rather than with people who will only murder us when it would be good to do so, because our personal calculation of ‘good’ includes a term for our own existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling’s Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.
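To make that preference concrete, here is a toy expected-value sketch; the probabilities and payoffs are invented for illustration:

```python
# Toy sketch: why a partner with a hard "never defect" commitment can be
# preferred over a partner who defects whenever the overall calculation
# favours it. All payoffs and probabilities are invented.

p_conflict = 0.10        # chance of a situation where the best outcome
                         # overall requires sacrificing me personally
payoff_normal = 10       # my payoff when our interests align
payoff_sacrificed = -100 # my payoff when sacrificed "for the greater good"

# Partner A: irrationally, compellingly committed never to defect against me.
ev_committed = payoff_normal

# Partner B: a pure overall-good calculator who defects in the conflict case.
ev_calculator = (1 - p_conflict) * payoff_normal + p_conflict * payoff_sacrificed

print(f"EV with committed partner:   {ev_committed}")   # 10
print(f"EV with calculating partner: {ev_calculator}")  # -1.0
```

Even a small chance of being the sacrifice makes the committed partner the better associate.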
So I am torn between lumping virtue ethics in with deontological ethics as “descriptions of human moral behaviour” and repairing it into a usable set of prescriptions for human moral behaviour.
(even though it strikes me as a mind projection fallacy: creating a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
Character just is a compressed representation of patterns of likely behavior and the algorithms generating them.
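A toy illustration of that compression (my framing, with invented situations and numbers, not the parent’s): one ‘character’ parameter generating a whole table of act likelihoods, instead of storing a likelihood for every possible act separately.

```python
# "Character" as a compressed generator of per-situation act probabilities.
# The situations and temptation values are invented for illustration.

TEMPTATION = {"return lost wallet": 0.30, "keep a secret": 0.10,
              "repay a debt": 0.20, "report own error": 0.15}

def p_act_honestly(honesty: float, situation: str) -> float:
    # One scalar trait expands into many predictions.
    return max(0.0, min(1.0, honesty - TEMPTATION[situation]))

alice_honesty = 0.9  # a single "character" number...
for situation in TEMPTATION:
    print(situation, p_act_honestly(alice_honesty, situation))  # ...many likelihoods
```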
All things being equal, we prefer to associate with people who will never murder us rather than with people who will only murder us when it would be good to do so, because our personal calculation of ‘good’ includes a term for our own existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling’s Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.
This connotes that wanting others to self-bind comes from unvirtuous selfishness, which seems like the wrong connotation to apply to a phenomenon that enables very general and large Pareto improvements (yay!).
In particular (not maximally relevant in this conversation, but particularly important for LW), among fallible (including selfishly biased in their beliefs) agents that wish to pursue common non-indexical values, self-binding to cooperate in the epistemic prisoner’s dilemma enables greater group success than a war of all against all who disagree, or mere refusal to cooperate given strategic disagreement.
As to “irrational” (and, come to think of it, also cooperation), see Bayesians vs. Barbarians.
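A minimal sketch of the self-binding point, using a generic prisoner’s-dilemma payoff matrix (the numbers are textbook ones, not anything from this discussion):

```python
# Two agents who pre-commit to cooperate despite disagreement do better as
# a group than two who each defect when they think the other is wrong.
# Standard illustrative prisoner's-dilemma payoffs.

# (my_payoff, their_payoff) indexed by (my_move, their_move)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def group_payoff(my_move: str, their_move: str) -> int:
    mine, theirs = PAYOFFS[(my_move, their_move)]
    return mine + theirs

print(group_payoff("cooperate", "cooperate"))  # 6: both self-bind to cooperate
print(group_payoff("defect", "defect"))        # 2: war of all against all
```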
So I am torn between lumping virtue ethics in with deontological ethics as “descriptions of human moral behaviour” and repairing it into a usable set of prescriptions for human moral behaviour.
Why not do both? Treat naive virtue ethics as a description of human moralizing verbal behavior, and treat the virtue-ethical things people do as human game-theoretic behavior. Then, because behaviors tend to have interesting and not completely insane causes, look for any good reasons for these behaviors that you aren’t already aware of, and craft a set of prescriptions from them.
(even though it strikes me as a mind projection fallacy: creating a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
It’s not a fallacy if the thing you’re projecting onto is an actual human with an actual human mind. Another way to see this is as using the priors that evolution has given you on how humans tend to behave.
But on another level, I can see it as a description of a sort of hard-coded irrationality that we have evolution to thank for. All things being equal, we prefer to associate with people who will never murder us rather than with people who will only murder us when it would be good to do so, because our personal calculation of ‘good’ includes a term for our own existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling’s Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.
The definition of “rational” you’re using in that paragraph has the problem that it will cause you to regret your rationality. If having an “irrational” commitment helps you be more trusted and thus achieve your goals, it’s not irrational. See the articles about decision theory for more details on this.
It’s not a fallacy if the thing you’re projecting onto is an actual human with an actual human mind. Another way to see this is as using the priors that evolution has given you on how humans tend to behave.
That only works if you’re (a) not running into cultural differences and (b) not dealing with someone who has major neurological differences. Using your default priors on “how humans work” to handle an autistic or a schizophrenic is probably going to produce sub-par results. Same if you assume that “homosexuality is wrong” or “steak is delicious” is culturally universal.
It’s unlikely that you’ll run into someone who prioritizes prime-sized stacks of pebbles, but it’s entirely likely you’ll run into people who think eating meat is wrong, or that gay marriage ought to be legalized :)
Using your default priors on “how humans work” to handle an autistic or a schizophrenic is probably going to produce sub-par results.
They’re going to produce the result that this human’s brain is wired strangely and thus he’s liable to exhibit other strange and likely negative behaviors. Which is more-or-less accurate.
Why on Earth is this comment getting downvoted?
Because his comment is evidence for the hypothesis that he has a divergent neurology from mine, and is therefore liable to exhibit negative behaviors :P
My guess is it’s in response to the phrase “negative behaviors” describing a non-neurotypical person’s behavior.
Indeed, and it probably needs to be emphasized that nations are not monocultures. Americans reading mainly utilitarian blogs and Americans reading mainly deontologist blogs live in different cultures, for instance. (To say nothing of Americans reading atheist blogs and Americans reading fundamentalist blogs, let alone Americans reading any kind of blog and Americans who don’t read, period.)
(even though it strikes me as a mind projection fallacy: creating a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
This may be part of the reason many virtue-ethical theories are prescriptions for what one should do oneself, and usually disapprove of applying them to other humans. On this level, it’s poor for predicting, but wonderful for meaningful signalling of cooperative intent. I tend to consider virtue ethics as my low-level compressed version of consequentialist morality; it gives me the ability to take, in snap situations, the actions I’d want to take for consequentialist reasons.