(even though it strikes me as a mind-projection issue: constructing a person’s ethical ‘character’ when all you need is the likelihood of them performing this act or that).
Character just is a compressed representation of patterns of likely behavior and the algorithms generating them.
All things being equal, we prefer to associate with people who will never murder us over people who will murder us only when it would be good to do so, because we personally calculate “good” with a term for our own existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling’s The Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.
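To make that preference concrete, here is a minimal expected-value sketch in Python. It is only an illustration of the quoted reasoning, not anyone’s actual decision procedure: the payoff numbers and the probability that betraying me is ever “best overall” are invented for the example.

```python
# Toy numbers, purely illustrative: my expected payoff from associating with
# each kind of agent, given a small chance that harming me maximizes the
# aggregate good.

P_BETRAYAL_IS_OPTIMAL = 0.05   # chance a situation arises where betraying me is "best overall" (made-up)
VALUE_OF_PARTNERSHIP = 10.0    # my payoff from an ordinary, non-betrayal interaction (made-up)
COST_OF_BETRAYAL = -100.0      # my payoff if my associate betrays me (made-up)

# Agent with an unconditional commitment: never betrays, whatever the aggregate calculus says.
ev_committed = VALUE_OF_PARTNERSHIP

# Act-consequentialist agent: betrays me exactly when that maximizes the overall good.
ev_consequentialist = (
    (1 - P_BETRAYAL_IS_OPTIMAL) * VALUE_OF_PARTNERSHIP
    + P_BETRAYAL_IS_OPTIMAL * COST_OF_BETRAYAL
)

print(f"EV of associating with the committed agent:        {ev_committed:.2f}")
print(f"EV of associating with the consequentialist agent: {ev_consequentialist:.2f}")
# My own welfare carries a large term in *my* calculation but only a small term
# in the aggregate one, so I prefer the committed agent even if the
# consequentialist produces better outcomes "overall".
```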
The quoted framing connotes that wanting others to self-bind comes from unvirtuous selfishness, which seems like the wrong connotation to apply to a phenomenon that enables very general and large Pareto improvements (yay!).
In particular (not maximally relevant to this conversation, but especially important for LW), among fallible agents (including agents selfishly biased in their beliefs) that wish to pursue common non-indexical values, self-binding to cooperate in the epistemic prisoner’s dilemma enables greater group success than a war of all against all who disagree, or mere refusal to cooperate given strategic disagreement.
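For flavor, here is a deliberately stylized sketch of that claim in Python. It is not a faithful rendering of the full epistemic prisoner’s dilemma; the agent count, noise level, and squared-error loss are all invented for illustration. Agents who self-bind to act on the pooled estimate do better as a group than agents who each refuse to budge from their own view.

```python
import random

# Stylized setup: agents share a common, non-indexical goal of acting on an
# accurate estimate of some quantity, but each agent's belief is noisy and
# each is confident the others are the ones who are wrong.

random.seed(0)

TRUTH = 0.0
N_AGENTS = 20
NOISE = 1.0          # standard deviation of each agent's error (made-up)
N_TRIALS = 2000

def group_loss(estimates_acted_on):
    """Total squared error across the group (lower is better)."""
    return sum((e - TRUTH) ** 2 for e in estimates_acted_on)

solo_loss = 0.0
pooled_loss = 0.0
for _ in range(N_TRIALS):
    beliefs = [random.gauss(TRUTH, NOISE) for _ in range(N_AGENTS)]

    # Refusal to cooperate given disagreement: everyone acts on their own belief.
    solo_loss += group_loss(beliefs)

    # Self-binding to cooperate: everyone commits in advance to act on the
    # pooled (averaged) estimate, even though each thinks the others are wrong.
    pooled = sum(beliefs) / N_AGENTS
    pooled_loss += group_loss([pooled] * N_AGENTS)

print(f"Average group loss, everyone acts alone:       {solo_loss / N_TRIALS:.2f}")
print(f"Average group loss, self-bound to pool views:  {pooled_loss / N_TRIALS:.2f}")
# Pooling cuts the per-agent error variance by roughly a factor of N_AGENTS,
# so the self-bound cooperators do much better as a group.
```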
So I am torn between lumping virtue ethics in with deontological ethics as “descriptions of human moral behaviour” and repairing it into a usable set of prescriptions for human moral behaviour.
Why not do both? Treat naive virtue ethics as a description of humans’ moralizing verbal behavior, treat the virtue-ethical things people actually do as game-theoretic behavior, and, since behaviors tend to have interesting and not completely insane causes, look for good reasons behind those behaviors that you aren’t already aware of, then craft a set of prescriptions from them.
As to “irrational” (and, come to think of it, also cooperation), see Bayesians vs. Barbarians.