Thanks for writing this. I would object to calling a decision theory an “algorithm”, though, since it doesn’t actually specify how to carry out the computation, and in practice the computations implied by most decision theories are completely infeasible (for instance, the chess decision theory requires a full search of the game tree).
Of course, it would be much more satisfying and useful if decision theories actually were algorithms, and I would be very interested to see any that achieve this or move in that direction.
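As a rough illustration of the gap, here is what that kind of decision theory looks like when taken literally as an algorithm: exhaustive minimax over the full game tree. The toy game below (a heap of items; players alternately remove 1 or 2; whoever takes the last item wins) is my own example, chosen because full search is trivial there, whereas the same computation for chess would never terminate in practice.

```python
# A decision theory taken literally as an algorithm: exhaustive minimax
# over the full game tree. Correct in principle; hopeless for chess.
# Toy game: a heap of n items, players alternately remove 1 or 2,
# and whoever takes the last item wins.

def minimax(heap, maximizing):
    """Game-theoretic value of the position for the maximizing player."""
    if heap == 0:
        # The previous mover took the last item and won.
        return -1 if maximizing else 1
    values = [minimax(heap - take, not maximizing)
              for take in (1, 2) if take <= heap]
    return max(values) if maximizing else min(values)

def best_move(heap):
    """The decision theory's prescription: the minimax-optimal move."""
    return max((take for take in (1, 2) if take <= heap),
               key=lambda take: minimax(heap - take, False))

print(best_move(5))  # -> 2: leave the opponent a multiple of 3
```

Any practical game-playing program replaces the full search with heuristics and pruning, and that is exactly the part the decision theory leaves unspecified.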
One answer is that if we feed in what-we-want into an advanced decision theory, then just as cooperation emerges in the Prisoner’s Dilemma, many kinds of patterns that we take as basic moral rules emerge as the equilibrium behavior. The idea is developed more substantially in Gary Drescher’s Good and Real, and (before there was a candidate for an advanced decision theory) in Douglas Hofstadter’s concept of superrationality.
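For concreteness, here is a minimal sketch of the Prisoner’s Dilemma case, in the Hofstadter superrationality style rather than as a formalization of any of the advanced theories, and with illustrative payoff numbers of my own: if both players are known to run the same decision procedure, their moves must match, so only the symmetric outcomes are reachable, and maximizing over those selects mutual cooperation.

```python
# Hofstadter-style superrationality in a one-shot Prisoner's Dilemma.
# Payoff numbers are illustrative, not from the post.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def dominant_choice():
    """Classical reasoning: return the move that is strictly better
    against every opponent move, if one exists (here, 'D')."""
    for me in "CD":
        others = [m for m in "CD" if m != me]
        if all(PAYOFF[(me, them)] > PAYOFF[(other, them)]
               for them in "CD" for other in others):
            return me

def superrational_choice():
    """Both agents run this same procedure, so their moves must match;
    only (C, C) and (D, D) are reachable, and (C, C) pays more."""
    return max("CD", key=lambda me: PAYOFF[(me, me)])

print(dominant_choice(), superrational_choice())  # -> D C
```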
This reasoning strikes me as somewhat odd. Even if it turned out that these patterns don’t emerge at all, we would still distinguish “what-we-want” from “what-is-right”.
True. The speculation is that what-we-want, when processed through advanced decision theory, comes out as a good match for our intuitions on what-is-right, and this would serve as a legitimate reductionistic grounding of metaethics. If it turned out not to match, we’d have to look for other ways to ground metaethics.
Or perhaps we’d have to stop taking our intuitions on what-is-right at face value.
Or that, yes.
I wish you’d stop saying “advanced decision theory”, as it’s currently way too infantile to be called “advanced”...
I want a term to distinguish the decision theories (TDT, UDT, ADT) that satisfy conditions 1-5 above. I’m open to suggestions.
Actually, hang on, I’ll make a quick Discussion post.