“Implicit in this metaphor is the localization of personal identity primarily in the system 2 rider. Imagine reversing that, so that the experience and behaviour you identify with are primarily driven by your system 1, with a system 2 that is mostly a Hansonian rationalization engine on top (one which occasionally also does useful maths). Does this shift your intuitions about the ideas above, e.g. by making your CEV feel less well-defined?”
I find this very interesting because locating personal identity in system 1 feels conceptually impossible, or at least deeply confusing. No matter how much rationalization goes on, it never seems intuitive to identify myself with system 1. How can you identify with the part of yourself that isn’t doing the explicit thinking, including the thinking about which part of yourself to identify with? It reminds me of Nagel’s The Last Word: this doesn’t feel like an empirical question to me.
Perhaps this just means that I have a very deep ‘realism about rationality’ assumption. I also think that the existing philosophy literature on realism about practical reasons is relevant here. I think realism about rationality and about ‘practical reasons’ are the same thing.
If this ‘realism about rationality’ really is rather like “realism about epistemic reasons/‘epistemic facts’”, then you have the ‘normative web argument’ to contend with—if you are a moral antirealist. Convergence and ‘Dutch book’ type arguments often appear in more recent metaethics, and the similarity has been noted, leading to arguments such as these:
These and other points of analogy between the moral and epistemic domains might well invite the suspicion that the respective prospects of realism and anti-realism in the two domains are not mutually independent, that what is most plausibly true of the one is likewise most plausibly true of the other. This suspicion is developed in Cuneo’s “core argument” which runs as follows (p. 6):
(1) If moral facts do not exist, then epistemic facts do not exist.
(2) Epistemic facts exist.
(3) So moral facts exist.
(4) If moral facts exist, then moral realism is true.
(5) So moral realism is true.
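Whatever one makes of the premises, the core argument is formally valid. As an illustration, its propositional skeleton can be checked mechanically—the proposition names below are my own labels, and the proof uses classical reasoning (contraposing premise (1) against (2) to obtain (3)):

```lean
-- Propositional skeleton of Cuneo's core argument (labels are mine).
variable (MoralFacts EpistemicFacts MoralRealism : Prop)

-- (1) ¬MoralFacts → ¬EpistemicFacts, (2) EpistemicFacts,
-- (4) MoralFacts → MoralRealism  ⊢  (5) MoralRealism
example
    (h1 : ¬MoralFacts → ¬EpistemicFacts)
    (h2 : EpistemicFacts)
    (h4 : MoralFacts → MoralRealism) : MoralRealism :=
  -- (3) MoralFacts follows by modus tollens on (1) and (2).
  h4 (Classical.byContradiction (fun hm => h1 hm h2))
```

So the substantive disagreement has to be over premise (1) or premise (2), not over the inference itself.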
These considerations seem to clearly indicate ‘realism about epistemic facts’ in the metaethical sense:
The idea that there is an “ideal” decision theory.
The idea that, given certain evidence for a proposition, there’s an “objective” level of subjective credence which you should assign to it, even under computational constraints.
The idea that having contradictory preferences or beliefs is really bad, even when there’s no clear way that they’ll lead to bad consequences (and you’re very good at avoiding Dutch books and money pumps and so on).
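To make the Dutch book point concrete, here is a minimal sketch (the numbers are my own toy example, not from the original discussion): an agent whose credences in a proposition and its negation sum to more than 1 will, by its own lights, accept a pair of bets that together guarantee a loss.

```python
# Toy Dutch book: incoherent credences P(A) = 0.6 and P(not-A) = 0.6
# sum to more than 1, so a bookie can guarantee a profit.

def bet_price(credence, stake=1.0):
    """The price the agent regards as fair for a bet paying `stake` if it wins."""
    return credence * stake

credence_A = 0.6
credence_not_A = 0.6  # incoherent: 0.6 + 0.6 > 1

# The bookie sells the agent both bets at the agent's own fair prices.
cost = bet_price(credence_A) + bet_price(credence_not_A)

# Exactly one of A, not-A obtains, so exactly one bet pays out the stake.
payout = 1.0

guaranteed_loss = cost - payout
print(round(guaranteed_loss, 2))  # 0.2 lost whether or not A is true
```

The point in the bullet above is that incoherence still seems bad even for an agent who would never actually accept such a pair of bets.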
This seems to directly concede or imply the ‘normative web’ argument, or to imply some form of normative (if not exactly moral) realism:
The idea that morality is quite like mathematics, in that there are certain types of moral reasoning that are just correct.
The idea that defining coherent extrapolated volition in terms of an idealised process of reflection roughly makes sense, and that it converges in a way which doesn’t depend very much on morally arbitrary factors.
If ‘realism about rationality’ is really just normative realism in general, or realism about epistemic facts, then there is already an extensive literature on whether it is right or not. The links above are just the obvious starting points that came to my mind.