Thanks for the comment!
W.r.t. moral reflection: Probably many agents put little intrinsic value on whether society engages in a lot of moral reflection. However, I would guess that, taken as a whole, the set of agents with a decision mechanism similar to mine cares about this significantly and positively. (Empirically, disvaluing moral reflection seems to be rare.) Hence, if the basic argument of the paper goes through, I should give some weight to it.
W.r.t. moral pluralism: Probably even fewer agents care about this intrinsically. I certainly don’t care about it intrinsically. The idea is rather that moral pluralism may avoid conflict or create gains from “trade”. For example, let’s say the aggregated values of agents with my decision algorithm contain two values, A and B. (As I argue in the paper, I should maximize these aggregated values in order to maximize my own values throughout the multiverse.) Now, I might be in some particular environment with agents who themselves care about A and/or B. Let’s say I can choose between two distributions of caring about A and B: either every agent cares about both A and B, or some care only about A and the others only about B. The former will tend to be better if I (or rather the set of agents with my decision algorithm) care about both A and B, because it avoids conflict, makes it easier to exploit comparative advantages, etc.
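To make the comparative-advantage point a bit more concrete, here is a minimal toy calculation (my own illustration, not a model from the paper; the agents, productivity numbers, and equal weighting of A and B are all made up for the example):

```python
# Toy illustration (made-up numbers): two agents, two values A and B.
# Each agent has one unit of effort. The productivities are chosen so that
# each agent happens to be much better at producing the value the "split"
# scenario would have them ignore.
productivity = {
    "agent1": {"A": 1, "B": 3},  # agent 1 is much better at producing B
    "agent2": {"A": 3, "B": 1},  # agent 2 is much better at producing A
}

def total_value(assignment):
    """Total of A and B produced, weighting both equally (the aggregated values)."""
    return sum(productivity[agent][value] for agent, value in assignment.items())

# Scenario 1: values are split -- agent 1 cares only about A, agent 2 only
# about B, so each works on "their" value even though they are bad at it.
split = {"agent1": "A", "agent2": "B"}

# Scenario 2: both agents care about both values, so they can specialize
# according to comparative advantage.
pluralist = {"agent1": "B", "agent2": "A"}

print("split values:    ", total_value(split))      # 1 + 1 = 2
print("pluralist values:", total_value(pluralist))  # 3 + 3 = 6
```

Under these (stipulated) numbers, the pluralist distribution produces three times as much of the aggregated values, simply because agents who care about both values are free to specialize.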
Note that I think neither promoting moral reflection nor promoting moral pluralism is a strong candidate for a top intervention. Multiverse-wide superrationality just increases their value relative to what, say, a utilitarian would think these interventions are worth. I think it’s a lot more important to ensure that AI uses the right decision theory. (Of course, this is important anyway, but I think multiverse-wide superrationality drastically increases its value.)