This strikes me as the wrong approach. I think you probably need to go down to the level of meta-preferences and apply VNM-type reasoning to this structure rather than working with the higher-level construct of utility functions. What do I mean by that? Well, let M denote the model space and O denote the outcome space. What I’m talking about is a preference relation > on the space MxO. If we simply assume such a > is given (satisfying the constraint that (m1, o1) > (m1, o2) iff o1 >_m1 o2, where >_m1 is model m1’s preference relation), then the VNM axioms applied to (>, MxO) and the distribution on M are probably sufficient to give a utility function, and it should have some interesting relationship with the utility functions of each competing ethical model. (I don’t actually know this, it just seems intuitively plausible. Feel free to do the actual math and prove me wrong.)
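For concreteness, here is the shape I’d guess that relationship takes (a conjecture, not a worked proof; the k_m and c_m below just label the affine freedom in each U_m): if a utility function U represents > on MxO, then its restriction to {m}xO has to represent >_m, which would force something like

\[
U(m, o) = k_m\, U_m(o) + c_m \quad (k_m > 0),
\qquad
U(o) := \mathbb{E}_{m \sim P}\bigl[U(m, o)\bigr] = \sum_{m \in M} P(m)\,\bigl(k_m\, U_m(o) + c_m\bigr).
\]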
On the other hand, we’d like the set of >_m’s (along with P(m)) to determine >, but I’m not optimistic. It seems like this could only happen if the utility functions associated with each >_m, U_m(o), were fully unique rather than unique only up to positive affine transformation. Basically, we need our meta-preferences over the relative badness of doing the wrong thing under competing ethical theories to play some role in determining >, and that information simply isn’t present in the >_m’s.
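To make that last point concrete, here is a toy sketch (my own made-up numbers, not anything from the post): two theories with identical >_m’s and the same P(m), where two equally valid affine representations of theory A’s utility function produce opposite aggregate rankings, so the missing information has to come from the meta-preferences.

```python
# Toy illustration (hypothetical numbers): P(m) plus the >_m's underdetermine >,
# because each U_m is only fixed up to a positive affine transformation, and
# different choices of that transformation can flip the aggregate ranking.

P = {"theory_A": 0.5, "theory_B": 0.5}  # credence in each ethical theory

# Two representations of the *same* ordinal preferences for theory A
# (o1 >_A o2 in both), differing only by an affine rescaling.
U_A_modest = {"o1": 1.0, "o2": 0.0}
U_A_scaled = {"o1": 100.0, "o2": 0.0}   # same >_A, but 100x the "stakes"

# Theory B prefers o2 to o1 under either representation of A.
U_B = {"o1": 0.0, "o2": 2.0}

def aggregate(U_A, U_B, P):
    """Expected utility over theories: sum_m P(m) * U_m(o)."""
    return {o: P["theory_A"] * U_A[o] + P["theory_B"] * U_B[o] for o in U_A}

print(aggregate(U_A_modest, U_B, P))  # {'o1': 0.5, 'o2': 1.0}  -> o2 wins
print(aggregate(U_A_scaled, U_B, P))  # {'o1': 50.0, 'o2': 1.0} -> o1 wins
```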
(Even though my comment is a criticism, I still liked the post—it was good enough to get me thinking at least)
Basically, we need our meta-preferences over the relative badness of doing the wrong thing under competing ethical theories to play some role in determining >, and that information simply isn’t present in the >_m’s.
That was like, half the point of my post. I obviously suck at explaining myself.
And yes, I agree now that starting with utility functions is the wrong way. We should actually just build something from the ground up aimed squarely at indirect normativity.
(Even though my comment is a criticism, I still liked the post—it was good enough to get me thinking at least)
Even though my post is an argument, the point really is to get us all thinking about this and see where we can go with it.
That was like, half the point of my post. I obviously suck at explaining myself.
I think the combination of me skimming and thinking in terms of the underlying preference relation instead of intertheoretic weights caused me to miss it, but yeah, it’s clear you already said that.
Thanks for throwing your brain into the pile.
No problem :) Here are some more thoughts:
It seems correct to allow the probability distribution over ethical theories to depend on the outcome—there are facts about the world which would change my probability distribution over ethical theories, e.g. facts about the brain or human psychology. Not all meta-ethical theories would allow this, but some do.
I’m nearly certain that if you use the preference relation over sets framework, you’ll recover a version of each ethical theory’s utility function, and this even happens if you allow the true ethical theory to be correlated with the outcome of the lottery by using a conditional distribution P(m|o) instead of P(m). Implicitly, this will define your k_m’s and c_m’s, given a version of each m’s utility function, U_m(.). (There’s a small sketch of this after these points.)
It seems straightforward to add uncertainty over meta-preferences into the mix, though now we’ll need meta-meta-preferences over M2xM1xO. In general, you can always add uncertainty over meta^n-preferences, and the standard VNM axioms should get you what you want, but in the limit the space becomes infinite-dimensional and thus infinite, so the usual VNM proof doesn’t apply to the infinite tower of uncertainty.
It seems incorrect to have M be a finite set in the first place, since competing ethical theories will say something like “1 human life = X dog lives”, and X could be any real number. This means, once again, we blow up the VNM proof. On the other hand, I’m not sure this is any different from complaining that O is finite, in which case, if you’re going to simplify and assume O is finite, you may as well do the same for M.
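Putting the first two of those together, here is the kind of construction I have in mind (the numbers and the conditional credences are illustrative; k_m and c_m are meant in the sense of the post): once meta-preferences fix the intertheoretic weights k_m and offsets c_m, expected utility can be taken with P(m|o), so facts about the world are allowed to shift credence between theories.

```python
# Hedged sketch with made-up values: aggregate utility of an outcome when the
# credence over ethical theories is allowed to depend on the outcome itself,
# i.e. U(o) = sum_m P(m|o) * (k_m * U_m(o) + c_m).

U = {  # each theory's utility function U_m(o), in its own units
    "A": {"o1": 1.0, "o2": 0.0},
    "B": {"o1": 0.0, "o2": 1.0},
}
k = {"A": 1.0, "B": 3.0}   # intertheoretic weights, supplied by meta-preferences
c = {"A": 0.0, "B": -1.0}  # offsets, likewise not derivable from the >_m's

P_m_given_o = {  # credence in each theory, conditional on the outcome
    "o1": {"A": 0.7, "B": 0.3},
    "o2": {"A": 0.4, "B": 0.6},
}

def U_outcome(o):
    """Aggregate utility: sum over theories of P(m|o) * (k_m * U_m(o) + c_m)."""
    return sum(P_m_given_o[o][m] * (k[m] * U[m][o] + c[m]) for m in U)

for o in ("o1", "o2"):
    print(o, U_outcome(o))   # o1 -> 0.4, o2 -> 1.2
```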