That was like, half the point of my post. I obviously suck at explaining myself.
I think a combination of skimming and thinking in terms of the underlying preference relation, rather than intertheoretic weights, caused me to miss it, but yeah, it's clear you already said that.
Thanks for throwing your brain into the pile.
No problem :) Here are some more thoughts:
It seems correct to allow the probability distribution over ethical theories to depend on the outcome—there are facts about the world which would change my probability distribution over ethical theories, e.g. facts about the brain or human psychology. Not all meta-ethical theories would allow this, but some do.
I’m nearly certain that if you use the preference-relation-over-sets framework, you’ll recover a version of each ethical theory’s utility function, and this holds even if you allow the true ethical theory to be correlated with the outcome of the lottery by using a conditional distribution P(m|o) instead of P(m). Implicitly, this will define your k_m’s and c_m’s, given a version of each m’s utility function U_m(.).
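To make that concrete, here's a minimal sketch of what I have in mind, assuming the aggregate takes the affine form sum over o and m of P(o) P(m|o) (k_m U_m(o) + c_m); the particular theories, outcomes, weights, and numbers below are made up purely for illustration, not anything you committed to:

```python
# Toy sketch: aggregating ethical theories when the distribution over
# theories is allowed to depend on the outcome, via P(m|o) instead of P(m).
# Theories, outcomes, weights k_m, offsets c_m, and utilities U_m are
# all illustrative placeholders.

# Each ethical theory m has a utility function U_m over outcomes,
# plus an intertheoretic weight k_m and offset c_m fixing its scale.
theories = {
    "human_centric":  {"U": {"save_1_human": 1.0, "save_10_dogs": 0.1}, "k": 1.0, "c": 0.0},
    "animal_welfare": {"U": {"save_1_human": 1.0, "save_10_dogs": 2.0}, "k": 0.5, "c": 0.0},
}

# Lottery over outcomes, P(o).
P_o = {"save_1_human": 0.5, "save_10_dogs": 0.5}

# Conditional credence in each theory given the outcome, P(m|o);
# e.g. a world with certain facts about psychology shifts credence.
P_m_given_o = {
    "save_1_human": {"human_centric": 0.7, "animal_welfare": 0.3},
    "save_10_dogs": {"human_centric": 0.4, "animal_welfare": 0.6},
}

def expected_utility():
    """Sum over o and m of P(o) * P(m|o) * (k_m * U_m(o) + c_m)."""
    total = 0.0
    for o, p_o in P_o.items():
        for m, p_m in P_m_given_o[o].items():
            t = theories[m]
            total += p_o * p_m * (t["k"] * t["U"][o] + t["c"])
    return total

print(expected_utility())
```

The point of writing it this way is that swapping P(m|o) in for an unconditional P(m) changes nothing structural: the k_m's and c_m's still do all the intertheoretic work.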
It seems straightforward to add uncertainty over meta-preferences into the mix, though now we’ll need meta-meta-preferences over M_2 x M_1 x O. In general, you can always add uncertainty over meta^n-preferences, and the standard VNM axioms should get you what you want at each finite level, but in the limit the space becomes an infinite product and thus infinite, so the usual VNM proof doesn’t apply to the full tower of uncertainty.
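Spelling the tower out (my notation, not yours): level-1 preferences live on $M_1 \times O$, level-2 preferences on $M_2 \times M_1 \times O$, and in general level-$n$ preferences on

$$M_n \times M_{n-1} \times \cdots \times M_1 \times O,$$

so the limiting object is something like $\left(\prod_{n \ge 1} M_n\right) \times O$, which is infinite even when $O$ and every $M_n$ are finite (as long as each $M_n$ has at least two elements).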
It seems incorrect to have M be a finite set in the first place since competing ethical theories will say something like “1 human life = X dog lives”, and X could be any real number. This means, once again, we blow up the VNM proof. On the other hand, I’m not sure this is any different than complaining that O is finite, in which case if you’re going to simplify and assume O is finite, you may as well do the same for M.
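For instance (this particular functional form is just one way to cash out "1 human life = X dog lives"), you could take the one-parameter family

$$U_X(o) = \#\{\text{human lives saved in } o\} + \tfrac{1}{X}\,\#\{\text{dog lives saved in } o\}, \qquad X \in (0, \infty),$$

so M already contains a continuum of distinct theories before anything more exotic gets added.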