Maybe I just need to read up on the theory a little more, because I’m still quite confused. Is my CEV the set of things I would want given all the correct moral arguments and all the information? As opposed (probably) to the set of things I want now?
I can see how the set of things I want now would change over time, but I’m having a hard time seeing why my CEV could ever change. Compare the CEPT, the Coherent Extrapolated Physical Theory, which is the theory of physics we would have if we had all the information and all the correct physics arguments. I can see how our present physical theories would change, but CEPT seems like it should be fixed.
But I suppose it’s also true that CEPT supervenes on a set of basic, contingent physical facts. So does CEV also supervene on a set of basic, contingent wants? If so, I suppose my CEV could vary depending on which basic wants I have. Is that right?
If so, does that mean I have to agree to disagree with an ancient Greek person on moral matters? Or that, on some level, I can no longer reasonably ask whether my wanting something is good or bad?
Is my CEV the set of things I would want given all the correct moral arguments and all the information? As opposed (probably) to the set of things I want now?
Yes. This needn’t be the same for all agents: a rock would still not want anything no matter how many correct moral arguments and how much information you gave it, so CEV is indifferent to everything. Now you and Homer are much more similar than you and a rock, so your CEVs will be much more similar, but it’s not obvious to me that they are necessarily exactly identical just because you’re individuals of the same species.
Technically this is just EV (extrapolated volition); then CEV is just some way of compromising between your EV and everyone else’s (possibly including Homer, but presumably not including rocks).
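To make that structure concrete, here is a toy sketch (my own illustration, assuming nothing beyond what’s said above): an agent’s EV is some idealized extrapolation of its basic wants, and CEV is some way of compromising across the EVs of whichever agents we include. The extrapolate function and the simple averaging are placeholders, not claims about how either step actually works.

```python
from typing import Callable, Dict, List

# Hypothetical representation: a want mapped to how strongly it is held.
Wants = Dict[str, float]

def extrapolated_volition(basic_wants: Wants,
                          extrapolate: Callable[[Wants], Wants]) -> Wants:
    """EV supervenes on the agent's basic wants: run them through the
    (idealized) extrapolation step. A rock has no basic wants, so its
    EV comes out empty -- indifferent to everything."""
    return extrapolate(basic_wants)

def coherent_extrapolated_volition(evs: List[Wants]) -> Wants:
    """CEV as *some* compromise between individual EVs; averaging here
    is just a placeholder, not a claim about the right way to compromise."""
    combined: Wants = {}
    for ev in evs:
        for want, strength in ev.items():
            combined[want] = combined.get(want, 0.0) + strength / len(evs)
    return combined

# e.g. a rock contributes nothing:
# extrapolated_volition({}, lambda w: w) == {}
```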
Thanks, I think I get it. Do you have any thoughts on my last two questions:
If so, does that mean I have to agree to disagree with an ancient Greek person on moral matters? Or that, on some level, I can no longer reasonably ask whether my wanting something is good or bad?
I’d say that would just mean that the two of you mean different things by the word “good” (see also TimS’s comment), but for some reason I feel that would just amount to dodging the question, so I’m going to say “I don’t know” instead.
I think you’ve got the right idea that CEV aims to find that fixed, ultimately-best-possible set of values.
If I understand correctly, CEV is mostly intended as a shortcut to arrive as close as possible to the same ethics we would have if all humans sat and thought and discussed and researched ethics for [insert arbitrarily large amount of time], until no further changes would occur in those ethics and the resulting system would remain logically consistent and always be the best choice in all circumstances and all futures, barring direct alteration of elementary human values.
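A minimal sketch of that “deliberate until no more changes occur” picture, where refine_step is a hypothetical stand-in for all of the sitting, thinking, discussing, and researching described above:

```python
from typing import Callable, FrozenSet

# A (crude) stand-in for "a system of ethics".
Values = FrozenSet[str]

def deliberate_to_fixed_point(initial: Values,
                              refine_step: Callable[[Values], Values],
                              max_rounds: int = 10_000) -> Values:
    """Iterate until a round of deliberation produces no further change,
    i.e. until the values are a fixed point of refine_step."""
    current = initial
    for _ in range(max_rounds):
        updated = refine_step(current)
        if updated == current:  # no more changes: stop here
            break
        current = updated
    return current
```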
There may be some conflation between CEV and particular implementations of it that were discussed previously, or with other CEV-like theories (e.g. Coherent Blended Volition). I may also be the one doing the conflating, though.