CEV, until designed and defined properly, is just a black box that everyone agrees is ‘good’, but has little else in the way of defining features.
If the question is “what should we want?” then CEV is much better than a black box, because it fleshes out some of the intuitions behind the magical category “want.”
If the question is “how should we measure what we want?” then CEV is just a black box, because it doesn’t solve or suggest a method for solving any of our measurement problems. We know we want coherent, extrapolated, volitional thingies, but we have no idea, for example, how to rigorously define “volitional.” We likewise have no idea how far into the future we should be extrapolating things, nor how many facets of a personality or society can reasonably be expected to converge/cohere.
See this paper for a relatively decent account of what CEV is getting at.
It’s not so much a box as a method of filling the box. We just haven’t filled the box yet.