If the question is “what should we want?” then CEV is much better than a black box, because it fleshes out some of the intuitions behind the magical category “want.”
If the question is “how should we measure what we want?” then CEV is just a black box, because it doesn’t solve or suggest a method for solving any of our measurement problems. We know we want coherent, extrapolated, volitional thingies, but we have no idea, for example, how to rigorously define “volitional.” We likewise have no idea how far into the future we should be extrapolating things, nor how many facets of a personality or society can reasonably be expected to converge/cohere.
See this paper for a relatively decent account of what CEV is getting at.