Suppose that by “values” in that sentence I meant something similar to the firing rates of certain populations of neurons, and by “normative requirements” I meant what I’d mean if I had solved metaethics.
Then that would refer to the “extrapolation” step (falling out of decision theory, as opposed to something CEV-esque), and assume that the results of an “extraction” step are already available, right? Does (did) Drescher hold this view?
As I meant it, it needn’t assume that the results of an extraction step are already available, and I don’t recall Drescher discussing it in that much detail. He just treats humans as utility material, however that might work.
OK, thanks! That would agree with my plan then.
(In general, it’s not clear in what ways descriptive “utility” can be more useful than the original humans, or what it even means as “utility”, unless it’s already normative preference, in which case it can’t be “extrapolated” any further. “Extrapolation” makes more sense as a way of constructing normative preference from something more like an algorithm that specifies behavior, which seems to be CEV’s purpose; extrapolation in that sense could then be seen as a particular method of extraction, with no further extrapolation step needed.)
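To make the extraction/extrapolation distinction concrete, here is a minimal type-level sketch in Python, assuming the reading above: “extraction” reads a descriptive, behavior-specifying algorithm off a person, and “extrapolation” constructs normative preference from that algorithm. Every name here (Human, BehavioralAlgorithm, extract, extrapolate, and the stub bodies) is a hypothetical illustration, not anything from Drescher or the CEV writeup.

```python
# Toy sketch of the two-step pipeline discussed above. All names are
# hypothetical; the function bodies are stubs that only fix the types.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Human:
    """Stand-in for the physical system whose values we care about."""
    neural_state: dict[str, float]  # e.g. population firing rates

@dataclass
class BehavioralAlgorithm:
    """Descriptive model: given a situation, what would the person do?"""
    act: Callable[[str], str]

@dataclass
class NormativePreference:
    """What the person *should* want, on the solved-metaethics reading."""
    rank: Callable[[str, str], bool]  # rank(a, b): is outcome a better than b?

def extract(person: Human) -> BehavioralAlgorithm:
    """Extraction: read a behavior-specifying algorithm off the person.
    (Stub: always returns the same action, ignoring neural_state.)"""
    return BehavioralAlgorithm(act=lambda situation: "default_action")

def extrapolate(algorithm: BehavioralAlgorithm) -> NormativePreference:
    """Extrapolation: construct normative preference from the descriptive
    algorithm. This is where the CEV-like work would happen; the stub just
    prefers whatever the algorithm would choose."""
    def rank(a: str, b: str) -> bool:
        return algorithm.act("choose") == a
    return NormativePreference(rank=rank)

# The composed pipeline under discussion. Note the point of the parenthetical:
# if extract() already returned a NormativePreference, extrapolate() would
# have nothing left to do.
preference = extrapolate(extract(Human(neural_state={"v1": 0.3})))
```

The sketch only encodes the claim at the type level: a descriptive “utility” that is already a NormativePreference leaves no domain for extrapolation, whereas a BehavioralAlgorithm is the kind of object an extrapolation step can act on.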