In these terms, the plan I see as the most promising is that the correct way of extracting preferences from humans that doesn't require further "extrapolation" falls out of decision theory.

(I'm not sure what you meant by Drescher's option (what's "response to preferences"?): does the book suggest that it's unnecessary to use humans as utility definition material? In any case, this doesn't sound like something he would currently believe.)

As I recall, Drescher still used humans as utility definition material but thought that there might be a single correct response to these utilities — one which falls out of decision theory and game theory.

What's "response to utilities" (in the grandparent you used "response to preferences", which I also didn't understand)? A response of what, for what purpose? (Perhaps the right question is what you mean by "utilities" here: extracted/descriptive, or extrapolated/normative?)

Yeah, I don't know. It's kind of like asking what "should" or "ought" means. I don't know.

No, it's not a clarifying question about subtleties of that construction: I have no inkling of what you mean (seriously, no irony), and hence fail to parse what you wrote (regarding "response to utilities" and "response to preferences") at the most basic level. This is what I see in the grandparent:

Drescher still used humans as utility definition material but thought that there might be a single correct borogove — one which falls out of decision theory and game theory.

For our purposes, how about:

Drescher still used humans as utility definition material but thought that there might be a single, morally correct way to derive normative requirements from values — one which falls out of decision theory and game theory.

Still no luck. What's the distinction between "normative requirements" and "values"? In what way are these two ideas (as intended) not the same?

Suppose that by "values" in that sentence I meant something similar to the firing rates of certain populations of neurons, and by "normative requirements" I meant what I'd mean if I had solved metaethics.

Then that would refer to the "extrapolation" step (falling out of decision theory, as opposed to something CEV-esque), and it would assume that the results of an "extraction" step are already available, right? Does (did) Drescher hold this view?

From what I meant, it needn't assume that the results of an extraction step are already available, and I don't recall Drescher discussing it in that much detail. He just treats humans as utility material, however that might work.

OK, thanks! That would agree with my plan then.

(In general, it's not clear in what ways descriptive "utility" can be more useful than the original humans, or what it means as "utility", unless it's already normative preference, in which case it can't be "extrapolated" any further. "Extrapolation" makes more sense as a way of constructing normative preference from something more like an algorithm that specifies behavior, which seems to be CEV's purpose, and could then be seen as a particular method of extraction-without-need-for-extrapolation.)