I thought the whole point of CEV was to make the AI infer what the process of extrapolation would return if anyone were fool enough to actually do it. A merely human-level intelligence could figure out that humanity, extrapolated as we wish, would tell the AI not to kill us all and use our atoms for computing power (in order to, say, carry out the extrapolation).
Good point. Actually, reconsidering the whole setup, I think my argument works the other way and ends up showing that Manfred is being pessimistic.
Manfred’s claim was something to the effect that you can get a good approximation to CEV by coherently extrapolating the volition of a random sample of people. Why? Because under some unstated assumptions (data on individuals reveals information about their CEV with some sort of i.i.d. error model, CEV has a representation with bounded complexity, etc.), it’s reasonable to expect that the error in the inference falls off slowly with the number of individuals observed, so each additional individual buys only a small improvement. Hence, relative to extrapolating everyone, you don’t lose very much by looking at just a few people.
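To make the diminishing-returns step concrete, here’s a toy sketch of the kind of i.i.d. error model I have in mind. Everything in it (the latent vector, the Gaussian noise, the dimension) is a hypothetical illustration, not Manfred’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (purely illustrative): each observed person contributes one noisy
# i.i.d. reading of a fixed latent "CEV" vector, which we estimate by the mean.
dim = 10                        # hypothetical complexity of the latent target
true_cev = rng.normal(size=dim)
noise_sd = 1.0

def estimation_error(n_people, trials=200):
    """Average L2 error of the sample-mean estimate across Monte Carlo trials."""
    errs = []
    for _ in range(trials):
        data = true_cev + noise_sd * rng.normal(size=(n_people, dim))
        errs.append(np.linalg.norm(data.mean(axis=0) - true_cev))
    return float(np.mean(errs))

for n in [1, 10, 100, 1000, 10000]:
    print(f"n = {n:>5}: error ≈ {estimation_error(n):.3f}")
# The error shrinks like 1/sqrt(n): every 100x increase in people buys only
# a 10x reduction, so most of the achievable accuracy is already there at small n.
```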
I mentioned that one of the unstated assumptions, the bounded complexity of CEV, might not be justified, resulting in a slower fall-off in the inferential error. But this actually strengthens the case for working with a small sample: there’s even less expected gain in the quality of the inference from using a larger one (or no expected gain at all, if the inference is inconsistent).
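For a back-of-the-envelope version of why unbounded complexity slows the fall-off, compare the standard parametric rate to a nonparametric-style rate. These are textbook minimax scalings, nothing CEV-specific, and the exponent 1/4 is just an example:

```latex
% Fixed complexity d (parametric rate):
\mathbb{E}\,\lVert \hat{\theta}_n - \theta \rVert \;\asymp\; \sigma\sqrt{d/n},
\qquad n: 10^2 \to 10^6 \ \text{cuts the error by } 100\times .

% Unbounded complexity (nonparametric-style rate, exponent below 1/2):
\mathbb{E}\,\lVert \hat{f}_n - f \rVert \;\asymp\; n^{-\beta}, \quad \beta < \tfrac{1}{2},
\qquad n: 10^2 \to 10^6 \ \text{cuts the error by only } 10\times \ \text{at } \beta = \tfrac{1}{4}.
```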
I’m not sure there’s any magic way for the AI to jump straight to what the extrapolation “would return” without actually doing the work of looking at data and performing inference. In tasks like that, its performance is governed pretty tightly by established statistical theory.