According to the 2004 paper, Eliezer thinks (or thought, anyway) “what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together...” would do the trick. Presumably that’s the part to be hard-coded in. Or you could extrapolate (using the above) what people would say “wisdom” amounts to and use that instead.
Actually, I can’t imagine someone who knew and understood both the methods of rationality (having been directed to LessWrong) and all the teachings of the church (having been directed to church) would then direct a person to church. Maybe the FAI can let a person take both directions to become wiser.
ETA: Of course, in FAI ‘maybe’ isn’t good enough...
I mentioned this problem already, and I thought (as of 07/2010) about ways to ensure that an FAI would prefer my/our/rational way of extrapolating.
Now I think it would be better if the FAI selected a coherent subset of the volitions shared by all reflectively consistent extrapolations. I suspect the result would amount to something like: protect humanity from existential risk, but don’t touch it beyond that.
That’s true.