Weighing in late here, I’ll briefly note that my current stance on the difficulty of philosophical issues is (in colloquial terms) “for the love of all that is good, please don’t attempt to implement CEV with your first transhuman intelligence”. My strategy at this point is very much “build the minimum AI system that is capable of stabilizing the overall strategic situation, and then buy a whole lot of time, and then use that time to figure out what to do with the future.” I might be more optimistic than you about how easy it will turn out to be to find a reasonable method for extrapolating human volition, but I suspect that that’s a moot point either way, because regardless, thou shalt not attempt to implement CEV with humanity’s very first transhuman intelligence.
Also, +1 to the overall point of “also pursue other approaches”.