I think it would be significantly easier to make FAI than LukeFriendly AI
Massively backwards! Creating an FAI (presumably ‘friendly to humanity’) requires an AI that can somehow harvest and aggregate preferences over humans in general, but a LukeFriendly AI just needs to scan one brain.
Scanning is unlikely to be the bottleneck for a GAI, and it seems most of the difficulty with CEV is from the Extrapolation part, not the Coherence.
It doesn’t matter how easy the parts may be: scanning, extrapolating, and cohering all of humanity is harder than scanning and extrapolating just Luke.
Not if Luke’s values contain pointers to all those other humans.