Understanding what a self (and a volition) is matters: CEV relies on extrapolating the volition of selves, and therefore on understanding selves and volitions. But there's no reason to think there's a unique reduction of "self"; indeed, there's almost certainly not (Diego gives various examples). There are also various other things constraining our intuitive definition, such as the requirement that selves be utility maximizers.
One way out for CEV is that the Turing Test is a reliable means of identifying a subset of selves; once we can identify an AGI as a self via the Turing Test, the AGI can in turn use the Turing Test to identify (some) other selves.
Sure, why not?