And it wouldn’t defeat the OT, because you’d still have to prove that an agent couldn’t have a utility function over, e.g., causal continuity — and you can’t: any well-defined property of world-histories, causal continuity included, can serve as the argument of a utility function.
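To make that point concrete, here is a minimal sketch in Python. Everything in it is hypothetical illustration: "causal continuity" is reduced to a toy predicate over successive world-states, and the names are mine, not anyone's actual model. The only thing it demonstrates is structural: such a utility function is perfectly well formed.

```python
# Toy sketch: a utility function whose sole argument is causal continuity.
# "Continuity" here is a deliberately crude predicate over world-states;
# all names are hypothetical illustrations for the structural point only.

from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    agent_id: str         # which physical substrate instantiates the agent
    caused_by_prev: bool  # did the previous state causally produce this one?

def causally_continuous(history: list[WorldState]) -> bool:
    """True iff every state is the causal successor of the one before it."""
    return all(s.caused_by_prev for s in history[1:])

def utility(history: list[WorldState]) -> float:
    """A well-formed utility function that cares only about causal
    continuity: 1 for an unbroken causal chain, 0 otherwise."""
    return 1.0 if causally_continuous(history) else 0.0

# An unbroken chain (ordinary survival) scores 1.0 ...
unbroken = [WorldState("a", True), WorldState("a", True), WorldState("a", True)]
# ... while a destructive-copy history (chain severed once) scores 0.0.
severed = [WorldState("a", True), WorldState("b", False), WorldState("b", True)]

print(utility(unbroken))  # 1.0
print(utility(severed))   # 0.0
```

Nothing about the machinery of expected-utility maximization rules this preference out, which is all the OT needs.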
A machine could perhaps be programmed with a utility function over causal continuity, but absent a personal identity, a privileged stance toward one’s own values wouldn’t be rational from an objective “God’s eye view”, as David Pearce puts it. That would call at least for something like coherent extrapolated volition, including at least agents with contextually equivalent reasoning capacity. (I use “at least” twice to accommodate your ethical views.) It would be more sensible to include not only humans but all known sentient perspectives, because the ethical value of subjects arguably depends more on sentience than on reasoning capacity.