To the end of what? The sequence? Or of humanity as we know it?
The end of SI’s mission, in success, failure, or change of paradigm.
Is there one “true” ontology/morality?
There’s one reality, so all “true ontologies” ought to be specializations of the same truth. One true morality is a shakier proposition, given that morality is the judgment of an agent and there’s more than one agent. It’s not even clear that just picking out the moral component of the human decision procedure is enough for SI’s purposes. What FAI research is really after is a “decision procedure that a sober-minded and fully-informed human being would prefer to be employed by an AGI”.