If everything comes together, then it will now be a straight line from here to the end.
To the end of what? The sequence? Or humanity as we know it?
You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI were to be an expected utility maximizer, it would need to model the world correctly (“true ontology”), value the world correctly (“true morality”), and outsmart its opponents (“win the intelligence race”).
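(For concreteness, that framing boils down to something like the sketch below. This is just my own illustration of an expected-utility maximizer; the function names and toy numbers are hypothetical, not anything SI has specified.)

```python
# A minimal sketch of an expected-utility maximizer, to make the three
# requirements above concrete. Everything here is illustrative; the names
# are hypothetical and not from SI.

def expected_utility_choice(actions, outcomes, world_model, utility):
    """Return the action with the highest expected utility.

    world_model(outcome, action): P(outcome | action)  -- the "true ontology"
    utility(outcome): how much that outcome is valued  -- the "true morality"
    Searching over `actions` better than any rival stands in for
    "winning the intelligence race".
    """
    def expected_utility(action):
        return sum(world_model(outcome, action) * utility(outcome)
                   for outcome in outcomes)
    return max(actions, key=expected_utility)

# Toy usage: two actions over a two-outcome space.
outcomes = ["good", "bad"]
world_model = lambda o, a: {"safe":  {"good": 0.9, "bad": 0.1},
                            "risky": {"good": 0.5, "bad": 0.5}}[a][o]
utility = lambda o: {"good": 1.0, "bad": -10.0}[o]
print(expected_utility_choice(["safe", "risky"], outcomes, world_model, utility))  # -> "safe"
```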
Is there one “true” ontology/morality? Most likely there are many, leading in different directions and not necessarily easy to rank by our current morality.
Personally, I am not overly worried that SI will produce anything resembling a working AGI, let alone a self-improving one, on any reasonable time frame, so I see little cause for concern at this point, and definitely no rush to fix any deep conceptual issues you think SI might have. The best outcome one can hope for is that SI’s research produces some interesting results and earns a certain amount of respect from the CS/AI crowd in general. Formulating and proving a theorem or two could be one of those results, or maybe an insight into machine learning.
I am not at all concerned that they would lock a “wrong” epistemology (MWI/Tegmark, whatever) into a self-improving AGI, partly because I think that algorithms built on unnecessarily complicated models will be simplified, either externally by the programmers or internally by the machine, into something that better reflects its interaction with the external world.
A more likely outcome is that they hit an unexpected road block fairly quickly, one that makes them go back to the drawing board and reevaluate the basics.
To the end of what? The sequence? Or humanity as we know it?
The end of SI’s mission, in success, failure, or change of paradigm.
Is there one “true” ontology/morality?
There’s one reality, so all “true ontologies” ought to be specializations of the same truth. One true morality is a shakier proposition, given that morality is the judgment of an agent and there’s more than one agent. It’s not even clear that just picking out the moral component of the human decision procedure is enough for SI’s purposes. What FAI research is really after is a “decision procedure that a sober-minded and fully-informed human being would prefer to be employed by an AGI”.