Presumably the reason there is no mention of superintelligence or recursively self-modifying anything is that those concepts don’t exist with today’s technology.
Superintelligence doesn’t, but recursive self-modification has been a feature of AI research since the Seventies. As MIRI predicted, value stability proved to be a problem.
(Eurisko’s authors solved it—allegedly, there isn’t much open data—by walling off the agent’s utility function from modification. This would be much harder to do to a superintelligent agent.)
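The walling-off idea can be illustrated with a toy sketch. This is not Eurisko’s actual architecture (which, as noted, isn’t well documented); it just shows the general pattern: the agent may rewrite any of its own heuristics, including the one governing how it rewrites itself, but the utility function lives outside the modifiable set, so no chain of self-edits can alter what it optimizes for. All names here are hypothetical.

```python
import random

class SelfModifyingAgent:
    """Toy self-modifying heuristic system (hypothetical, Eurisko-flavored).

    Heuristic weights are modifiable state the agent perturbs, including
    "mutate_rate", which governs self-modification itself. The utility
    function is a plain method, never stored among the heuristics, so
    self_modify() cannot reach it: it is "walled off" from modification.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # Modifiable state: every entry here is fair game for rewriting.
        self.heuristics = {"explore": 1.0, "exploit": 1.0, "mutate_rate": 0.1}

    def utility(self, state):
        # Walled off: defined in code, not in the modifiable dictionary.
        return -abs(state - 42)

    def self_modify(self):
        # Recursive self-modification: perturb any heuristic weight,
        # including the one that controls the size of perturbations.
        name = self.rng.choice(list(self.heuristics))
        delta = self.rng.uniform(-1, 1) * self.heuristics["mutate_rate"]
        self.heuristics[name] += delta

agent = SelfModifyingAgent()
before = agent.utility(10)
for _ in range(100):
    agent.self_modify()
# The heuristics have drifted, but the utility function is unchanged.
assert agent.heuristics != {"explore": 1.0, "exploit": 1.0, "mutate_rate": 0.1}
assert agent.utility(10) == before
```

The point of the sketch is structural: value stability here comes from the utility function simply not being addressable by the modification machinery. A smarter agent that can rewrite its own code at the source level, rather than just a sanctioned dictionary of weights, has no such wall, which is why the trick wouldn’t transfer to a superintelligent agent.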