There are lots of reasons why an optimizable OS and tool chain are relevant:
- Control over the lower-level OS allows for significant performance gains (there have been significant algorithmic gains in process isolation, scheduling, and e.g. garbage collection at the OS level, all of which improve run-time).
- Access to a comparatively simple OS and tool chain allows the AI to spread to other systems. Writing a low-level virus is significantly simpler, more powerful, more effective, and easier to hide than spreading via a text interface.
- A kind of self-optimizing tool chain is presumably needed within an AI system anyway, and STEPS proposes a way to not only model but actually build one.
Even if you got a 10^6 speedup (you wouldn’t), that gain is not compoundable. So it’s irrelevant.
Only if those other systems are kind enough to run the O/S you want them to run.
It may be irrelevant in the end, but not in the beginning. I’m not really talking about the runaway phase of some AI but about the hard vs. non-hard takeoff, and there any factor will weigh heavily. A factor of 10^3 makes the difference between years and hours.
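Since the disagreement here is partly arithmetic, a minimal sketch may help. Both the exponential-growth model and all numbers below are my own illustrative assumptions, not claims from the thread: a constant 10^3 speedup does turn a years-long workload into an hours-long one, but against an exponential self-improvement curve it only buys a fixed head start rather than a compounding gain.

```python
import math

# Illustrative back-of-the-envelope sketch (assumed model, not from the thread).
HOURS_PER_YEAR = 365.25 * 24  # ~8766 hours

def rescaled_duration_hours(years: float, speedup: float) -> float:
    """Wall-clock hours for a fixed workload that takes `years` at 1x speed."""
    return years * HOURS_PER_YEAR / speedup

def head_start_from_speedup(speedup: float, growth_rate: float) -> float:
    """If capability grows as e^(growth_rate * t), a constant `speedup` factor
    is equivalent to advancing the clock by ln(speedup)/growth_rate --
    a fixed time offset, not a compounding gain."""
    return math.log(speedup) / growth_rate

print(rescaled_duration_hours(1.0, 1e3))  # one year of work at 10^3 speed: ~8.8 hours
```

The two functions capture the two sides: the first supports the "years vs. hours" point for a fixed workload; the second supports the "not compoundable" point, since the benefit of a constant factor on an exponential curve is a bounded time shift.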