The unstated assumption is that a non-negligible proportion of the difficulty in creating a self-optimising AI has to do with the compiler toolchain. I guess most people wouldn’t agree with that. For one thing, even if the toolchain is a complicated tower of Babel, why isn’t it good enough to just optimise one’s source code at the top level? Isn’t there a limit to how much you can gain by running on top of a perfect O/S?
(BTW the “tower of Babel” is a nice phrase which gets at the sense of unease associated with these long toolchains, e.g. Python → RPython → LLVM → ??? → electrons.)
There are lots of reasons why an optimizable OS and toolchain are relevant:
- Control over the lower-level OS allows for significant performance gains: there have been significant algorithmic gains in process isolation, scheduling and e.g. garbage collection at the OS level, all of which improve run-time.
- Access to a comparatively simple OS and toolchain allows the AI to spread to other systems. A low-level virus is significantly more simple, powerful, effective and easy to hide than spreading via a text interface.
- A kind of self-optimizable toolchain is presumably needed within an AI system anyway, and STEPS proposes a way to not only model but actually build one.
Even if you got a 10^6 speedup (you wouldn’t), that gain is not compoundable. So it’s irrelevant.
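To make the “not compoundable” point concrete, here is a toy model (entirely my own construction, not from the thread): a one-off constant speedup divides every self-improvement cycle by the same factor, scaling the timeline without changing its shape, whereas a per-cycle gain compounds and dominates the curve.

```python
# Toy model (assumptions mine): an AI runs successive self-improvement
# cycles of unit base cost. A one-off constant speedup divides every
# cycle by the same factor; a compounding per-cycle gain shrinks cycle n
# by per_cycle**n.

def total_time(cycles: int, constant: float = 1.0, per_cycle: float = 1.0) -> float:
    """Total wall-clock time for `cycles` cycles of unit base cost."""
    return sum(1.0 / (constant * per_cycle ** n) for n in range(cycles))

baseline = total_time(20)                    # 20.0
one_off = total_time(20, constant=1e6)       # 2e-05: same curve, just scaled down
compounding = total_time(20, per_cycle=2.0)  # ~2.0: later cycles are nearly free
```

The one-off 10^6 factor rescales the whole timeline but leaves its growth behaviour untouched, which is one way of reading the “not compoundable” objection.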
Only if those other systems are kind enough to run the O/S you want them to run.
It may be irrelevant in the end but not in the beginning. I’m not really talking about the runaway phase of some AI but about the hard or non-hard takeoff, and there any constant factor will weigh heavily: a 10^3 speedup can make the difference between years and hours.
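A back-of-the-envelope check on that last figure (the helper function is mine, just for the arithmetic): one year is roughly 8,766 hours, so dividing by 10^3 does indeed collapse years into hours.

```python
# Sanity check on the "years vs hours" claim: apply a constant 10^3
# speedup to a runtime measured in years. (Helper name is mine.)
HOURS_PER_YEAR = 365.25 * 24  # ~8766

def sped_up_hours(runtime_years: float, speedup: float) -> float:
    """Runtime in hours after applying a constant speedup factor."""
    return runtime_years * HOURS_PER_YEAR / speedup

print(sped_up_hours(1, 1e3))  # ~8.8 hours: a one-year run finishes within a day
print(sped_up_hours(5, 1e3))  # ~43.8 hours
```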