Seriously, I think Eliezer really needs this kind of reality-check wakeup, before his whole idea of “FOOM”, “recursion”, etc. turns into complete cargo-cult science.
While I think the basic premise (strong-AI friendliness) is a legitimate concern, many of his recent posts sound like he has read too much science fiction and watched the Terminator movies too many times.
There are some very basic issues with the whole recursion and singleton ideas. GenericThinker is right: the halting problem is very relevant there; in fact, it proves that the whole “recursive FOOM in 48 hours” scenario is bogus.
As for the ‘singleton’, if nothing else (and there is a lot else), the speed of light is a limiting factor. To react meaningfully to local information, you need an independent, intelligent local agent. No matter what you do, such an agent will always diverge from the singleton’s global policy. End of story; forget about singletons. Strong AI will be small and fast, and there will be a lot of units.
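To put rough numbers on the latency argument: here is a back-of-the-envelope sketch (my own illustration, not from the original comment) of the minimum one-way signal delay between physically separated agents, using only the vacuum speed of light and ignoring all routing and processing overhead. The distances chosen (roughly antipodal points on Earth, and Earth–Mars at closest approach) are illustrative assumptions.

```python
# Back-of-the-envelope: minimum one-way light-speed delay between distant agents.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(distance_km: float) -> float:
    """Minimum one-way signal delay in milliseconds (physics floor; real links are slower)."""
    return distance_km / C_KM_PER_S * 1000

# Illustrative distances: ~half Earth's circumference, and Earth-Mars at closest approach.
for label, km in [("across Earth", 20_000), ("Earth to Mars (closest)", 54_600_000)]:
    print(f"{label}: {one_way_delay_ms(km):.0f} ms minimum one-way")
```

Even the terrestrial case gives a hard floor of tens of milliseconds one-way, far slower than local processing, which is the crux of the claim that a centrally controlled singleton cannot react to local events as fast as an on-site agent can.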
So, while the basic premise, concern about strong-AI safety, remains, I think we should consider an alternative scenario: AI grows relatively slowly (but follows the pattern of the current, ongoing foom), and there is no singleton.
Nice thread.