Now, there’s a generalized answer. It even covers the possibility of meeting aliens—finding TDT is a necessary condition for reaching the stars. Harsh punishment of inconsiderate expanders might still be required, but there could be a stable equilibrium without ever actually inflicting that punishment. That’s a new perspective for me, thanks!
To be even more general, suppose that there is at least one thing X that is universally necessary for effective superintelligences to function. X might be knowledge of the second law of thermodynamics, TDT, a computational substrate of some variety, or any number of other things. There are probably very many such X’s, many of which are entirely non-obvious to any entity that is not itself a superintelligence (e.g. us). Furthermore, there may be at least one thing Y that is universally incompatible with effective superintelligence. Y might be an absolute belief in the existence of the deity Thor, or the desire only to solve the Halting Problem using a TM-equivalent. For the Hansonian model to hold, possessing every X and lacking every Y must be compatible with the desire and ability to expand and/or replicate.
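To make the shape of that condition concrete, here is a minimal sketch in Python. Every property name below is a hypothetical placeholder for illustration, not a claim about what the real X’s or Y’s are:

```python
# Toy formalization of the X/Y argument. All property names are
# invented placeholders, not claims about actual superintelligences.

REQUIRED_X = {"knows_second_law", "uses_TDT", "has_substrate"}   # some X's
FORBIDDEN_Y = {"absolute_belief_in_Thor", "halting_monomania"}   # some Y's

def is_effective(properties: set) -> bool:
    """Effective superintelligence: every X present, no Y present."""
    return REQUIRED_X <= properties and FORBIDDEN_Y.isdisjoint(properties)

def hansonian_model_holds(properties: set) -> bool:
    """The Hansonian picture additionally needs effectiveness to be
    jointly satisfiable with the drive to expand and/or replicate."""
    return is_effective(properties) and "expansionist_drive" in properties

# Example: an agent with all X's, no Y's, plus the expansionist drive.
agent = REQUIRED_X | {"expansionist_drive"}
print(hansonian_model_holds(agent))  # True
```

The worry developed below is that nothing guarantees we can actually populate those two sets.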
This argument is generally why I dislike speculating about superintelligences. It is impossible for ordinary humans to have exhaustive (or even useful, partial) knowledge of all X’s and all Y’s. The set of all Y’s in particular may not even be enumerable.
We cannot be sure that there are difficulties beyond our comprehension, but we are certainly able to assign probabilities to that hypothesis based on what we know. I would be justifiably shocked if something we could call a superintelligence couldn’t be formed based on knowledge that is accessible to us, even if the process of putting the seed of a superintelligence together is beyond us.
Humans aren’t even remotely optimised for generalised intelligence; it’s just a trick we picked up to, crudely speaking, get laid. There is no reason that an intelligence of the form “human thinking minus the parts that suck and a bit more of the parts that don’t suck” couldn’t be created using the knowledge available to us, and that is something we can easily place a high probability on. Then you run the hardware at more than 60 Hz.
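For a sense of what running the hardware faster buys on its own, here is a back-of-the-envelope calculation. The 100 Hz neuron rate and 1 GHz substrate are illustrative assumptions, not measurements of any actual design:

```python
# Rough subjective speedup from substrate speed alone (illustrative
# numbers; biological firing rates and achievable clock rates vary).
neuron_rate_hz = 100      # assumed peak firing rate of a biological neuron
substrate_rate_hz = 1e9   # assumed 1 GHz electronic substrate

speedup = substrate_rate_hz / neuron_rate_hz
seconds_per_subjective_year = 365 * 24 * 3600 / speedup

print(f"speedup: {speedup:,.0f}x")                  # 10,000,000x
print(f"one subjective year every "
      f"{seconds_per_subjective_year:.1f} seconds")  # ~3.2 seconds
```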
Oh, I agree. We just don’t know what self-modifications will be necessary to achieve non-speed-based optimizations.
To put it another way, if superintelligences are competing with each other and self-modifying in order to do so, predictions about the qualities those superintelligences will possess are all but worthless.
On this I totally agree!