Have we missed any plausible routes to superintelligence? (p50)
Unexpected advances in physics lead to super-exponential increases in computing power (such as were expected from quantum computing), allowing brute-force algorithms or a simulated ecosystem to achieve superintelligence.
Real-time implanted or worn sensors, combined with genomics, physiology simulation, and massive online collaboration, enable people to identify the self-improvement techniques that are useful to them.
Someone uses ancient DNA to make a Neanderthal, and it turns out Neanderthals died out because their superhuman intelligence was too metabolically expensive.
Reading all of LessWrong in a single sitting.
It seems like the characterization of outcomes into distinct “routes” is likely to be fraught; even if such a breakdown were in some sense exhaustive, I would not be surprised if actual developments didn’t really fit into the proposed framework. For example, there is a complicated and hard-to-divide space between AI, improved tools, brain-computer interfaces, better institutions, and better training. I expect that ex post the whole thing will look like a bit of a mess.
One practical result is that even if there is a strong deductive argument for X given “We achieve superintelligence along route Y” for every particular Y in Bostrom’s list, I would not take this as particularly strong evidence for X. I would instead view it as having considered a few random scenarios where X is true, and weighing this up alongside other kinds of evidence.
Besides the ones with extremely low likelihood (being handed superintelligence by the simulators of our universe, or aliens finding it first). However, this may be an artifact of Bostrom’s construction. If you partition the space of possible progress in a way where one of the categories captures “none of the above”, then it only seems as if the whole area has been searched.