It seems like the characterization of outcomes into distinct “routes” is likely to be fraught; even if such a breakdown were in some sense exhaustive, I would not be surprised if actual developments didn’t really fit into the proposed framework. For example, there is a complicated, hard-to-divide space spanning AI, improved tools, brain-computer interfaces, better institutions, and better training. I expect that ex post the whole thing will look like a bit of a mess.
One practical consequence: even if there is a strong deductive argument for X given “We achieve superintelligence along route Y” for every particular Y in Bostrom’s list, I would not take this as especially strong evidence for X. I would instead treat it as having considered a few more-or-less arbitrary scenarios in which X holds, and weigh that alongside other kinds of evidence.