Present-day humanity is a collective intelligence that is clearly ‘superintelligent’ relative to individual humans; yet Bostrom expresses little to no interest in this power disparity, and he clearly doesn’t think his book is about the 2014 human race.
So I think his definitions of ‘superintelligence’ are rough, and Bostrom is primarily interested in the invincible inhuman singleton scenario: the possibility of humans building something other than humanity itself that can vastly outperform the entire human race in arbitrary tasks. He’s also mainly interested in sudden, short-term singletons (the prototype being seed AI). Things like AGI and ems mainly interest him because they might produce an invincible singleton of that sort.
Wal-Mart and South Korea have a lot more generality and optimization power than any living human, but they’re not likely to become invincibly superior to rival collectives anytime soon, in the manner of a paperclipper, and they’re also unlikely to explosively self-improve. That matters more to Bostrom than whether they technically get defined as ‘superintelligences’. I get the impression Bostrom ignores that kind of optimizer more because it doesn’t fit his prototype (and because the short-term risks and benefits prima facie seem much smaller) than because of any detailed analysis of the long-term effects of power-acquiring networks.
It’s important (from Bostrom’s perspective) that the invincible singleton scenario is defined relative to humans at the time it’s invented; if we build an AGI in 2100 that’s superintelligent relative to 2014 humans, but stupid relative to 2100 humans, then Bostrom doesn’t particularly care (unless that technology might lead to an AI that’s superintelligent relative to its contemporaries).
It’s also important for the invincible singleton scenario, at least in terms of selecting a prototype case, that the optimizer be extrinsic to humanity (or, in the case of ems and biologically super-enhanced humans—which I get the impression are edge cases in Bostrom’s conceptual scheme—at least extrinsic to some privileged subset of humanity). That’s why it’s outside the scope of the book Superintelligence to devote a lot of time to the risks of mundane totalitarianism, the promise of a world government, or the general class of cases where humanity just keeps gradually improving in intelligence but without any (intragenerational) conflicts or values clashes. That holds even though it’s hard to define ‘superintelligence’ in a way that excludes governments, corporations, humanity-as-a-whole, etc.
(I get the vague feeling in Superintelligence that Bostrom finds ‘merely human’ collective superintelligence relatively boring, except in so far as it affects the likely invincible inhuman singleton scenarios. It’s not obvious to me that Hansonian em-world scenarios deserve multiple chapters while ‘Networks and organizations’ deserves only a fairly dismissive page-and-a-half mention; but if you’re interested in invincible singletons extrinsic to humanity, and especially in near-term AI pathways to such, it makes sense to see ems as more strategically relevant.)
Bostrom’s secondary interest is the effects of enhancing humans’ / machines’ / institutions’ general problem-solving abilities relative to ~2014 levels. So he does discuss things other than invincible singletons, and he does care about how human intelligence will change relative to today (much more so than he cares about superintelligence relative to, say, 900 BC). But I don’t think this is the main focus.
Wal-Mart and South Korea have a lot more generality and optimization power than any living human, but they’re not likely to become invincibly superior to rival collectives anytime soon, in the manner of a paperclipper, and they’re also unlikely to explosively self-improve.
I’m curious about what makes organizations unlikely to explosively self-improve, if one thinks other entities that reach super-human intelligence are by default likely to do so. Is it just that organizations have been superintelligent for a while and have not exploded so far? But perhaps this is a better question for the kinetics of intelligence explosions week, when we discuss the reasons for thinking anything will explosively self-improve.
That’s a good question. There are trivial senses in which Wal-Mart could become ‘superintelligent’ if sufficiently powerful emulations or AIs joined the organization. So I gather we’re interested in:
1. Are there plausible ways for an organization to rapidly become an invincible singleton without recourse to AI or emulations? (And without just attaining de facto dominance by peacefully subsuming rivals, like a World Government.)
2. If certain kinds of organization rapidly became an invincible singleton via AI or emulation technologies, would their peculiarities importantly change the strategic picture?
Re 1, if a single organization invents (and successfully monopolizes) a technology that quickly gives it vastly more wealth (or vastly more destructive power) than the entire rest of the planet, it could attain a dominant advantage even if it isn’t technically exploding in ‘general intelligence’. (The advantage might not count as ‘general’ because it’s a single narrow superpower that just happens to be strong enough to trump every other agency. Or it might not count as ‘intelligence’ because it’s a resource advantage rather than an intrinsic capability.)
Closer to the spirit of ‘intelligence explosion’ would be an organization that comes up with a clever way to biologically enhance its humans (e.g., an amazing new nootropic) or enhance the speed with which humans share information or filter out bad ideas. All of these examples, like the ones in the previous paragraph, rely on there being a huge first-mover advantage—either it’s easy to hide the necessary insights from other organizations, or at some threshold point the insights have an enormous effect on an extremely small timescale, or other organizations for some reason don’t want to mimic the first one. (Perhaps the game-changing technology is extremely taboo, and the invincible singleton arises because only one organization is willing to break the taboo within the first few years of the tech’s availability.)
I think there are two fairly distinct questions: whether an organization is likely to rapidly become much more superintelligent than it is, and whether it is likely to do this without other organizations catching up. I mostly mean to ask about the first.
You mention several improvements an organization could make to its intelligence; in an ‘intelligence explosion’, though, there would presumably be lots of improvements one after another. I’m thinking of the kinds of things you mention, along with improving the nature of interactions, what individual humans do in the organization, and so on; there seem to be many possibilities. However, I don’t mean to reason from the promise of any of these to the conclusion that there could be an organizational intelligence explosion. I rather mean to point out that the arguments for an AI intelligence explosion seem to apply just as well to other kinds of entity, such as organizations, since they don’t seem to make any reference to being a software agent. So if you (reasonably, I think) don’t expect human organizations to undergo an ‘intelligence explosion’ soon, you need a story about why the argument applies to AI but doesn’t apply to organizations. I don’t think such stories are that hard to come by, but it is good to think about.
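To make the substrate-neutrality point concrete, here is a minimal toy sketch (purely illustrative; the function and parameter values are invented for this example, not taken from Bostrom or the discussion above). The recursive-improvement loop itself never mentions what kind of entity is improving; whether growth looks ‘explosive’ or merely gradual depends entirely on how strongly each improvement compounds into the next.

```python
# Minimal sketch of the recursive self-improvement feedback loop.
# Nothing here refers to the improving entity being a software agent:
# the same loop could describe an AI rewriting its own code or an
# organization upgrading its hiring, tooling, and internal communication.

def capability_after(steps, start, returns):
    """Iterate c <- c + returns * c and return the capability after `steps` rounds."""
    c = start
    for _ in range(steps):
        c += returns * c  # each improvement makes the next one easier
    return c

# Hypothetical, purely illustrative parameter choices:
# strong compounding per improvement cycle (growth looks 'explosive')...
print(capability_after(steps=30, start=1.0, returns=0.5))   # ~190,000
# ...versus weak compounding (growth stays gradual).
print(capability_after(steps=30, start=1.0, returns=0.02))  # ~1.8
```

On this framing, the ‘story’ the argument needs is just a reason to expect the compounding factor to be far larger for a self-modifying AI than for a human organization (for instance, because software changes can be made, tested, and copied much faster than organizational reforms), rather than anything built into the explosion argument itself.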
Thanks for your thoughts.
What is this word ‘em’ you keep using?
Robin Hanson’s term for software-emulated humans.
Thanks for the very nice post.