Success here is unlikely to be absolute but instead will look like a constant battle against entropy beginning soon and lasting for the foreseeable future. A large population of evolving wild agents is the high entropy state of the universe. The potential energy well is deep and we must erect a large barrier to this state. From evolution’s perspective, humanity is quite a poor local minimum. But we like this minimum and want to stay here. We know what we need to do; the real question is whether we can implement and maintain our safeguards against constant and increasing entropy.
This is a profound passage that stands on its own.
While reading this essay, I was nodding along, thinking that I agree that the development of BCI seems to be among our best bets for ‘surviving’ AGI, if not the best bet. But boy, it’s really hard to get BCI, and the ensuing cyborgisation and hive-mindification of humanity, right: it decreases autonomy, increases network connectivity and therefore the overall fragility of society (Taleb-style reasoning), and creates whole new types of risks, from mind control/surveillance (exhaustively explored in sci-fi) to a “super mind virus” in the form of replicating “virus-skills” or bad memes. The transition to a BCI-mediated hive mind could also turn out badly politically, if not result in an all-out war, once there is competition between humans who are already “on the network” and those who aren’t, whether by choice or because there are insufficient resources to get everybody “connected” quickly enough[1].
Then, reading this last paragraph, I thought that maybe you had already accounted for these considerations in the sentence “Success here is unlikely to be absolute but instead will look like a constant battle against entropy beginning soon and lasting for the foreseeable future.”
However, this may also turn out surprisingly peaceful, with “primitivist” communities à la the Amish providing a steady “source” of brains for “the matrix”, while the populations of these communities don’t plummet thanks to relatively high fertility rates. This kind of peaceful balance is especially likely if “the matrix” only becomes possible after essentially all human labor is already obsolete in the economy, which also seems quite likely at the moment (because robotics is developing faster than BCI).
Would a BCI-mediated hive mind just be hacked and controlled by AGIs? Why would human brains be better at controlling AGIs when directly hooked into computers than they are when indirectly hooked in via keyboards? I understand that BCIs would theoretically lead to faster communication between humans, but the communication would go in both directions. How would you know when it was being orchestrated by humans and when it was being manipulated by an AGI?
No guarantee, of course; I pointed to these risks above, too. At least it seems plausible (to me, so far) that these risks could be robustly mitigated through some clever distributed system design. But this itself might be an illusion, and the problem may turn out to be practically unsolvable (in a robust way) on closer inspection.