To the extent that a dichotomy between “biological, evolutionarily evolved intelligent” life and “superintelligent robots” will even make sense in the far future (utopias, dystopias, almost any scenario in which intelligence still exists), we’d probably refer to ourselves as “superintelligent robots” at that stage.
The idea that, long after uploading becomes feasible, we’d still stick meatbags in spaceships to travel to distant stars is somewhat ludicrous. Unless aliens, through some unknown quirk in their utility function, really value their biological forms and have negentropy to waste, it’s a no-brainer that they’d travel and live in some resource-optimized (e.g. electronic) form.
There will always be a causal link from an “artificial” intelligence back to some naturally evolved intelligence. Whether we describe the inevitable “jumps” in between (e.g. humans creating AI) as breaking that link or merely as a transformation is just a definitional quibble. After a certain point in a civilization’s evolution, we’d call them “robots” (even if that word has some strange etymological connotation, since the majority wouldn’t necessarily be ‘working’).
There will always be a causal link from an “artificial” intelligence back to some naturally evolved intelligence.
A causal link, yes.
we’d probably refer to ourselves as “superintelligent robots” at that stage.
But only if we managed to enhance or upload or whatever else during the transition, with the distinguishing feature that we’d perceive the result as a continuation of our identity.
But if we created an AGI that didn’t care for our values or facilitate our transition, but just did whatever else its utility function decreed, then you’d still have ‘superintelligent robots’, but humans would merely have been the bootloader for them, just like dinos were the bootloader for mammals (kind of).
tl;dr: “Survival” inside an AGI does not require Friendliness, only that the AGI is able to create models of us that are good enough for us to accept as genuine copies.
I don’t think the AGI needs to care for our values in order to facilitate our transition. For the sake of argument, let’s assume an AGI that doesn’t care about human values; the Paperclip Maximizer will do.
Couldn’t this AGI, if it so chose, easily create something that we’d accept as a continuation of our identity? A digital copy of a human that is so convincing that this person (and the people who know him or her) could accept it as identical? Or a hyper-persuasive philosophy that tells people their non-copyable features (say, consciousness) are nonessential?
I imagine that it could (alternative discussed below). Which leads to the question: Would it?
I think it would (alternative discussed below). Any AGI that wants self-preservation would want to minimize the risk of conflict by appearing helpful, or at least non-threatening, at least until the cost of appearing so is greater than the cost of being repeatedly nuked. If it can convince people it is offering genuine immortality in upload form, its risk of being nuked is greatly reduced. It could delete the (probably imperfect) models once humans are no longer a threat, but only if it is so sure it will never again need specimens of the best work of Darwinian evolution that it would rather turn the comparatively tiny piece of computronium they occupy into paperclips. But how could it be sure?
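To make that trade-off a bit more concrete, here is a minimal sketch of the expected-cost comparison the argument relies on; the symbols are my own illustrative notation, not anything from the comment above: C_keep for the sliver of computronium tied up in maintaining the human models, p for the probability the AGI ever needs them again, and L for the loss it would incur if it had already paperclipped them. Keeping the models is then the better bet whenever

\[
% Illustrative notation only (assumed, not from the original discussion):
% C_keep : resources tied up in maintaining the human models
% p      : probability the models are ever needed again
% L      : loss incurred if they are needed but were already converted to paperclips
C_{\text{keep}} \;<\; p \cdot L .
\]

Since the models occupy only a comparatively tiny piece of computronium, C_keep is nearly negligible, while p is hard to argue down to exactly zero; that is the force of the “how could it be sure?” question.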
So unless it is much, much better at nanotech than it is at modeling people, I do expect that an Earth AGI would contain at least some vestiges of human identity (maybe even more of those than vestiges of oak or flatworm identity). Of course, these would be irrelevant to almost the entire rest of the system, because they’re not good enough at making paperclips to matter.
This leaves the scenarios where my assumptions are wrong and the Paperclip Maximizer is somehow either unable to create a persuasive “transition” offer, or decides against making it. Such Paperclip Maximizer variants don’t seem superintelligent to me (more like Grey Goo), but I suppose they could be built. However, in that case its lack of human values is only a problem because two other things are also missing: the ability to model humans, and a credible deterrent against it. The former might be an easier problem than Friendliness, if we’re only talking about our survival (as superintelligent robots or whatever) inside that AGI, not about actually having a say in what it does.