I agree generally, but I think when we talk about wiping out humanity we should include the idea that losing a significant portion of our accumulated information would be essentially the same as extinction. I don’t see a difference between a group of humans with stone age technology surviving the apocalypse and slowly repopulating the world and a different species (whether dogs, squirrels, or porpoises) doing the same thing.
See In Praise of Boredom and Sympathetic Minds: randomly evolved intelligent species are not guaranteed to be anything we would consider valuable.
I like humans. I think they’re cute :3
I don’t see a difference between a group of humans with stone age technology surviving the apocalypse and slowly repopulating the world and a different species (whether dogs, squirrels, or porpoises) doing the same thing.

We have pretty solid evidence that a group of humans with stone age technology can develop a technologically advanced society in a few tens of thousands of years. I imagine it would take considerably longer for squirrels to get there, and I would be much less confident they can do it at all. It may well be that human intelligence is an evolutionary accident that has only happened once in the universe.
The squirrel civilization would be a pretty impressive achievement, granted. The destruction of this particular species (humans) would seemingly be a tremendous loss universally, if intelligence is a rare thing. Nonetheless, I see it as only a certain vessel in which intelligence happened to arise. I see no particular reason why intelligence should be specific to it, or why we should prefer it over other containers should the opportunity present itself. We would have more in common with an intelligent squirrel civilization than with a band of gorillas, even though we would share more genetically with the latter. If I were cryogenically frozen and thawed out a million years later by the world-dominating Squirrel Confederacy, I would certainly live with them rather than seek out my closest primate relatives.
EDIT: I want to expand on this slightly. Say our civilization were to be completely destroyed, and a group of humans that had no contact with us were to develop a new civilization of their own concurrent with a squirrel population doing the same on the other side of the world. If that squirrel civilization were to find some piece of our history, say the design schematics of an electric toothbrush, and adopt it as a part of their knowledge, I would say that for all intents and purposes, the squirrels are more “us” than the humans, and we would survive through the former, not the latter.
I don’t see any fundamental reason why intelligence should be restricted to humans. I think it’s quite possible, though, that intelligence arising in the universe is an extremely rare event. If you value intelligence and think it might be an unlikely occurrence, then surely the survival of some humans rather than no humans should be a much preferred outcome?
I disagree that we would have more in common with the electric-toothbrush-wielding squirrels. I’ve elaborated more on that in another comment.
Preferred, absolutely. I just think that the survival of our knowledge is more important than the survival of the species sans knowledge. If we are looking to save the world, I think an AI living on the moon pondering its existence should be a higher priority than a hunter-gatherer tribe stalking wildebeest. The former is our heritage, the latter just looks like us.
Does this imply that you are OK with a Paperclip AI wiping out humanity, since it will be an intelligent life form much more developed than we are?
If I implied that, it was unintentional. All I mean is that I see no reason why we should feel a kinship toward humans as humans, as opposed to any species of people as people. If our civilization were to collapse entirely and had to be rebuilt from scratch, I don’t see why the species doing the rebuilding is all that important—they aren’t “us” in any real sense. We can die even if humanity survives. By that same token, if the paperclip AI contains none of our accumulated knowledge, we go extinct along with the species. If the AI contains some of our knowledge and a good degree of sentience, I would argue that part of us survives despite the loss of this particular species.
Bear in mind, the paperclip AI won’t ever look up from its work to the broader challenges of being a sentient being in the Universe; the only thing that will ever matter to it, until the end of time, is paperclips. I wouldn’t feel in that instance that we had left behind a creature that represented our legacy, no matter how much it knows about the Beatles.
OK, I can see that. In that case, maybe a better metric would be the instrumental use of our accumulated knowledge, rather than its mere possession. Living in a library doesn’t mean you can read, after all.
What I think you’re driving at is that you want it to value the Beatles in some way. Having some sort of useful crossover between our values and its is the entire project of FAI.
I’m just trying to figure out under what circumstances we could consider a completely artificial entity a continuation of our existence. As you pointed out, merely containing our knowledge isn’t enough. Human knowledge is a constantly growing edifice, where each generation adds to and builds upon the successes of the past. I wouldn’t expect an AI to find value in everything we have produced, just as we don’t. But if our species were wiped out, I would feel comfortable calling an AI which traveled the universe occasionally writing McCartney- or Lennon-inspired songs “us.” That would be survival. (I could even deal with a Ringo Starr AI, in a pinch.)
I strongly suspect that that is the same thing as a Friendly AI, and therefore I still consider UFAI an existential risk.
The Paperclip AI will optimally use its knowledge about the Beatles to make more paperclips.
How much of what it means to be human do you think is cultural conditioning versus innate biological tendency? I think the evidence points to a very large biologically determined element to humanity. I would expect to find more in common with a hunter-gatherer in a previously undiscovered tribe, or even with a Paleolithic tribesman, than with an alien intelligence or an evolved dolphin.
If you read ancient Greek literature, it is easy to empathize with most of the motivations and drives of the characters even though they lived in a very different world. You could argue that our culture’s direct lineage from theirs is a factor, but it seems that Westerners can recognize as fellow humans the minds behind ancient Chinese or Indian texts, which share less cultural heritage with our own.
I don’t consider our innate biological tendencies the core of our being. We are an intelligence superimposed on a particular biological creature. It may be difficult to separate the aspects of one from the other (and I don’t pretend to be fully able to do so), but I think it’s important that we learn which is which so that we can slowly deemphasize and discard the biological in favor of the solely rational.
I’m not interested in what it means to be human, I want to know what it means to be a person. Humanity is just an accident as far as I’m concerned. It might as well have been anything else.
I’m curious as to what sorts of goals you think a “solely rational” creature possesses. Do you have a particular point of disagreement with Eliezer’s take on the biological heritage of our values?
Oh, I don’t know that. What would remain of you if you could download your mind into a computer? Who would you be if you were no longer affected by the level of serotonin or adrenaline you are producing, or if pheromones didn’t affect you? Once you subtract the biological from the human, I imagine what remains to be pure person. There should be no difference between that person and one who was created intentionally or one that evolved in a different species, beyond their personal experiences (controlling for the effects of their physiology).
I don’t have any disagreement with Eliezer’s description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever. I could be wrong, however. It may not be possible to be a person without certain biological-like reactions. I can certainly see how this would be the case for people in early learning stages of development, particularly if your goal is to mold that person into a friendly one. Even then, though, I think it would be beneficial to keep those parts to the bare minimum required to function.
What would remain of you if you could download your mind into a computer?

That depends on the resolution of the simulation. Wouldn’t you agree?

Once you subtract the biological from the human, I imagine what remains to be pure person.

I think you’re using the word “biological” to denote some kind of unnatural category.

I don’t have any disagreement with Eliezer’s description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever.

The reasons you see for why any of us “should” do anything almost certainly have biologically engineered goals behind them in some way or another. What of self-preservation?
Not unnatural, obviously, but a contaminant to intelligence. Manure is a great fertilizer, but you wash it off before you use the vegetable.
I meant this kind of unnatural category. I don’t quite know what you mean by “biological” in this context. A high-resolution neurological simulation might not require any physical carbon atoms, but the simulated mind would presumably still act according to all the same “biological” drives.