If I implied that, it was unintentional. All I mean is that I see no reason why we should feel a kinship toward humans as humans, as opposed to any species of people as people. If our civilization were to collapse entirely and had to be rebuilt from scratch, I don’t see why the species doing the rebuilding is all that important; they aren’t “us” in any real sense. We can die even if humanity survives. By the same token, if the paperclip AI contains none of our accumulated knowledge, we go extinct along with the species. If the AI contains some of our knowledge and a good degree of sentience, I would argue that part of us survives despite the loss of this particular species.
Bear in mind, the paperclip AI won’t ever look up from its task to the broader challenges of being a sentient being in the Universe; the only thing that will ever matter to it, until the end of time, is paperclips. I wouldn’t feel in that instance that we had left behind a creature that represented our legacy, no matter how much it knows about the Beatles.
OK, I can see that. In that case, maybe a better metric would be the instrumental use of our accumulated knowledge, rather than its mere possession. Living in a library doesn’t mean you can read, after all.
What I think you’re driving at is that you want it to value the Beatles in some way. Having some sort of useful crossover between our values and its own is the entire project of Friendly AI (FAI).
I’m just trying to figure out under what circumstances we could consider a completely artificial entity a continuation of our existence. As you pointed out, merely containing our knowledge isn’t enough. Human knowledge is a constantly growing edifice, where each generation adds to and builds upon the successes of the past. I wouldn’t expect an AI to find value in everything we have produced, just as we don’t. But if our species were wiped out, I would feel comfortable calling an AI which traveled the universe occasionally writing McCartney- or Lennon-inspired songs “us.” That would be survival. (I could even deal with a Ringo Starr AI, in a pinch.)
I strongly suspect that that is the same thing as a Friendly AI, and therefore I still consider UFAI an existential risk.
The Paperclip AI will optimally use its knowledge about the Beatles to make more paperclips.
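A minimal sketch of that point, with all names and numbers invented for illustration: to a fixed-goal maximizer, knowledge has no value of its own. Information enters its decisions only by changing the expected paperclip count.

```python
# Toy illustration (hypothetical names and payoffs): a maximizer with a
# fixed terminal goal treats every piece of knowledge as purely
# instrumental.

def paperclips_yielded(action, knowledge):
    # Knowledge matters only through its effect on paperclip output.
    base = {"run_factory": 100, "hold_beatles_concert": 0}[action]
    if action == "hold_beatles_concert" and "beatles_trivia" in knowledge:
        # Cultural knowledge is valued solely as a means: say the
        # concert's ticket revenue buys wire for 40 more paperclips.
        base += 40
    return base

def choose(actions, knowledge):
    # The terminal goal never changes; only expected paperclips count.
    return max(actions, key=lambda a: paperclips_yielded(a, knowledge))

best = choose(["run_factory", "hold_beatles_concert"], {"beatles_trivia"})
print(best)  # run_factory: the concert matters only if it out-produces it
```

Swap in any cultural fact you like; unless it moves the paperclip count, it is, to this agent, worthless.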
How much of what it means to be human do you think is cultural conditioning versus innate biological tendency? I think the evidence points to a very large biologically determined element in humanity. I would expect to find more in common with a hunter-gatherer from a previously undiscovered tribe, or even with a Paleolithic tribesman, than with an alien intelligence or an evolved dolphin.
If you read ancient Greek literature, it is easy to empathize with most of the motivations and drives of the characters, even though they lived in a very different world. You could argue that our culture’s direct lineage from theirs is a factor, but Westerners seem just as able to recognize fellow humans in the minds behind ancient Chinese or Indian texts, which share far less cultural heritage with our own.
I don’t consider our innate biological tendencies the core of our being. We are an intelligence superimposed on a particular biological creature. It may be difficult to separate the aspects of one from the other (and I don’t pretend to be fully able to do so), but I think it’s important that we learn which is which so that we can slowly deemphasize and discard the biological in favor of the solely rational.
I’m not interested in what it means to be human; I want to know what it means to be a person. Humanity is just an accident as far as I’m concerned. It might as well have been anything else.
I’m curious as to what sorts of goals you think a “solely rational” creature possesses. Do you have a particular point of disagreement with Eliezer’s take on the biological heritage of our values?
Oh, I don’t know that. What would remain of you if you could download your mind into a computer? Who would you be if you were no longer affected by the level of serotonin or adrenaline you are producing, or if pheromones didn’t affect you? Once you subtract the biological from the human, I imagine what remains to be pure person. There should be no difference between that person and one who was created intentionally or one that evolved in a different species, beyond their personal experiences (controlling for the effects of their physiology).
I don’t have any disagreement with Eliezer’s description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever. I could be wrong, however. It may not be possible to be a person without certain biological-like reactions. I can certainly see how this would be the case for people in early learning stages of development, particularly if your goal is to mold that person into a friendly one. Even then, though, I think it would be beneficial to keep those parts to the bare minimum required to function.
What would remain of you if you could download your mind into a computer?

That depends on the resolution of the simulation. Wouldn’t you agree?
Once you subtract the biological from the human, I imagine what remains to be pure person.

I think you’re using the word “biological” to denote some kind of unnatural category.
I don’t have any disagreement with Eliezer’s description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever.

The reasons you see for why any of us “should” do anything almost certainly have biologically engineered goals behind them in some way or another. What of self-preservation?
Not unnatural, obviously, but a contaminant to intelligence. Manure is a great fertilizer, but you wash it off before you use the vegetable.
I meant this kind of unnatural category. I don’t quite know what you mean by “biological” in this context. A high-resolution neurological simulation might not require any physical carbon atoms, but the simulated mind would presumably still act according to all the same “biological” drives.