Bear in mind, the paperclip AI won’t ever look up to the broader challenges of being a sentient being in the Universe; the only thing that will ever matter to it, until the end of time, is paperclips. I wouldn’t feel in that instance that we had left behind a creature that represented our legacy, no matter how much it knows about the Beatles.
OK, I can see that. In that case, maybe a better metric would be the instrumental use of our accumulated knowledge, rather than its mere possession. Living in a library doesn’t mean you can read, after all.
What I think you’re driving at is that you want it to value the Beatles in some way. Having some sort of useful crossover between our values and its is the entire project of FAI.
I’m just trying to figure out under what circumstances we could consider a completely artificial entity a continuation of our existence. As you pointed out, merely containing our knowledge isn’t enough. Human knowledge is a constantly growing edifice, where each generation adds to and builds upon the successes of the past. I wouldn’t expect an AI to find value in everything we have produced, just as we don’t. But if our species were wiped out, I would feel comfortable calling an AI which traveled the universe occasionally writing McCartney- or Lennon-inspired songs “us.” That would be survival. (I could even deal with a Ringo Starr AI, in a pinch.)
I strongly suspect that that is the same thing as a Friendly AI, and therefore I still consider UFAI an existential risk.
The Paperclip AI will optimally use its knowledge about the Beatles to make more paperclips.