A lot of the steps in your chain are tenuous. For example, if I were making replicators, I’d ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3.
(Note: I won’t respond to anything you write here. I have too many things to respond to right now. But I saw the negative vote total and no comments, a situation I’d find frustrating if I were in it, so I wanted to give you some idea of what someone might disagree with/consider sloppy/wish they hadn’t spent their time reading.)
“For example, if I were making replicators, I’d ensure they were faithful replicators”
Isn’t this the whole danger of unaligned AI? It’s intelligent, it “replicates” and it doesn’t do what you want.
Besides physics-breaking 6, I think the only tenuous link in the chain is 5; that AI (“replicators”) will want to convert everything to computronium. But that seems like at least a plausible value function, right? That’s basically what we are trying to do. It’s either that or paperclips, I’d expect.
(Note: I applaud your commenting to explain the downvote.)
Well put! While you’re of course right in your implication that conventional “AI as we know it” would not necessarily “desire” anything, an evolved machine species would. Evolution would select for a survival instinct in them as it did in us. All of our activities that you observe falling along those same lines are driven by instincts programmed into us by evolution, which we should expect to be common to all products of evolution. I speculate a strong AI trained on human connectomes would also have this quality, for the same reasons.
>”A lot of the steps in your chain are tenuous. For example, if I were making replicators, I’d ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3.”
This assumes three things: First, the continued use of deterministic computing into the indefinite future. Quantum computing, though effectively deterministic, would also increase the opportunity for copying errors because of the added difficulty in extracting the result. Second, that the mechanism which ensures faithful copies could not, itself, be disabled by radiation. Third, that nobody would intentionally create robotic evolvers which not only fail to prevent mutations, but intentionally introduce them.
The article also addresses the possibility that strong AI itself, or self replicating robots, are impossible (or not evolvable) when it talks about a future universe saturated instead with space colonies:
>“if self replicating machines or strong AI are impossible, then instead the matter of the universe is converted into space colonies with biological creatures like us inside, closely networked. ‘Self replicating intelligent matter’ in some form, be it biology, machines or something we haven’t seen yet. Many paths, but to the same destination.”
>”But I saw the negative vote total and no comments, a situation I’d find frustrating if I were in it,”
I appreciate the consideration, but I assure you that I feel no kind of way about it. I expect that response, as it’s also how I responded when first exposed to ideas along these lines, mistrusting any conclusion so grandiose that I had not put it together on my own. LessWrong is a haven for people with that mindset, which is why I feel comfortable here and why I am not surprised, disappointed, or offended that they would also reject a conclusion like this at first blush, only coming around to it months or years later, upon doing the internal legwork themselves.