The majority of a Friendly AI’s ability to do good comes from its ability to modify its own code. Recursive self-improvement is key to gaining intelligence and ability swiftly. An AI that is about as powerful as a human is only about as useful as a human.
I disagree. AIs can be copied, which is a huge boost. You just need a single Stephen Hawking AI to come out of the population, then you make 1 million copies of it and dramatically speed up science.
I don’t buy any argument saying that an FAI must be able to modify its own code in order to take off. Computer programs that can’t modify their own code can be Turing-complete; adding self-modification doesn’t add anything to Turing-completeness.
That said, I do somewhat buy the argument that if an AI is allowed to write and execute arbitrary code, that is effectively a form of self-modification. Though I think there may be important differences.
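To make the "writing and executing code is kind of like self-modification" point concrete, here is a minimal sketch (my own illustration, not from the discussion): a program whose source file never changes can still acquire new behavior at runtime by generating code as a string and executing it.

```python
# Sketch: a program with fixed source gains new behavior by generating
# and executing code, which is operationally close to self-modification.

def generated_source():
    # Pretend this string was produced by the AI itself, e.g. as the
    # output of a search over candidate strategies (hypothetical).
    return "def strategy(x):\n    return x * 2\n"

namespace = {}
exec(generated_source(), namespace)   # load the freshly written code
strategy = namespace["strategy"]

print(strategy(21))                   # runs behavior the program "wrote"
```

The program's own code on disk is untouched, yet its effective behavior is whatever it chooses to generate, which is why the distinction between "self-modifying" and "code-executing" can blur in practice.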
It makes sense to say that a computer language is Turing-complete.
It doesn’t make sense to say that a computer program is Turing-complete.
Arguably, a computer program with input is a computer language. In any case, I don’t think this matters to my point.
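One way to see "a program with input is a language" is through a fixed interpreter: the interpreter program never changes, but the (interpreter, input) pair defines a language, and if that language is Turing-complete, so is the pair. A Brainfuck interpreter is a standard minimal example; the sketch below is my own illustration, not from the discussion.

```python
# Sketch: a fixed program (this interpreter) plus its input forms a
# language. Brainfuck is Turing-complete, so the pair is too, even
# though the interpreter itself never self-modifies.

def run_bf(code, tape_len=30000):
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    # Precompute matching brackets so loops jump in O(1).
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

# A Brainfuck program that prints "Hi" (H = 72, i = 105).
print(run_bf("+" * 72 + "." + "+" * 33 + "."))  # → Hi
```

The interpreter is one fixed program; everything interesting lives in its input, which is exactly the sense in which "program plus input" behaves like a language.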