The probability I assign to achieving a capability state where it is (1) possible to prove a mind Friendly even if it has been constructed by a hostile superintelligence, (2) possible to build a hostile superintelligence, and (3) not possible to build a Friendly AI directly, is very low.
A general theory of quarantines would nevertheless be useful.
Moral value can manipulate your concerns, even as you prevent causal influence. Maybe the AI will create extraordinary people in its mind, and use that as leverage to work on weak points of your defense. It’s just too difficult, you are bound to miss something. The winning move is not to play.
I disagree. The weak point of the scheme is the friendliness test, not the quarantine. If I prove the quarantine scheme will work, then it will work unless my computational assumptions are incorrect. If I prove it will work without assumptions, it will work without assumptions.
If you think that an AI can manipulate our moral values without ever getting to say anything to us, then that is a different story. That danger exists even before putting an AI in a box, though, and in fact even before designing an AI becomes possible. This scheme does nothing to exacerbate that danger.
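A minimal sketch of the division of labour being described, with all names hypothetical: the quarantine is only simulated here (the OP’s scheme would enforce it cryptographically, which is the part one could hope to prove under computational assumptions), and nothing leaves the box unless the separately written friendliness test accepts it.

```python
# Minimal sketch; names are hypothetical and the quarantine is merely simulated.
# The wrapper guarantees that the only channel out of the box is an output
# approved by the trusted, human-written acceptance test -- which is why the
# test, not the quarantine, is the weak point of the overall scheme.

def run_quarantined(untrusted_solver, problem, acceptance_test):
    """Run an untrusted solver; release its answer only if the test accepts."""
    candidate = untrusted_solver(problem)     # computed "inside the box"
    if acceptance_test(problem, candidate):   # the trusted component
        return candidate                      # the only thing ever released
    return None                               # otherwise nothing is revealed
```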
With a few seconds of thought it is easy to see how this is possible even without caring about imaginary people. This is a question of cooperation among humans.
This is a good point too, although I might not go so far as to say it does nothing to exacerbate the danger. The increased tangibility matters.
I think that running an AI in this way is no worse than simply having the code of an AGI exist. I agree that just having the code sitting around is probably dangerous.
Nod, in terms of direct danger the two cases aren’t much different. The difference in risk is only due to the psychological impact on our fellow humans. The Pascal’s Commons becomes that much more salient to them. (Yes, I did just make that term up. The implications of the combination are clear, I hope.)
Separate “let’s develop a theory of quarantines” from “let’s implement some quarantines.”
Christiano should take it as a compliment that his idea is formal enough that one could imagine proving that it doesn’t work. Other than that, I don’t see why your remark should go for “quarantining an AI using cryptography” and not “creating a friendly AI.”
Prove it. Prove it by developing a theory of quarantines.
I agree.
Sociopathic guardians would solve that one particular problem (and bring others, of course, though perhaps ones more easily countered).
You are parrying my example, but not the pattern it exemplifies (not to speak of the larger pattern of the point I’m arguing for). If certain people are insensitive to this particular kind of moral argument, they are still bound to be sensitive to some moral arguments. Maybe the AI will generate recipes for extraordinarily tasty foods for your sociopaths, or get-rich-quick schemes that actually work, or magically beautiful music.
Indeed. The more thorough solution would seem to be “find a guardian with a utility function such that the AI has nothing to offer them that you can’t trump with a counter-offer”. The existence of such guardians would depend on the upper estimates of the AI’s capabilities and on their employer’s means, and would be subject to your ability to correctly assess a candidate’s utility function.
Very rarely is the winning move not to play.
It seems especially unlikely to be the case if you are trying to build a prison.
For what?
The OP framed the scenario in terms of directing the AI to design an FAI, but the technique is more general. It’s possibly safe for any problem with a verifiable solution.
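As a toy illustration of “safe for any problem with a verifiable solution” (a hypothetical example, not one from the thread): factoring, where checking a candidate answer is cheap even when producing one is hard, so only a verified answer ever leaves the quarantine.

```python
# Toy illustration (hypothetical example): factoring has cheaply verifiable
# solutions, so a quarantined solver's answer can be checked before release.

def verify_factorisation(n, factors):
    """Accept only a complete factorisation of n into integers greater than 1."""
    product = 1
    for f in factors:
        if f <= 1:
            return False
        product *= f
    return product == n

def quarantined_factor(untrusted_solver, n):
    candidate = untrusted_solver(n)  # runs inside the quarantine
    return candidate if verify_factorisation(n, candidate) else None

# Stand-in "solvers" returning hard-coded answers:
print(quarantined_factor(lambda n: [3, 5, 7], 105))  # [3, 5, 7]: verified, released
print(quarantined_factor(lambda n: [3, 5, 8], 105))  # None: rejected, discarded
```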
People I don’t trust but don’t want to kill (or modify to cripple). A non-compliant transhuman with self-modification ability may not be able to out-compete an FAI, but if it is not quarantined it could force the FAI to burn resources to maintain dominance.
But it is something we can let the FAI build for us.
At what point does a transhuman become posthuman?
Shrug. For the purposes here they could be called froogles for all I care. The quarantine could occur at either stage, depending on the preferences being implemented.
You mean posthuman?