I disagree. The weak point of the scheme is the friendliness test, not the quarantine. If I prove the quarantine scheme will work, then it will work unless my computational assumptions are incorrect. If I prove it will work without assumptions, it will work without assumptions.
If you think that an AI can manipulate our moral values without ever getting to say anything to us, then that is a different story. This danger exists even before an AI is put in a box, though, and in fact even before designing an AI is possible. This scheme does nothing to exacerbate that danger.
> If you think that an AI can manipulate our moral values without ever getting to say anything to us, then that is a different story.
A few seconds of thought shows how this is possible even for someone who doesn't care about imaginary people: it is a question of cooperation among humans.
> This danger exists even before an AI is put in a box, though, and in fact even before designing an AI is possible. This scheme does nothing to exacerbate that danger.
This is a good point too, although I wouldn't go so far as to say it does nothing to exacerbate the danger. The increased tangibility matters.
I think that running an AI in this way is no worse than simply having the code of an AGI exist. I agree that just having the code sitting around is probably dangerous.
Nod; in terms of direct danger the two cases aren't much different. The difference in risk comes only from the psychological impact on our fellow humans: the Pascal's Commons becomes that much more salient to them. (Yes, I did just make that term up. The implications of the combination are, I hope, clear.)