Yeah, the first point started to make itself clear as I read through the comments. I think the ‘how do you verify...’ part of your post is the most compelling. Looking at it, we can’t know in advance whether there will be any problems with this. But since it was produced by human-level intellect, we should be able to understand it.
I had the second point in mind already; I think this scenario has a fair chance of happening, but a large group of humans would be able to stop the AI’s attempts. Of course, they could still let it slip through their fingers, which would be Bad. I would say that’s preferable to having FAI turn out to be beyond us and waiting until some tit decides to go ahead with an AGI anyway. But that doesn’t make it a pleasant choice.
All in all, I think this avenue isn’t so terrible, but it is still fairly risky. It seems like the sort of thing you’d use as maybe a Plan 3, or a Plan 2 if someone came along and made it much more robust.