You may have a point there. But I think the problems you’ve outlined are ones we could circumvent.
With 1): We don’t know exactly how to describe what an FAI should do, or be like, so we might present an AI with the challenge of ‘what would an FAI be like for humanity?’ and then use its answer as a goal for FAI research.
2) I should think that it’s technically possible to construct it in such a way that it can’t just become a super-intellect, whilst still allowing it to grow in pursuit of its goal. I would have to think for a while to present a decent starting point for this task, but I think it is more reasonable than solving the FAI problem.
3) I should think that this is partly circumvented by 1): if we know what it should look like, we can examine it to see if it’s going in the right direction. Since it will be constructed by a human-level intellect, we should notice any errors, and if anything does slip through, then the next AI would be able to pick it up. I mean, that’s part of the point of the couple-of-years (or less) time limit; we can stop an AI before it becomes too powerful, or malevolent, and the next AI would not be predisposed in that way, so we can make sure it doesn’t happen again.
Thanks for replying, though. You made some good points. I hope I have adjusted the plan so that it is more to your liking (not sarcastic, I just didn’t know the best way to phrase that).
EDIT: By the way, this is strictly a backup. I am not saying that we shouldn’t pursue FAI. I’m just saying that this might be a reasonable avenue to pursue if it becomes clear that FAI is just too damn tough.