1): I should have expressed myself more clearly. The idea is: there will be lots of AIs. Most will be put in a box. The first one not in a box will take over the world.
3): I am not saying they were sufficiently careful. All I am saying is that they were more careful than the other guys.
Agreed, but IFF there are multiple boxed AIs, then we get to choose between them. So the comparison is p(this boxed AI is unfriendly) vs. p(the NEXT AI isn't boxed). If the next AI is also boxed, we then have two candidates, and we can probably use that to our advantage (studying differences in their responses, using one to confirm proofs from the other, etc.)
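Put slightly more explicitly (my own restatement of that shorthand, not anything proved here): keeping this AI boxed and waiting for the next candidate is the better bet exactly when

$$P(\text{this boxed AI is unfriendly}) > P(\text{the next AI is not boxed}),$$

given the thread's premise that the first unboxed AI takes over the world, so an unboxed AI is the unrecoverable failure mode while a boxed one stays containable.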
Given the minimal safety precaution of “box it, but allow a single researcher to set it free after a 5-hour conversation”, there’s plenty of room for the next boxed AI to show more evidence of friendly, careful, safe design :)