Let me see if I understand. Firstly, is there any reason why the thing you’re trying to do has to be creating a friendly AI? Would, for instance, getting an unknown AI to solve a specific numerical problem with an objectively checkable answer be an equally relevant example, without the distraction of whether we would ever trust the so-called friendly AI?
I think Less Wrong needs a variant of Godwin’s Law: any post whose content would be just as meaningful and accessible without mentioning Friendly AI shouldn’t mention Friendly AI.
Fair enough. I am going to rework the post to describe the benefits of a provably secure quarantine in general rather than in this particular example.
The main reason I discuss friendliness is that I can’t believe such a quarantine would hold up for long if the boxed AI were doing productive work for society. It would almost certainly get let out without ever saying anything at all. It seems like the only real hope is to use its power to somehow solve FAI before the existence of an uFAI becomes widely known.
LOL. Good point. Although it’s a two-way street: I think people did genuinely want to talk about the AI issues raised here, even though they were presented as hypothetical premises for a different problem rather than as talking points.
Perhaps the orthonormal law of Less Wrong should be, “if your post is meaningful without FAI, but may be relevant to FAI, make the point in the least distracting example possible, and then go on to say how, if it holds, it may be relevant to FAI”. Although that’s not as snappy as Godwin’s :)
I agree. In particular, I think there should be some more elegant way to tell people things along the lines of ‘OK, so you have this Great Moral Principle; now let’s see you build a creature that works by it’.