You’ve already said the friendly AI problem is terribly hard, and there’s a large chance we’ll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be “friendly”, making your design task all that harder?
While we are on the topic, the problem I see in this area is not that friendliness has too many extra conditions appended on it. It’s that the concept is so vague and amorphous that only Yudkowsky seems to know what it means.
When I last asked what it meant, I was pointed to the CEV document, which reads to me like a rambling word salad; I have great difficulty taking it seriously. The most glaring problem with the document, from my point of view, is that it assumes everyone knows what a “human” is. That might be obvious today, but in the future things could well get a lot blurrier, especially if it is decreed that only “humans” have a say in the proposed future. Do uploads count? What about cyborgs? And so on.
If it is proposed that everything in the future revolves around “humans” (until the “humans” say otherwise) then—apart from the whole issue of whether that is a good idea in the first place—we (or at least the proposed AI) would first need to know what a “human” is.