Phil, I don’t see the point in criticizing a flawed implementation of CEV. If we don’t know how to implement it properly, if we don’t understand how it’s supposed to work in much more technical detail than the CEV proposal includes, it shouldn’t be implemented at all, any more than a garden-variety unFriendly AI should. If you can point out a genuine flaw in a specific scenario of an FAI’s operation, a correct implementation of CEV shouldn’t lead to that outcome. To answer your question: yes, CEV could decide to disappear completely, construct an unintelligent artifact, or produce an AI with some strange utility function. It makes a single decision, an attempt to carry humane values across the threshold of our inability to self-reflect, and what comes of it is anyone’s guess.