To refine both of our ideas: I was thinking that safety for an autonomous or unleashed AI was practically the same thing as optimality.
But I agree that there may be systems of containment that could make certain AI designs safe without requiring optimality.