Steve Omohundro has given several talks on the consequences of a purely logical or rationally exact AI system.
His talk at the Singularity Summit 2007, "The Nature of Self-Improving Artificial Intelligence," discussed what would happen if such an agent were constrained by the wrong rules. I took a purely logical system to be one possible agent type he was referring to.