However, our world serves as a single counterexample to the rule that all imperfect optimization will be disastrous.
Except that the proposed rule is more like: given an imperfect objective function, the outcome is likely to turn from ok to disastrous at some point as optimization power is increased. See the Context Disaster and Edge Instantiation articles on Arbital.
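To make the shape of the claim concrete, here is a minimal toy simulation (my own illustration, not taken from the Arbital articles; the utility functions and the lognormal candidate distribution are arbitrary assumptions). The proxy agrees with the true objective on ordinary outcomes but diverges in the tail, so giving the optimizer more candidates to choose from moves the selected outcome from ok to disastrous:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true objective: good for moderate x, disastrous for large x.
def true_utility(x):
    return x - 0.1 * x**2

# Imperfect proxy objective: matches the true objective's preferences for
# small x, but keeps rewarding larger x without limit.
def proxy_utility(x):
    return x

# Crude stand-in for "optimization power": the number of candidate
# outcomes the optimizer gets to pick the proxy-best from.
for power in [10, 100, 10_000, 1_000_000]:
    candidates = rng.lognormal(mean=0.0, sigma=1.0, size=power)
    chosen = candidates[np.argmax(proxy_utility(candidates))]
    print(f"power={power:>9}  chosen x={chosen:7.1f}  "
          f"true utility={true_utility(chosen):9.1f}")
```

At low power the proxy-best candidate still scores well on the true objective; as power grows, the optimizer reliably finds the tail points where the two objectives come apart, which is roughly the Edge Instantiation pattern.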
The idea of context disasters applies to individual humans and humanity as a whole, not just AIs, since, as you mentioned, we are already optimizing for something that is not exactly our true values. Even without the possibility of AI, we have a race between technological progress (which increases our optimization power) and progress in coordination and in understanding our values (which improves our objective function), a race we can easily lose.