Michael, it seems that you are unaware of Eliezer’s work. Basically, he agrees with you that vague appeals to “emergence” will destroy the world. He has written a series of posts showing why almost all possible superintelligent AIs are dangerous. He has also proposed a theory, called Coherent Extrapolated Volition, which he thinks is a decent recipe for a “Friendly AI”. I think it needs some polish, but I assume he won’t program it as it stands now. He’s actually holding off on implementation, specifically because he’s afraid of messing up.