With regards to your (and Eliezer’s) quest, I think Oppenheimer’s Maxim is relevant:
> It is a profound and necessary truth that the deep things in science are not found because they are useful; they are found because it was possible to find them.
A theory of machine ethics may well be the most useful concept humanity could ever discover. But as far as I can see, there is no reason to believe that such a theory can be found.
Daniel_Burfoot,
I share your pessimism. When superintelligence arrives, humanity is almost certainly fucked. But we can try.