In 2008, which is a very long time ago, Eliezer wrote, hugely paraphrased:
There are tons of huge blobs of computation. Some of them look a lot like "Did anyone get killed? Are people happy? Are they in control of their own lives? …" If I were to know more, upgrade my various forms of brainpower, be presented with all the good-faith arguments about morality, etc., then when I did moral reasoning I would converge on one of those huge blobs that look a lot like "Did anyone get killed? Are people happy? Are they in control of their own lives? …" That huge blob of computation is what I call "right". Right now, my moral intuitions are some evidence about the huge blob I'd converge to.

Other humans would also converge to some huge blob of computation, and I have hopes it would look extremely similar to the one I'd converge to. Maybe it would even be identical! This is plausible because good-faith arguments about morality are likely to work similarly on similar intelligence architectures/implementations, and if we ended up with, say, 3 similar blobs, it seems fairly likely every(enhanced)one, upon inspection of all 3, would choose the same 1. But at the very least there would be enormous overlap.

Regardless, "right", aka the blob of computation, is a thing that exists whether or not humans exist, and luckily our moral intuitions give us evidence of what that blob is. Certainly intelligences could exist which don't have good pathways to discovering it and, worse, don't care about it, but we don't like them. They're "wrong".
https://www.lesswrong.com/posts/fG3g3764tSubr6xvs/the-meaning-of-right