I might prefer that Eliezer grab the One Ring rather than the others who are also reaching for it, but that doesn’t mean I wouldn’t rather see the ring destroyed, or broken into smaller pieces for more even distribution.
I haven’t met Eliezer. I’m sure he’s a pretty nice guy. But do I trust him to create something that may take over the world? No, definitely not. I find it extremely unlikely that selflessness is the causal factor behind his wanting to create a friendly AI, however much he may claim, or even believe, that it is. Genes and memes do not reproduce via selflessness.
What if creating a friendly AI isn’t about creating a friendly AI?