That is combined with going about implicitly (it’s this implicit part that I particularly don’t like) assuming that “all of humanity” is what CEV must be run on. I can’t know that CEV&lt;humanity&gt; will not kill me. Even if it doesn’t kill me, it is nearly tautologically true that CEV&lt;me&gt; is better (in the subjectively objective sense of ‘better’).
Here’s the trouble, though: by the same reasoning, if someone is implementing CEV&lt;white people&gt; or CEV&lt;Russian intellectuals&gt; or CEV&lt;Orthodox Gnostic Pagans&gt; or any such, everyone who isn’t a white person, Russian intellectual, or Orthodox Gnostic Pagan has a damned good reason to be worried that it’ll kill them.
Now, it may turn out that CEV&lt;Orthodox Gnostic Pagans&gt; is sufficiently similar to CEV&lt;humanity&gt; that the rest of humanity needn’t worry. But is that a safe bet for all of us who aren’t Orthodox Gnostic Pagans?
For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.
So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.
> For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.
YES! CEV is altruism-inclusive. For some reason it is often really hard to make people understand that the altruism belongs inside the CEV calculation while the compromise-for-instrumental-purposes goes on the outside.
> So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.
This is true all else being equal. (The ‘all else’ being specifically that you are just as likely to succeed in creating FAI&lt;CEV&lt;you&gt;&gt; as you are in creating FAI&lt;CEV&lt;humanity&gt;&gt;.)
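To make that inside/outside distinction concrete, here is a minimal toy sketch. Every name and number in it is a hypothetical illustration, not an actual CEV algorithm: the builder’s altruism appears as a weight inside their extrapolated utility function, while the choice of whose volition to extrapolate is an instrumental decision made outside it, weighted by the probability that the project succeeds at all.

```python
# Toy sketch of "altruism inside the CEV calculation, instrumental
# compromise outside". Purely illustrative; all weights, outcomes, and
# probabilities below are made-up assumptions.

def extrapolated_utility(agent_weights, outcome):
    """CEV<agent> as a stand-in utility function. If the agent is
    altruistic, other people's welfare already shows up *inside* here."""
    return sum(w * outcome[person] for person, w in agent_weights.items())

# A builder who genuinely cares about others: the altruism is a weight
# inside their own extrapolated volition, not a bolted-on compromise.
builder_cev = {"builder": 1.0, "everyone_else": 0.9}

# Two hypothetical outcomes (per-person welfare), numbers made up.
outcome_own = {"builder": 10.0, "everyone_else": 8.0}  # FAI<CEV<builder>>
outcome_all = {"builder": 9.0, "everyone_else": 9.0}   # FAI<CEV<humanity>>

# The *outside* decision is instrumental: which project is more likely
# to succeed at all? ("All else being equal" = equal odds here.)
p_success = {"own": 0.5, "humanity": 0.5}

ev_own = p_success["own"] * extrapolated_utility(builder_cev, outcome_own)
ev_all = p_success["humanity"] * extrapolated_utility(builder_cev, outcome_all)
print("EV of building FAI<CEV<builder>> :", ev_own)
print("EV of building FAI<CEV<humanity>>:", ev_all)
```

With equal success odds the comparison turns entirely on the made-up welfare numbers; shift p_success and the instrumental compromise can dominate instead, which is exactly the point of the ‘all else being equal’ caveat.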
> For some reason it is often really hard to make people understand
IAWYC, but who doesn’t get this?
Given our attitude toward politics, I’d expect little if any gain from replacing ‘humanity’ with ‘Less Wrong’. Moreover, others would correctly take our exclusion of them as evidence of a meaningful difference if we actually made this decision. And I can’t write an AGI by myself, nor can the smarter version of me calling itself Eliezer.
Compromise is often necessary for the purpose of cooperation, and CEV is a potentially useful Schelling point to agree upon. However, it should be acknowledged that these considerations are instrumental, or at least that they are decisions to be made. Eliezer’s discussion of the subject up until now has been completely innocent of even an awareness that ‘humanity’ is not the only thing that could conceivably be plugged in to CEV. This is, as far as I am concerned, a bad thing.
> IAWYC, but who doesn’t get this?
I don’t recall the names. The conversations would be archived though if you are interested.
> Eliezer’s discussion of the subject up until now has been completely innocent of even awareness of the possibility that ‘humanity’ is the only thing that could conceivably be plugged in to CEV. This is, as far as I am concerned, a bad thing.
Huh?
Fixed.