For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.
So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.
> For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.
YES! CEV is altruism-inclusive. For some reason it is often really hard to make people understand that the altruism belongs inside the CEV calculation while the compromise-for-instrumental-purposes goes on the outside.
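To make the inside/outside distinction concrete, here is a minimal toy sketch (nothing from the CEV document itself; the value dictionaries, the weights, and the `extrapolate` stub are all invented for illustration). Putting altruism inside means others’ welfare is a term in your value function *before* extrapolation; putting compromise outside means separately extrapolated volitions get merged by an instrumental bargain afterward.

```python
# Toy model of the inside/outside distinction. All names, weights,
# and the extrapolate() stub are invented for illustration only.

def extrapolate(values):
    """Stand-in for CEV's 'knew more, thought faster, grown up further
    together' idealization; here it is just the identity."""
    return values

def keys(*dicts):
    out = set()
    for d in dicts:
        out |= set(d)
    return out

# Altruism INSIDE: my value function already contains a weighted term
# for other people's welfare, so FAI<my CEV> serves them to exactly
# the degree my extrapolated values say it should.
def cev_with_internal_altruism(mine, others, altruism_weight):
    combined = {k: mine.get(k, 0.0) + altruism_weight * others.get(k, 0.0)
                for k in keys(mine, others)}
    return extrapolate(combined)

# Compromise OUTSIDE: each party's volition is extrapolated separately,
# then merged by an instrumental bargain (here, a simple blend).
def external_compromise(mine, others, bargaining_weight):
    a, b = extrapolate(mine), extrapolate(others)
    return {k: (1 - bargaining_weight) * a.get(k, 0.0)
               + bargaining_weight * b.get(k, 0.0)
            for k in keys(a, b)}
```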
> So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.
This is true all else being equal. (The ‘all else’ being specifically that you are just as likely to succeed in creating FAI<your CEV> as you are in creating FAI<humanity’s CEV>.)
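The caveat is doing real work: if the broader project has better odds of success, the expected-value comparison can flip. A back-of-the-envelope sketch with invented numbers (none of these probabilities or utilities come from the thread):

```python
# All numbers invented for illustration.
u_own = 1.0        # value to you of FAI<your CEV>, your own optimum
u_humanity = 0.9   # value to you of FAI<humanity's CEV>
p_own = 0.10       # chance you succeed pursuing your own CEV
p_humanity = 0.15  # chance a broader coalition succeeds

# All else equal (p_own == p_humanity), your own CEV dominates, since
# u_own >= u_humanity from your perspective. Unequal odds can flip it:
print(p_own * u_own)            # 0.10
print(p_humanity * u_humanity)  # 0.135 -- the coalition wins here
```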
> For some reason it is often really hard to make people understand
IAWYC, but who doesn’t get this?
Given our attitude toward politics, I’d expect little if any gain from replacing ‘humanity’ with ‘Less Wrong’. Moreover, others would correctly take our exclusion of them as evidence of a meaningful difference if we actually made this decision. And I can’t write an AGI by myself, nor can the smarter version of me calling itself Eliezer.
I don’t recall the names. The conversations would be archived, though, if you are interested.