Of course an arbitrarily chosen human’s values are more similar to the aggregated values of humanity as a whole than humanity’s values are to an arbitrarily chosen point in value-space. Value-space is big.
I don’t see how my point depends on that, though. Your argument here claims that “FAI” is easier than “LukeFriendlyAI” because LFAI requires an additional step of defining the target, and FAI doesn’t require that step. I’m pointing out that FAI does require that step. In fact, target definition for “humanity” is a more difficult problem than target definition for “Luke”.
Yes, it depends on whether you think Luke is more different from humanity than humanity is from StuffWeCareNotOf.