Regardless, the kind of free-form evolution in values, philosophy, and governance/coordination systems that we’ve enjoyed for most of human history would become a thing of the past.
I think there is an extent to which we want to lock in our values, or meta-values, or value-update rules anyway, regardless of the issues about far coordination, because they are our values. If you wind back time far enough and let a bunch of Homo erectus lock in their values, they would choose somewhat differently. Now, I won’t say “Tough, sucks to be a Homo erectus.” The rules we choose to lock in may well be good by Homo erectus values. We might set meta-level rules that pay attention to their object-level values. Our object-level values might be similar enough that they would think well of our optimum. Remember, “not exactly the whole universe optimized to max util” != “bad”.
If baby-eating aliens came and brainwashed all humans into baby-eating monsters, you have to say: “No, this isn’t what I value; this isn’t anything close. And getting brainwashed by aliens doesn’t count as the ‘free-form evolution of values’ I was thinking of, either. I was thinking of ethical arguments swaying humans’ opinions, not brainwashing. (Actually, the difference between those can be subtle.) The object level isn’t right. The meta level isn’t right either. This is just wrong.” I want to lock in our values, at least to a sufficient extent to stop this.