How do you motivate the embedded assumption that there is no such thing as harmless variation?
I was thinking about less-than-ideal variations more than explicitly harmful ones. If we’re optimizing for a set of values (say happiness, intelligence, virtuousness) through birth and environment, then I thought it unlikely that we’d have multiple options that all achieve the exact same maximum. If there are, then yes, the identical-people part doesn’t hold; and if there’s more than one option, there are probably many, so there might not be any identical people at all.
Yes, it’s unlikely that the utility turns out literally identical. However, people enjoy having friends that aren’t just clones of themselves. (Alright, I don’t have evidence for this, but it seems like something people might enjoy.) Hence it’s possible for a mixture of different types of people to be happier than either type would be on its own.
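To make that concrete, here is a minimal toy sketch (the numbers and the "cross-type friendship bonus" are entirely made up, not part of the original scenario): if each person gets a small bonus per friend of a different type, a mixed population can beat either homogeneous one even though type B individuals are slightly "less optimal" on their own.

```python
# Toy model with invented numbers: a base utility per person by type,
# plus a small bonus for every cross-type pair (friends unlike yourself).
def total_utility(n_a, n_b, base_a=1.00, base_b=0.95, cross_bonus=0.10):
    base = n_a * base_a + n_b * base_b
    cross_pairs = n_a * n_b          # only mixed pairs get the bonus
    return base + cross_bonus * cross_pairs

print(total_utility(10, 0))  # all type A: 10.0
print(total_utility(0, 10))  # all type B:  9.5
print(total_utility(5, 5))   # mixed:      12.25
```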
If you use some computational theories of consciousness, there is no morally meaningful difference between one mind and two copies of the same mind.
https://slatestarcodex.com/2015/03/15/answer-to-job/
Given the large but finite resources of reality, it is optimal to create a fair bit of harmless variation.
This is true, yeah, but I think that’s more a natural human trait than something intrinsic to sentient life. It’s possible that entirely different forms of life would still be happier with different types of people, but if that happiness is what we value, wouldn’t replicating it directly achieve the same effect?
Maybe there’s a combination of birth and environment conditions that maximizes utility for an individual, but we may have different values for society in general, and those would yield a lower overall utility for a society of identical people. For example, we generally value diversity, and I think the utility function we use for society in general would probably return a lower result for a population of identical, optimally born/raised people than for a diverse population of slightly-less-than-optimally born/raised people.
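One way to picture that, purely as a hypothetical sketch (the diversity term and its weight are my own invention, not a claim about what the "real" societal utility function looks like): add an explicit diversity bonus, here the entropy of the type distribution, on top of the summed individual utilities.

```python
# Hypothetical societal utility = sum of individual utilities
# + an invented diversity bonus (entropy of the type distribution).
import math
from collections import Counter

def societal_utility(people, diversity_weight=2.0):
    """people: list of (type_label, individual_utility) pairs."""
    individual = sum(u for _, u in people)
    counts = Counter(t for t, _ in people)
    n = len(people)
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return individual + diversity_weight * n * entropy

identical_optimal  = [("A", 1.0)] * 10
diverse_suboptimal = [(t, 0.9) for t in "ABCDEFGHIJ"]

print(societal_utility(identical_optimal))   # 10.0 (zero diversity bonus)
print(societal_utility(diverse_suboptimal))  # ~55.1, despite lower individual utility
```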
If we hold diversity as a terminal value, then yes, a diverse population of less-than-optimal people is better. But don’t we generally see diversity less as a terminal value than as something that’s useful because it approximates terminal values?
I think at least some people do, but I don’t have a good argument or evidence to support that claim. Even if your only terminal values are more traditional conceptions of utility, diversity still serves those values really well. A homogeneous population is not just more boring, but also less resilient to change (and to pathogens, depending on the degree of homogeneity). I think it would be shortsighted and overconfident to design an optimal, identical population, since they would lack the resilience and variety of experience to maintain that optimum once any problems appeared.
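As a toy illustration of the resilience point (the genotypes and the "shock" are invented for the sake of the example, not a model of anything real): a single shock that eliminates one susceptible type removes the entire homogeneous population but only a fraction of the diverse one.

```python
# Toy shock model: one genotype is wiped out entirely.
import random

def survivors_after_shock(population, rng):
    hit = rng.choice(sorted(set(population)))      # the shock targets one genotype
    return [g for g in population if g != hit]

rng = random.Random(0)
homogeneous = ["A"] * 100
diverse = [rng.choice("ABCDE") for _ in range(100)]

print(len(survivors_after_shock(homogeneous, rng)))  # 0 -- everyone shared the vulnerability
print(len(survivors_after_shock(diverse, rng)))      # ~80 -- only one genotype is lost
```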
Boringness only matters if they consider it a negative, which isn’t a necessity (boredom being something we could edit out if needed).
Re: resilience, I agree that those are good reasons not to try anything like this today or in the immediate future. But at a point far enough out that we understand our environment precisely enough not to have to worry much about external threats, would that still hold? Or do you think that kind of future isn’t possible? (Realistically, and outside the simplified scenario, an AGI could take care of any future problems without our needing to trouble ourselves.)