Too bad. Let’s just agree to disagree then, until the brain scanning technology is sufficiently advanced.
Or until you provide the evidence that causes you to hold your opinions.
So far, I haven’t seen a convincing example of a person who truly wishes for everyone to die, even in extrapolation.
I think it’s plausible such people exist. On the other hand, if you fine-tune your implementation of “extrapolation” so that their extrapolated values come out radically different from their current values (and, incidentally, match your own current values), that’s not what CEV is supposed to be about. But before talking about that, there’s a more important point:
To them, yes, but not to their CEV.
So why do you care about their extrapolated values? If you think CEV will extrapolate something that matches your current values but not those of many others, and you don’t want to forcibly change others’ actual values to match their extrapolated ones, so they will suffer in the CEV future, then why extrapolate their values at all? Why not just ignore them and extrapolate your own, if you have the first-mover advantage?
Extrapolated values are the true values, whereas current values are approximations, sometimes very bad and corrupted ones.
What makes you give them the label “true”? There is no such thing as a “correct” or “objective” value. All values are possible, in the sense that there can be agents with all possible values, even paperclip-maximizing ones. The only interesting property of values is who actually holds them. But nobody actually holds your extrapolated values (today).
Current values (and values in general) are not approximations of any other values. All values just are. Why do you call them approximations?
they will suffer in the CEV future
This does not follow.
In your CEV future, the extrapolated values are maximized. Conflicting values, like the actual values held today by many or all people, are necessarily not maximized. In proportion to how much this happens, which is positively correlated with the difference between actual and extrapolated values, people who hold the actual values will suffer from living in such a world. (If the AI is a singleton, they will not even have a hope of a better future.)
Briefly: suffering ~ failing to achieve your values.
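To make that proportionality concrete, here is a throwaway toy model (the vectors, the numbers, and the shortfall measure are all made up for illustration; none of this comes from any CEV write-up): values are vectors, the world gets tuned to the extrapolated vector, and the people holding the actual vector lose out in proportion to how far the two diverge.

```python
# Toy model: the world is tuned to the *extrapolated* value vector; people
# holding the *actual* values are served less well the further the two diverge.
import numpy as np

rng = np.random.default_rng(0)
actual = rng.normal(size=5)         # values people hold today (made up)
drift = rng.normal(size=5)          # direction extrapolation pushes them (made up)

for scale in (0.0, 0.5, 1.0, 2.0):  # how far extrapolation moves the values
    extrapolated = actual + scale * drift
    world = extrapolated / np.linalg.norm(extrapolated)  # world optimized for extrapolated values
    served = actual @ world                               # how well the actual values are served
    best = np.linalg.norm(actual)                         # what a world tuned to actual values would give
    print(f"extrapolation scale {scale:.1f}: shortfall = {best - served:.2f}")
```

The shortfall is zero when the extrapolated values coincide with the actual ones and grows as they come apart; that gap is all I mean by suffering here.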
They are reflectively consistent in the limit of infinite knowledge and intelligence. This is a very special and interesting property.
In your CEV future, the extrapolated values are maximized. Conflicting values, like the actual values held today by many or all people, are necessarily not maximized.
But people would change—gaining knowledge and intelligence—and thus would become happier and happier with time. And I think CEV would try to synchronize this with the timing of its optimization process.
They are reflectively consistent in the limit of infinite knowledge and intelligence. This is a very special and interesting property.
Paperclipping is also self-consistent in that limit. That doesn’t make me want to include it in the CEV.
But people would change—gaining knowledge and intelligence—and thus would become happier and happier with time.
Evidence please. There’s a long, long leap from the ordinary gaining of knowledge and intelligence over a human life to “the limit of infinite knowledge and intelligence”. Moreover, we’re considering people who currently explicitly value not updating their beliefs in the face of knowledge, and basing their values on faith, not evidence. For all I know they’d never approach your limit in the lifetime of the universe, even if it is the limit given infinite time. And meanwhile they’d be very unhappy.
And I think CEV would try to synchronize this with the timing of its optimization process.
So you’re saying it wouldn’t modify the world to fit their new evolved values until they actually evolved those values? Then for all we know it would never do anything at all, and the burden of proof is on you to show otherwise. Or it could modify the world to resemble their partially-evolved values, but then it wouldn’t be a CEV, just a maximizer of whatever values people happen to already have.
Paperclipping is also self-consistent in that limit. That doesn’t make me want to include it in the CEV
Then we can label paperclipping as a “true” value too. However, I still prefer true human values to be maximized, not true clippy values.
Evidence please. There’s a long, long leap from the ordinary gaining of knowledge and intelligence over a human life to “the limit of infinite knowledge and intelligence”. Moreover, we’re considering people who currently explicitly value not updating their beliefs in the face of knowledge, and basing their values on faith, not evidence. For all I know they’d never approach your limit in the lifetime of the universe, even if it is the limit given infinite time. And meanwhile they’d be very unhappy.
As I said before, if someone’s mind is that incompatible with truth, I’m ok with ignoring their preferences in the actual world. They can be made happy in a simulation, or wireheaded, or whatever the combined other people’s CEV thinks best.
So you’re saying it wouldn’t modify the world to fit their new evolved values until they actually evolved those values?
No, I’m saying the extrapolated values would probably estimate the optimal speed for their own optimization. You’re right, though, it is all speculation, and the burden of proof is on me. Or on whoever will actually define CEV.
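If it helps, here is the kind of pacing I have in mind, as a one-dimensional cartoon (the drift dynamics and the rates are entirely my own assumptions, not anything from a CEV specification): the optimizer never pushes the world further toward the extrapolated target than people’s actual values have already moved, so the world/actual-values mismatch you equate with suffering stays small.

```python
# Toy 1-D contrast: jump the world straight to the extrapolated target vs.
# pace it to people's gradual value drift. "gap" is the mismatch between the
# world and people's actual values at each step.
def run(paced, actual=0.0, target=1.0, learn_rate=0.1, steps=10):
    world = actual if paced else target   # paced: start where people are; unpaced: jump to the target
    gaps = []
    for _ in range(steps):
        actual += learn_rate * (target - actual)  # people slowly update (an assumption)
        if paced:
            world = actual                        # keep the world in step with them
        gaps.append(round(abs(world - actual), 2))
    return gaps

print("unpaced:", run(paced=False))
print("paced:  ", run(paced=True))
```

In the paced run the gap stays at zero; in the unpaced run it only shrinks as fast as people themselves change. Whether a real CEV would, or even could, do anything like this is exactly the speculation I’m conceding.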
As I said before, if someone’s mind is that incompatible with truth, I’m ok with ignoring their preferences in the actual world. They can be made happy in a simulation, or wireheaded, or whatever the combined other people’s CEV thinks best.
And as I and others have said, you haven’t given any evidence that such people are rare, or even that they make up less than half the population (with respect to some of the values they hold).
You’re right, though, it is all speculation, and the burden of proof is on me.
That’s a good point to end the conversation, then :-)