About CEV: Am I correct that Eliezer’s main goal is to find the one utility function for all humans? Or is it equally plausible that some important values cannot be extrapolated coherently, and that a seed AI would therefore produce several results, clustered around certain groups of people?
[edit]Reading helps. He has actually discussed this, in sufficient detail I think.[/edit]
I think the expectation is that, if all humans had the same knowledge and were better at thinking (and were more the people we’d like to be, etc.), then there would be a much higher degree of coherence than we might expect, but not necessarily that everyone would ultimately have the same utility function.
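As a purely illustrative aside (my own toy model, not anything from the CEV write-up): if each person’s extrapolated values could be summarized as a small weight vector over value dimensions, then the difference between “one coherent utility function” and “several results clustered around groups of people” becomes the difference between one tight blob of vectors and several separated clusters. A minimal numpy sketch, with the two hypothetical groups and all numbers made up:

```python
import numpy as np

# Hypothetical: summarize each person's extrapolated values as a
# 3-dimensional weight vector. Two made-up groups that agree on two
# dimensions but diverge on the middle one.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[1.0, 0.2, 0.5], scale=0.1, size=(50, 3))
group_b = rng.normal(loc=[1.0, 0.8, 0.5], scale=0.1, size=(50, 3))
values = np.vstack([group_a, group_b])

def mean_pairwise_cosine(v):
    """Average cosine similarity over all pairs: a crude 'coherence' score."""
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    sims = v @ v.T
    n = len(v)
    return (sims.sum() - n) / (n * (n - 1))

print(f"overall coherence: {mean_pairwise_cosine(values):.3f}")

# A two-means split: if the population really holds two value clusters,
# the cluster centers diverge and within-cluster coherence rises.
centers = values[[0, -1]]  # deterministic init: one point from each end
for _ in range(20):
    dists = np.linalg.norm(values[:, None, :] - centers[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    centers = np.array([values[labels == k].mean(axis=0) for k in range(2)])

for k in range(2):
    score = mean_pairwise_cosine(values[labels == k])
    print(f"cluster {k}: center={np.round(centers[k], 2)}, coherence={score:.3f}")
```

If extrapolation really did raise coherence, the overall score would approach the within-cluster scores; if not, the clusters stay distinct, which is exactly the “several results” scenario the question asks about.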
Or is it equally plausible that some important values cannot be extrapolated coherently, and that a seed AI would therefore produce several results, clustered around certain groups of people?
There is only one world to build something from. “Several results” is never a solution to the problem of what to actually do.
Please bear with my bad English; this did not come across as intended.
So: either all or nothing?
Is there no possibility that the AI could determine that, to maximize this hardcoded utility function, it needs to separate different groups of people, perhaps lying to them about their separation and providing each group with only the illusion of a unified humankind? Or is that too obvious a thought, or too dumb because of x?
I think the idea is that CEV lets us “grow up more together” and figure that out later.
I have only recently started looking into CEV, so I’m not sure whether I a) think it’s a workable theory and b) think it’s a good solution, but I like the way it puts off important questions.
It’s impossible to predict what we will want if age, disease, violence, and poverty become irrelevant (or at least optional).