My one-sentence summary of the problem CEV is intended to solve (I do not assert that it does so) is “how do we define the target condition for a superhuman environment-optimizing system in such a way that we can be confident that it won’t do the wrong thing?”
My question was meant to be “What problem does extrapolation solve?”, not “What problem is CEV intended to solve?” To answer the former question, you’d need some example of a problem that can be solved with extrapolation but can’t easily be solved without it. I can’t presently see a reason the example should be much more complicated than the Fred-wants-to-kill-Steve example we were talking about earlier.
That is expanded on at great length in the Metaethics and Fun Theory sequences, if you’re interested.
I might read that eventually, but not for the purpose of getting an answer to this question. I have no reason to believe the problem solved by extrapolation is so complex that one needs to read a long exposition to understand it. Understanding why extrapolation solves the problem might take some work, but understanding what the problem is should not. If there’s no short description of a problem that requires extrapolation to solve, it seems likely to me that extrapolation does not solve a problem.
For example, integral calculus is required to solve the problem “What is the area under this parabola?”, given enough parameters to uniquely determine the parabola. Are you seriously saying that extrapolation is necessary but its role is more obscure than that of integral calculus?
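For instance (a standard textbook instance, not tied to any particular parabola in this discussion), the area under y = x² between x = 0 and x = 1 is

$$\int_0^1 x^2 \, dx = \left[\frac{x^3}{3}\right]_0^1 = \frac{1}{3}$$

The problem fits in one sentence even though integral calculus is the tool that answers it.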
Are you seriously saying that extrapolation is necessary but its role is more obscure than that of integral calculus?
What I said was that the putative role of extrapolation is avoiding optimizing for the wrong thing.
That’s not noticeably more complicated a sentence than “the purpose of calculus is to calculate the area under a parabola”, so I mostly think your question is rhetorically misleading.
Anyway, as I explicitly said, I’m not asserting that extrapolation solves any problem at all. I was answering (EDIT: what I understood to be) your question about what problem it’s meant to solve, and providing some links to further reading if you’re interested, which it sounds like you aren’t, which is fine.
Ah, I see. I was hoping to find an example, about as concrete as the Fred-wants-to-kill-Steve example, that someone believes actually motivates extrapolation. A use-case, as it were.
You gave the general idea behind it. In retrospect, that was a reasonable interpretation of my question.
I’m not asserting that extrapolation solves any problem at all.
Okay, so you don’t have a use case. No problem, I don’t either. Does anybody else?
I realize you haven’t been online for a few months, but yes, I do.
Humanity’s desires are not currently consistent. An FAI couldn’t satisfy them all because some of them contradict each other, like Fred’s and Steve’s in your example. There may not even be a way of averaging them out fairly or meaningfully. Either Steve lives or he dies: there’s no average or middle ground, and Fred is just out of luck.
However, it might be the case that human beings are similar enough that if you extrapolate everything that all humans want, you get something consistent. Extrapolation is a tool to resolve inconsistencies and please both Fred and Steve.
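A toy sketch of the shape of that claim (the extrapolate() mapping below is a made-up placeholder, not a proposal for how real extrapolation would work):

```python
# Toy sketch only: contradictory surface desires can't all be satisfied,
# but an extrapolation step that maps each desire to the deeper want it
# serves might yield a consistent set.

surface_desires = {
    "Fred": "Steve is dead",
    "Steve": "Steve is alive",
}

# The two outcomes that directly contradict each other in this example.
CONTRADICTION = {"Steve is dead", "Steve is alive"}

def consistent(desires):
    """True unless the desires demand both of the contradictory outcomes."""
    return not CONTRADICTION.issubset(set(desires.values()))

def extrapolate(desire):
    """Placeholder: map a surface desire to the deeper want it might serve.
    Here we simply assume Fred's wish for Steve's death really serves a
    want (say, feeling safe from Steve) that doesn't require Steve to die."""
    if desire == "Steve is dead":
        return "Fred is safe from Steve"
    return desire

print(consistent(surface_desires))  # False: Fred's and Steve's desires contradict
extrapolated = {who: extrapolate(want) for who, want in surface_desires.items()}
print(consistent(extrapolated))     # True: the extrapolated wants no longer conflict
```

Whether real human desires actually extrapolate into something consistent like this is, of course, the open question.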