Are you seriously saying that extrapolation is necessary but its role is more obscure than that of integral calculus?
What I said was that the putative role of extrapolation is avoiding optimizing for the wrong thing.
That’s not noticeably more complicated a sentence than “the purpose of calculus is to calculate the area under a parabola”, so I mostly think your question is rhetorically misleading.
Anyway, as I explicitly said, I’m not asserting that extrapolation solves any problem at all. I was answering (EDIT: what I understood to be) your question about what problem it’s meant to solve, and providing some links to further reading if you’re interested, which it sounds like you aren’t, which is fine.
Ah, I see. I was hoping to find an example, about as concrete as the Fred-wants-to-kill-Steve example, that someone believes actually motivates extrapolation. A use case, as it were.
You gave the general idea behind it. In retrospect, that was a reasonable interpretation of my question.
I’m not asserting that extrapolation solves any problem at all.
Okay, so you don’t have a use case. No problem, I don’t either. Does anybody else?
I realize you haven’t been online for a few months, but yes, I do.
Humanity’s desires are not currently consistent. An FAI couldn’t satisfy them all, because some of them contradict each other, like Fred’s and Steve’s in your example. There may not even be a way of averaging them out fairly or meaningfully. Either Steve lives or he dies: there’s no average or middle ground, and Fred is just out of luck.
However, it might be the case that human beings are similar enough that if you extrapolate everything every human wants, you get something consistent. Extrapolation is a tool for resolving such inconsistencies and pleasing both Fred and Steve.
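To make that concrete, here is a toy sketch in Python, entirely made up for this comment: surface desires conflict, but a hypothetical extrapolation map might send them to a consistent set. The EXTRAPOLATE table below is invented for illustration; the actual proposal doesn't hand you any such lookup table, and whether one exists is exactly the open question.

```python
# Toy sketch (purely illustrative, all names hypothetical): represent each
# person's surface desires as signed propositions, detect conflicts, then
# apply a stand-in "extrapolation" map and re-check for consistency.

# Surface desires: (person, proposition, wants_it_true)
surface = [
    ("Fred",  "Steve is dead", True),
    ("Steve", "Steve is dead", False),
]

def conflicts(desires):
    """Return propositions that some people want true and others want false."""
    wanted = {}
    clashes = set()
    for _, prop, value in desires:
        if prop in wanted and wanted[prop] != value:
            clashes.add(prop)
        wanted[prop] = value
    return clashes

# Hypothetical extrapolation: map each raw desire to the deeper desire it
# might express. This mapping is invented for the example; nothing tells us
# what the real one looks like, or that it exists at all.
EXTRAPOLATE = {
    ("Fred", "Steve is dead", True): ("Fred", "Fred feels wronged by Steve", False),
}

def extrapolated(desires):
    return [EXTRAPOLATE.get(d, d) for d in desires]

print(conflicts(surface))                # {'Steve is dead'} -- no middle ground
print(conflicts(extrapolated(surface)))  # set() -- consistent, if the bet pays off
```

The bet is that the second line prints an empty set for humanity as a whole, not just for a two-person example rigged to make it come out that way.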