They’re wrong—though it would be nice to hear if Eliezer has a solution to the standard objections to idealized preference theories
Interesting… Having done a quick search on those keywords, it seems that some of my own objections to Eliezer’s theory simply mirror standard objections to idealized preference theories. But you say “they’re wrong” not to pay more attention to Eliezer—what do you think is Eliezer’s advance over the existing idealized preference theories?
Sorry, I just meant that Eliezer’s CEV work has lots of value in general, not that it solves outstanding issues in idealized preference theory. Indeed, Eliezer’s idealized preference theory is more ambitious than any other idealized preference theory I’ve ever seen, and probably more problematic because of it. (But, it might be the only thing that will actually make the future not totally suck.)
Anyway, I don’t know whether Eliezer’s CEV has overcome the standard problems with idealized preference theories. I was one of those people who tried to read the CEV paper a few times and got so confused (by things like the conflation I talked about above) that I didn’t keep at it until I fully understood—but at least I get the basic plan being proposed. Frankly, I’d love to work with Eliezer to write a new update to CEV and write it in the mainstream style and publish it in an AI journal—that way I will fully understand it, and so will others.