(2) the mainstream community has not caught up with SIAI’s advances because SIAI has not shared them with anyone—at least not in their language, in their journals, to their expectations of clarity and style.
Are you sure that’s the problem? The fact that SIAI wants to build Friendly AI has been mentioned in Time magazine and such places. Surely if mainstream machine ethicists read those stories, they would be as curious as anyone else what SIAI means by “Friendly AI”, which ought to start a chain of events eventually leading to them learning about SIAI’s ideas.
I mean, if you were a mainstream machine ethicist, and you read that or a similar article, wouldn’t you be curious enough to not let language/journals/etc. stop you from finding out what this “institute” is talking about?
I’m not so sure.
Academics always have far more to read than they have time to read. Reading only what has passed peer review is a useful filter. They might be curious enough to begin reading, say, the CEV article from 2004, but after going just a short distance and encountering the kinds of terminology issues I described above, they might not keep reading.
I’m imagining that they, for example, realize that Eliezer is proposing what is called an ‘idealized preference’ theory of value, but does not cite or respond to any of the many objections that have been raised against such theories, and so they doubt that reading further will be enlightening. They’re wrong—though it would be nice to hear if Eliezer has a solution to the standard objections to idealized preference theories—but I sympathize with academics who need a strong filter in order to survive, even if it means they’ll miss out on a few great things.
Interesting… Having done a quick search on those keywords, it seems that some of my own objections to Eliezer’s theory simply mirror the standard objections to idealized preference theories. But you say “they’re wrong” not to pay more attention to Eliezer—what do you think is Eliezer’s advance over the existing idealized preference theories?
Sorry, I just meant that Eliezer’s CEV work has lots of value in general, not that it solves outstanding issues in idealized preference theory. Indeed, Eliezer’s idealized preference theory is more ambitious than any other idealized preference theory I’ve ever seen, and probably more problematic because of it. (But, it might be the only thing that will actually make the future not totally suck.)
Anyway, I don’t know whether Eliezer’s CEV has overcome the standard problems with idealized preference theories. I was one of those people who tried to read the CEV paper a few times and got so confused (by things like the conflation I talked about above) that I didn’t keep at it until I fully understood—but at least I get the basic plan being proposed. Frankly, I’d love to work with Eliezer to write a new update to CEV in the mainstream style and publish it in an AI journal—that way I will fully understand it, and so will others.