Their mistake (I agree with your impression, though). I started working on FAI as soon as I understood the problem (that is, understood it as a problem for which understanding "fuzzy AGI" is not a useful subgoal), about a year ago, and my current blog sequence is intended to help others understand the problem.
On the other hand, what do you see as the alternative to this "flypaper", or as a way of improving it into something more productive? Building killer robots as a career is hardly a better road.
Gee, how can I answer this question in a way that doesn’t oblige me to do work?
One thing we can do, as a community, is motivate Eliezer to tell us more about his ideas on FAI and CEV, and to answer questions about them, by making it apparent that our continuing to take these ideas seriously depends on their continued development. I very much appreciate his writing up his recent sequence on timeless decision theory, so I don't want to harp on this at present. And of course Eliezer has no moral obligation to respond to you (unless you've given him time or money). But I'm not speaking of moral obligations; I'm speaking of strategy.
Another is to begin working on these ideas ourselves. This is hindered by our lacking a way to talk about, say, "Eliezer's CEV" as distinct from CEV in general, and by our continuing to try to figure out what Eliezer's opinion is (to get at the "true CEV theory") instead of trying to work out CEV theory independently. So a repeated pattern has been:
1. Person P (as in, for instance, "Phil") asks a question about FAI or CEV.
2. Eliezer doesn't answer.
3. Person P gives their interpretation of FAI or CEV on the point, either in a "this is what I think Eliezer meant" way or in a "these are the implications of Eliezer's ideas" way.
4. Eliezer responds by saying that person P doesn't know what they're talking about and should stop presuming to know what Eliezer thinks.
5. End of discussion.