It’s an important topic, but I feel that it may become an obstacle rather than a help towards the goal of avoiding AI catastrophe. It can be flypaper that catches people interested in the problem and then leaves them stuck there, waiting for further clarifications from Eliezer that never come instead of doing original work themselves, because they’ve been led to believe that FAI+CEV theory is more developed than it is.
I don’t think that was the intent, but it might be a welcome side-effect.
EY has little motivation to provide clarification as long as people here continue to proclaim their faith in FAI+CEV. He’s said repeatedly that he doesn’t believe collaboration has value; he plans to solve the problem himself. Even supposing that he had a complete write-up on FAI+CEV in hand today, actually publishing it could be a losing proposition in his eyes. It would encourage other people to do AI work and call it FAI (dangerous, I think he would say); it would make FAI no longer the exclusive property of SIAI (a financial hazard); and it would reveal countless grounds for disagreement with his ideas and with his values.
Because I do believe in the value of collaboration, I would like to see more clarification. And I don’t think it’s forthcoming as long as people already give FAI+CEV the respect they would give a fully-formed theory.
Also, FAI+CEV is causing premature convergence within the transhumanist community. I know the standard FAI+CEV answers to a number of questions, and it dismays me to hear them spoken with more and more self-assurance by more and more smart people, when I know that these answers have weak spots that have been unexamined for far too long. It’s too soon for people to be agreeing this much on something that has been discussed so little.
Their mistake (though I agree with your impression). I started working on FAI as soon as I understood the problem (that is, understood that an understanding of “fuzzy AGI” is not a useful subgoal), about a year ago, and the current blog sequence is intended to help others understand the problem.
On the other hand, what do you see as the alternative to this “flypaper”, or as a way to improve it towards more productive modes of work? Building killer robots as a career is hardly a better road.
Gee, how can I answer this question in a way that doesn’t oblige me to do work?
One thing is, as a community, to motivate Eliezer to tell us more about his ideas on FAI and CEV, and to answer questions about them, by making it apparent that continuing to take these ideas seriously depends on their continuing development. I very much appreciate his writing out his recent sequence on timeless decision theory, so I don’t want to harp on this at present. And of course Eliezer has no moral obligation to respond to you (unless you’ve given him time or money). But I’m not speaking of moral obligations; I’m speaking of strategy.
Another is to begin working on these ideas ourselves. This is hindered by our lacking a way to talk about, say, “Eliezer’s CEV” vs. CEV in general, and by our continuing to try to figure out what Eliezer’s opinion is (to get at the “true CEV theory”) instead of trying to figure out CEV theory independently. So a repeated pattern has been:
1. person P (as in, for instance, “Phil”) asks a question about FAI or CEV
2. Eliezer doesn’t answer
3. person P gives their interpretation of FAI or CEV on the point, possibly in a “this is what I think Eliezer meant” way, or else in a “these are the implications of Eliezer’s ideas” way
4. Eliezer responds by saying that person P doesn’t know what they’re talking about, and should stop presuming to know what Eliezer thinks
5. end of discussion