[irrationality game comment, please read post before voting] Eliezer Yudkowsky will not create an AGI, friendly or otherwise: 99%.
My reasoning here is that knowledge representation is an impossible problem. It's only impossible in the Yudkowskyan sense of the word, but that appears to be enough. Yudkowsky is now doing that thing people do when they can't figure something out: doing something else. There is no conceivable way that the rationality book has anything like the utility of a FAI. And then he's going to take a year to study "math". What math? Well, if he knew what he needed to learn to build a FAI, he would just learn it. Instead, he's so confused that he thinks he has to learn math in general before he can become unconfused. Yes, he's noticed his confusion, which puts him ahead of 99% of AI researchers. But he's not fixing it. He's writing a book. This implies that he believes, ultimately, that he can't succeed. And he's much smarter than I am; if he's given up, why should I keep up hope?
I should note that I hope to be wrong on this one.
What would be your probability assessment if you replaced “Eliezer Yudkowsky” with “SIAI”?
About the same, but mostly because I don’t follow it well enough to know whether they have any other smart enough people working there. Although I think thomblake may be right that I have set the probability too low.
Upvoted for disagreement. I definitely disagree on whether writing the book is a rational step toward his goals. I also disagree on whether EY will build an AGI. I doubt that he will build the first one (unless he already has) at something like your 99% level.
Upvoted for underconfidence.
99% is underconfident? Downvoted for agreement.
Agree the chance is >50%, but upvoted for overconfidence.