This is a strawman. Ben Garfinkel never says that Yudkowsky has a bad track record. In fact the only time the phrase “bad track record” comes up in Garfinkel’s post is when you mention it in one of your comments.
The most Ben Garfinkel says about Yudkowsky’s track record is that it’s “at least pretty mixed”, which I think the content of the post supports, especially the clear-cut examples. He even emphasizes that he is deliberately cherry-picking bad examples from Eliezer’s track record in order to make a point, e.g. about Eliezer never having addressed his own bad predictions from the past.
It’s not enough to say “my world model was bad in such and such ways and I’ve changed it” to address your mistakes; you have to say “I made this specific prediction and it later turned out to be wrong”. Can you cite any instance of Eliezer ever doing that?
> This is a strawman. Ben Garfinkel never says that Yudkowsky has a bad track record.
In the post, he says “his track record is at best fairly mixed” and “Yudkowsky may have a track record of overestimating or overstating the quality of his insights into AI”; and in the comments, he says “Yudkowsky’s track record suggests a substantial bias toward dramatic and overconfident predictions”.
What makes a track record “bad” is relative, but if Ben objects to my summarizing his view with the imprecise word “bad”, then I’ll avoid doing that. It doesn’t strike me as an important point for anything I said above.
> The most Ben Garfinkel says about Yudkowsky’s track record is that it’s “at least pretty mixed”, which I think the content of the post supports, especially the clear-cut examples.
As long as we agree that “track record” includes the kind of stuff Jotto was saying it doesn’t include, I’m happy to say that Eliezer’s track record includes failures as well as successes. Indeed, I think that would make way more sense.
> about Eliezer never having addressed his own bad predictions from the past.
Maybe worth mentioning in passing that this is of course false?
> It’s not enough to say “my world model was bad in such and such ways and I’ve changed it” to address your mistakes; you have to say “I made this specific prediction and it later turned out to be wrong”. Can you cite any instance of Eliezer ever doing that?
Sure! “I wouldn’t have predicted AlphaGo and lost money betting against the speed of its capability gains”.
Extremely important failures and extremely important successes, no less.
> In the post, he says “his track record is at best fairly mixed” and “Yudkowsky may have a track record of overestimating or overstating the quality of his insights into AI”; and in the comments, he says “Yudkowsky’s track record suggests a substantial bias toward dramatic and overconfident predictions”.
Yes, I think all of that checks out. It’s hard to say, of course, because Eliezer rarely makes explicit predictions, but insofar as he does make them, I think he clearly puts a lot of weight on his inside view.
That doesn’t make his track record “bad”, but it’s something to keep in mind when he makes predictions.
> Sure! “I wouldn’t have predicted AlphaGo and lost money betting against the speed of its capability gains”.
This counts as a mistake, but I don’t think it’s as important as the bad prediction about AI timelines that Ben brings up in his post. If Eliezer explained why he had been wrong, it would make his present position more convincing, especially given his condescending attitude towards e.g. Metaculus forecasts.
I still think there’s something about the way Eliezer admits he was wrong that rubs me the wrong way, but it’s hard to explain what that is right now. It’s not correct to say he doesn’t admit his mistakes per se, but there’s some other problem with how much he seems to “internalize” the fact that he was wrong.
I’ve retracted my original comment in light of your example, since it was not correct (despite having the right “vibe”, whatever that means).