Eliezer wrote this in 1999:

My current estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.
Hasn’t Eliezer said, on every occasion since the beginning of LW when the opportunity has arisen, that Eliezer-in-1999 was disastrously wrong and confused about lots of important things?
(I don’t know whether present-day-Eliezer thinks 18-years-ago-Eliezer was wrong about this particular thing, but I would be cautious about taking things he said that long ago as strongly indicative of his present opinions.)
Yes, I am aware that this is what Eliezer has said, and I wasn’t implying that those early statements reflect Eliezer’s current thinking. There is a clear difference between “Eliezer believed this in the past, so he must believe it at present” and “Eliezer made some wrong predictions in the past, so we must treat his current predictions with caution”. Eliezer is entitled to ask his readers not to assume that his past beliefs reflect those of his present self, but he is not entitled to ask them not to hold him responsible for having once said stuff that some may think was ill-judged.
I hadn’t realised anyone was arguing for not treating Eliezer’s current predictions with caution. I can’t imagine why anyone wouldn’t treat anyone’s predictions with caution in this field.
My point is that these early pronouncements are (limited) evidence that we should treat Eliezer’s predictions with more caution than we would otherwise.
OK, I guess. I have to say that the main impression I’m getting from this exchange is that you wanted to say “boo Eliezer”; it seems that if you had wanted to make an actually useful, constructive point, you would have been somewhat more explicit in your original comment. (“Eliezer wrote this in 1999: [...]. I know that Eliezer has since repudiated a lot of his opinions and thought processes of that period, but if his opinions were that badly wrong in 1999 then we shouldn’t take them too seriously now either.” or whatever.)
I will vigorously defend anyone’s right to say “boo Eliezer” or “yay Eliezer”, but don’t have much optimism about getting a useful outcome from a conversation that begins that way, and will accordingly drop it now.
Thanks for the feedback. I agree that a comment worded in the manner you suggest would have communicated my point more effectively.
Yudkowsky has changed his views a lot over the last 18 years, though. A lot of his earlier writing is extremely optimistic about AI and its timeline.
Well, a nanowar is just a conflict on a very, very small scale—like many orders of magnitude less serious than your average barfight. Perhaps we had one before 2015 and nobody noticed! Now we just have to wait until 2020 for the seed AI to transcend.