I don’t think that’s a fair criticism on that point. As far as I understand, MIRI did conduct the biggest survey of AI experts asking when those experts predict AGI to arrive:
A recent set of surveys of AI researchers produced the following median dates:
for human-level AI with 10% probability: 2022
for human-level AI with 50% probability: 2040
for human-level AI with 90% probability: 2075
When EY says that this news shows that we should put a significant amount of our probability mass before 2050, that doesn’t contradict expert opinions.
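To make the arithmetic explicit: if you read those three survey medians as points on a cumulative distribution and interpolate linearly between them (my own simplifying assumption, not anything the survey itself reports), you get roughly 60% probability of human-level AI by 2050. A minimal sketch:

```python
import numpy as np

# Survey medians read as points on a cumulative distribution:
# 10% by 2022, 50% by 2040, 90% by 2075.
years = [2022, 2040, 2075]
cum_prob = [0.10, 0.50, 0.90]

# Linear interpolation between the reported points (a simplifying
# assumption; the survey does not specify the shape of the curve).
p_by_2050 = np.interp(2050, years, cum_prob)
print(f"Implied P(human-level AI by 2050) ~ {p_by_2050:.0%}")  # ~61%
```

So “a significant amount of probability mass before 2050” is in the same ballpark as the survey numbers, at least under that crude interpolation.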
Sure, but it’s not just about what experts say on a survey about human-level AI. It’s also about how much information a good Go program actually provides for this question, and whether MIRI’s program makes any sense (and whether it should take people’s money). People here didn’t say “oh, experts said X, I am updating”; they said “EY said X on facebook, time for me to change my opinion.”
I don’t know your mind, you tell me? What exactly is it that you find worrying?
My possibly-incorrect guess is that you’re worried about something like “the community turning into an echo chamber that only promotes Eliezer’s views and makes its members totally ignore expert opinion when forming their views”. But if that was your worry, the presence of highly upvoted criticisms of Eliezer’s views should do a lot to help, since it shows that the community does still take into account (and even actively reward!) well-reasoned opinions that show dissent from the tribal leaders.
So since you still seem to be worried despite the presence of those comments, I’m assuming that your worry is something slightly different, but I’m not entirely sure of what.
One problem is that the community has few people actually engaged enough with cutting edge AI / machine learning / whatever-the-respectable-people-call-it-this-decade research to have opinions that are grounded in where the actual research is right now. So a lot of the discussion is going to consist of people either staying quiet or giving uninformed opinions to keep the conversation going. And what incentive structures there are here mostly work for a social club, so there aren’t many checks and balances to keep things from drifting away from actual reality and toward the local social reality.
Ilya actually is working with cutting edge machine learning, so I pay attention to his expressions of frustration and appreciate that he persists in hanging out here.
“EY said X on facebook, time for me to change my opinion.”
Who do you think said that in this case?
Just to be clear about your position, what do you think are reasonable values for human-level AI with 10% probability, human-level AI with 50% probability, and human-level AI with 90% probability?
I think the question in this thread is about how much the deep learning Go program should move my beliefs about this, whatever they may be. My answer is “very little in a sooner direction” (just because it is a successful example of getting a complex thing working). The question wasn’t “what are your beliefs about how far human-level AI is” (mine are centered fairly far out).
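To put a toy number on “very little”: how much a piece of evidence should move a belief depends on its likelihood ratio, and if a strong Go program is only slightly more likely in worlds where human-level AI is near than in worlds where it is far, the posterior barely shifts. The numbers below are invented purely for illustration:

```python
# Toy Bayes update: the size of the shift is driven by the likelihood
# ratio of the evidence, not by how impressive the evidence feels.
# All numbers are made up for illustration.
prior_near = 0.10   # prior P(human-level AI "soon"), illustrative
lr = 1.2            # P(result | near) / P(result | far), assumed close to 1

prior_odds = prior_near / (1 - prior_near)
posterior_odds = prior_odds * lr
posterior_near = posterior_odds / (1 + posterior_odds)
print(f"P(near): {prior_near:.2f} -> {posterior_near:.2f}")  # 0.10 -> 0.12
```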
I think this debate is quite hard with vague terms like “very little” and “far out”. I really do think it would be helpful for other people trying to understand your position if you put down your numbers for those predictions.
When EY says that this news shows that we should put a significant amount of our probability mass before 2050, that doesn’t contradict expert opinions.
The point is how much we should update our beliefs about AI timelines (and the associated beliefs about whether, and how much, it is appropriate to donate to MIRI) based on the current news of DeepMind’s AlphaGo success.
There is a difference between “Gib moni plz because the experts say that there is a 10% probability of human-level AI by 2022” and “Gib moni plz because of AlphaGo”.
I understand IlyaShpitser to claim that there are people who update their beliefs about AI timelines inappropriately because of EY’s statements. I don’t think that’s true.
I don’t think that’s a fair criticism on that point. As far as I understand, MIRI did conduct the biggest survey of AI experts asking when those experts predict AGI to arrive:
When EY says that this news shows that we should put a significant amount of our probability mass before 2050, that doesn’t contradict expert opinions.
Sure, but it’s not just about what experts say on a survey about human-level AI. It’s also about how much information a good Go program actually provides for this question, and whether MIRI’s program makes any sense (and whether it should take people’s money). People here didn’t say “oh, experts said X, I am updating”; they said “EY said X on facebook, time for me to change my opinion.”
My reaction was more “oh, EY made a good argument about why this is a big deal, so I’ll take that argument into account”.
Presumably a lot of others felt the same way; attributing the change in opinion to mere deference to tribal authority seems uncharitable.
Say I am worried about this tribal thing happening a lot—what would put my mind more at ease?
I don’t know your mind, you tell me? What exactly is it that you find worrying?
My possibly-incorrect guess is that you’re worried about something like “the community turning into an echo chamber that only promotes Eliezer’s views and makes its members totally ignore expert opinion when forming their views”. But if that was your worry, the presence of highly upvoted criticisms of Eliezer’s views should do a lot to help, since it shows that the community does still take into account (and even actively reward!) well-reasoned opinions that show dissent from the tribal leaders.
So since you still seem to be worried despite the presence of those comments, I’m assuming that your worry is something slightly different, but I’m not entirely sure of what.
One problem is that the community has few people actually engaged enough with cutting edge AI / machine learning / whatever-the-respectable-people-call-it-this-decade research to have opinions that are grounded in where the actual research is right now. So a lot of the discussion is going to consist of people either staying quiet or giving uninformed opinions to keep the conversation going. And what incentive structures there are here mostly work for a social club, so there aren’t many checks and balances to keep things from drifting away from actual reality and toward the local social reality.
Ilya actually is working with cutting edge machine learning, so I pay attention to his expressions of frustration and appreciate that he persists in hanging out here.
Agreed both with this being a real risk, and it being good that Ilya hangs out here.
Who do you think said that in this case?
Just to be clear about your position, what do you think are reasonable values for human-level AI with 10% probability, human-level AI with 50% probability, and human-level AI with 90% probability?
I think the question in this thread is about how much the deep learning Go program should move my beliefs about this, whatever they may be. My answer is “very little in a sooner direction” (just because it is a successful example of getting a complex thing working). The question wasn’t “what are your beliefs about how far human-level AI is” (mine are centered fairly far out).
I think this debate is quite hard with vague terms like “very little” and “far out”. I really do think it would be helpful for other people trying to understand your position if you put down your numbers for those predictions.
The point is how much we should update our beliefs about AI timelines (and the associated beliefs about whether, and how much, it is appropriate to donate to MIRI) based on the current news of DeepMind’s AlphaGo success.
There is a difference between “Gib moni plz because the experts say that there is a 10% probability of human-level AI by 2022” and “Gib moni plz because of AlphaGo”.
I understand IlyaShpitser to claim that there are people who update their beliefs about AI timelines inappropriately because of EY’s statements. I don’t think that’s true.