Sure, but it’s not just about what experts say on a survey about human level AI. It’s also about how much information a good Go program actually provides for this question, and whether MIRI’s program makes any sense (and whether it should take people’s money). People here didn’t say “oh, experts said X, I am updating”; they said “EY said X on Facebook, time for me to change my opinion.”
My reaction was more “oh, EY made a good argument about why this is a big deal, so I’ll take that argument into account”.
Presumably a lot of others felt the same way; attributing the change in opinion to just a deference for tribal authority seems uncharitable.
Say I am worried about this tribal thing happening a lot—what would put my mind more at ease?
I don’t know your mind, you tell me? What exactly is it that you find worrying?
My possibly-incorrect guess is that you’re worried about something like “the community turning into an echo chamber that only promotes Eliezer’s views and makes its members totally ignore expert opinion when forming their views”. But if that was your worry, the presence of highly upvoted criticisms of Eliezer’s views should do a lot to help, since it shows that the community does still take into account (and even actively reward!) well-reasoned opinions that show dissent from the tribal leaders.
So since you still seem to be worried despite the presence of those comments, I’m assuming that your worry is something slightly different, but I’m not entirely sure of what.
One problem is that the community has few people actually engaged enough with cutting-edge AI / machine learning / whatever-the-respectable-people-call-it-this-decade research to have opinions that are grounded in where the actual research is right now. So a lot of the discussion is going to consist of people either staying quiet or giving uninformed opinions to keep the conversation going. And the incentive structures here mostly work for a social club, so there aren’t many checks and balances keeping things from drifting away from actual reality and toward the local social reality.
Ilya actually is working with cutting edge machine learning, so I pay attention to his expressions of frustration and appreciate that he persists in hanging out here.
Agreed both with this being a real risk, and it being good that Ilya hangs out here.
Who do you think said that in this case?
Just to be clear about your position, what do you think are reasonable values for human-level AI with 10% probability, human-level AI with 50% probability, and human-level AI with 90% probability?

I think the question in this thread is about how much the deep learning Go program should move my beliefs about this, whatever they may be. My answer is “very little in a sooner direction” (just because it is a successful example of getting a complex thing working). The question wasn’t “what are your beliefs about how far away human level AI is” (mine are centered fairly far out).
I think this debate is quite hard with vague terms like “very little” and “far out”. I really do think it would be helpful for other people trying to understand your position if you put down your numbers for those predictions.