“EY said X on facebook, time for me to change my opinion.”
Who do you think said that in this case?
Just to be clear about your position, what do you think are reasonable values for human-level AI with 10% probability, human-level AI with 50% probability, and human-level AI with 90% probability?
I think the question in this thread is about how much the deep learning Go program should move my beliefs about this, whatever they may be. My answer is “very little in a sooner direction” (just because it is a successful example of getting a complex thing working). The question wasn’t “what are your beliefs about how far off human-level AI is” (mine are centered fairly far out).
I think this debate is quite hard with vague terms like “very little” and “far out”. I really do think it would be helpful for other people trying to understand your position if you put down your numbers for those predictions.