When EY says that this news means we should put a significant amount of our probability mass before 2050, that doesn't contradict expert opinion.
The point is how much we should update our beliefs about AI timelines (and the associated beliefs about whether, and how much, it is appropriate to donate to MIRI) based on the news of DeepMind's AlphaGo success.
There is a difference between "Gib moni plz because the experts say there is a 10% probability of human-level AI by 2022" and "Gib moni plz because of AlphaGo".
I understand IlyaShpitser to be claiming that there are people who update their AI timeline beliefs inappropriately because of EY's statements. I don't think that's true.