As a non-expert, I’m confused about what exactly was so surprising in these works that it causes a strong update. “The intersection of many independent, semi-likely events is unlikely” could be one answer, but I’m wondering whether there is more to it. In particular, I’m confused why the data is evidence for a fast takeoff as opposed to a slow one.
First, I mistitled the post, and as a result your response is very reasonable. This is less clearly evidence for “fast takeoff” and more clearly evidence for “fast timelines”.
In terms of why: the different behaviors captured in the papers constitute a large part of what you’d need to implement something like AlphaGo in a real-world environment. Will stitching them together work immediately? Almost certainly not. Will it work given not-that-much creative iteration, say over 5 years of parallel research? It seems not unlikely; I’d give it >30%.
You can edit the post title.
Done! Do you think it should be edited further?
No, this seems to capture it. No need to make it complicated.