For me, this is evidence for AGI, as it suggests that we are just one step, maybe even one idea, behind it: we need to solve “genuine causal reasoning”. Something like “train a neural net to recognise patterns in an AI’s plans that correspond to certain strategic principles”.
My personal estimate is 10 per cent in 10 years. If that probability is distributed linearly, it is around 0.2 per cent by the end of 2019, most likely from an unknown secret project.
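The linear interpolation behind that figure can be sketched as follows. This is only an illustration of the arithmetic: the source gives 10 per cent over 10 years and 0.2 per cent by the end of 2019, which under a linear spread corresponds to roughly 0.2 years (about 73 days) remaining in 2019 at the time of writing; that remaining-time figure is my inference, not stated in the source.

```python
# Linear spread of a 10% probability over a 10-year horizon.
TOTAL_PROB_PCT = 10.0      # total probability, per cent, over the horizon
HORIZON_YEARS = 10.0       # horizon length in years
RATE_PER_YEAR = TOTAL_PROB_PCT / HORIZON_YEARS  # 1 per cent per year

def prob_by(years_ahead: float) -> float:
    """Probability (per cent) accrued linearly over `years_ahead` years."""
    return RATE_PER_YEAR * years_ahead

# The source's 0.2 per cent matches about 0.2 years (~73 days) remaining:
print(prob_by(0.2))
```

The point of the sketch is just that a uniform (linear) hazard turns the 10-year figure into a per-interval figure by simple proportion; a non-uniform distribution over the decade would of course give a different near-term number.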
In other words, there is a non-trivial chance we could get to AGI literally this year?