In the original report, there are a number of arguments that try to estimate or bound k using various considerations. However, these are almost all outside-view considerations based either on human evolution or on general algorithms from computer science, because at the time (2013) modern ML was in its infancy. Now, nearly a decade later, after stunning successes in ML, I suspect there is much more evidence we can use to build a more bounded, inside-view model of near-future AGI, assuming it can be built with current ML techniques. Updating the RSI model and our estimates of k seems extremely important given the centrality of RSI to AI risk models as well as to alignment strategy.
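For readers without the report to hand, a minimal sketch of the usual criticality-style reading of k (my gloss, not necessarily the report's exact definition): if each round of self-improvement of size $\Delta_n$ yields a further improvement $\Delta_{n+1} = k\,\Delta_n$, the total gain is a geometric series, so

```latex
\Delta_{n+1} = k\,\Delta_n
\quad\Longrightarrow\quad
\sum_{n=0}^{\infty} \Delta_n =
\begin{cases}
\dfrac{\Delta_0}{1-k}, & k < 1 \quad \text{(improvement fizzles out at a finite level)}\\[1.5ex]
\infty, & k \ge 1 \quad \text{(runaway self-improvement)}
\end{cases}
```

On this reading, the whole question of whether RSI produces a "foom" reduces to whether k exceeds 1, which is why estimates of k carry so much weight in the argument.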
I don’t think it’s particularly important, let alone “extremely” so.
I’m sceptical that RSI is particularly relevant to the deep learning paradigm.