About 15 years ago, before I'd started professionally studying and doing machine learning research and development, my timeline had most of its probability mass around 60–90 years out. That estimate was based on my neuroscience studies and my thinking about how long it would take to build a sufficiently accurate, functional emulation of the human brain. About 8 years ago, while studying machine learning full time, AlphaGo's release prompted me to carefully rethink my position; I realized there were a fair number of sensible shortcuts off my longer figure and updated to more like 40–60 years. About 3 years ago, GPT-2 gave me another reason to rethink with my then-fuller understanding, and I updated to 15–30 years. In the past couple of years, with the repeated successes of scaling-law explorations, the apparent willingness of the global community to rapidly scale investment in compute, and yet further knowledge of the field, I updated so that 2–15 years holds 80% of my probability mass. I'd put most of that in the 6–12 year range, but I wouldn't be shocked if things turned out easier than expected and something really took off next year.
One of the things that makes me think the BioAnchors estimate is a bit too far into the future is that I know from neuroscience that a human can retain a sufficient set of brain functions to count as a general intelligence, by a fairly reasonable standard, even with significant chunks of their brain dead or missing. They won't be in great shape, since they'll be missing some capabilities, but if what they're missing is non-critical they can still function well enough to be a minimal GI. Plenty well enough to be scary if that same minimal level of capability showed up in a self-improving, self-replicating, agentic AI.
So anyway, yeah, I've been scared for a while now. The latest news has reinforced my belief that we're in a short-timeline world rather than surprising me. Glad to see more people getting on board with my point of view.