I agree that if there are many paths to AGI, then the time-to-AGI is the duration of the shortest one, and therefore, when I talk about one specific scenario, the timeline it implies is only an upper bound on time-to-AGI.
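(To restate that in symbols, with $T_i$ as the hypothetical time for path $i$ to reach AGI — my notation, nothing standard:

$$T_{\text{AGI}} = \min_i T_i \;\le\; T_j \quad \text{for any particular path } j,$$

so analyzing any one path $j$ only bounds the answer from above.)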
(Unless we can marshal strong evidence that one path to AGI would give a better / safer / whatever future than another path, and then do differential tech development including trying to shift energy and funding away from the paths we don’t like. We don’t yet have that kind of strong evidence, unfortunately, in my opinion. Until that changes, yeah, I think we’re just gonna get whatever kind of AGI is easiest for humans to build.)
I guess I’m relatively skeptical about today’s most popular strands of deep ML research leading to AGI, at least compared to the median person on this particular web-forum. See here for that argument. I think I’m less skeptical than the median neuroscientist, though. I think it’s just really hard to say that kind of thing with any confidence. And also, even if it turns out that deep neural networks can’t do some important-for-intelligence thing X, well, somebody’s just gonna glue together a deep neural network with some other algorithm that does X. And then we can have some utterly pointless semantic debate about whether it’s still fundamentally a deep neural network or not. :-)