As a neuroscientist-turned-machine-learning-engineer, I have been thinking about this situation in a way very similar to that described in this article. One (possible) difference is that I think there are a fair number of possible algorithms/architectures that could successfully generate an agentive general learner sufficient for AGI. I think a human-brain-like algorithm might be the first developed, because of its fairly good efficiency and because we have a working model to study (albeit with difficulty). On the other hand, I think it's probable that deep learning, scaled up enough, will stumble across a surprisingly effective algorithm all of a sudden with little warning (cf. the lottery ticket hypothesis), risking an accidental hard-takeoff scenario.

I kinda hope the human-brain-like algorithm actually does turn out to be the first breakthrough, since I feel like we'd have a better chance of understanding and controlling it, and of noticing/measuring when we'd gotten quite close.
With the blind groping into unknown solution spaces that deep learning represents, we might find more than we'd bargained for, with no warning at all: just a sudden jump from an awkward, semi-competent statistical machine to a powerful, deceitful, alien-minded agent.
I agree that if there are many paths to AGI, then the time-to-AGI is the duration of the shortest one, and therefore when I talk about one specific scenario, it’s only an upper bound on time-to-AGI.
(Unless we can marshal strong evidence that one path to AGI would give a better / safer / whatever future than another path, and then do differential tech development including trying to shift energy and funding away from the paths we don’t like. We don’t yet have that kind of strong evidence, unfortunately, in my opinion. Until that changes, yeah, I think we’re just gonna get whatever kind of AGI is easiest for humans to build.)
I guess I’m relatively skeptical about today’s most popular strands of deep ML research leading to AGI, at least compared to the median person on this particular web-forum. See here for that argument. I think I’m less skeptical than the median neuroscientist though. I think it’s just really hard to say that kind of thing with any confidence. And also, even if it turns out that deep neural networks can’t do some important-for-intelligence thing X, well somebody’s just gonna glue together a deep neural network with some other algorithm that does X. And then we can have some utterly pointless semantic debate about whether it’s still fundamentally a deep neural network or not. :-)