I am not an expert by any means, but here are my thoughts: While I find GPT-3 quite impressive, it’s not even close to AGI. All the models you mentioned are still focused on performing specific tasks. That alone will (probably) not be enough to create AGI, even if you increase the size of the models further. I believe AGI is at least decades away, perhaps even a hundred years away. Now, there is a possibility of stuff being developed in secret, which is impossible to account for, but I’d say the probability of those developments being significantly more advanced than the publicly available technologies is pretty low.
A sober opinion (even if quite different from mine). My biggest fear is scaling up a transformer and then completing it with other “parts”, turning it into an agent (even a dumb one), etc. Thanks
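To make concrete what I mean by “completing it with other parts”: below is a deliberately crude, hypothetical sketch of a language model wrapped in an agent loop. `generate` and `run_tool` are placeholder names I made up, not any real API; the point is only how little scaffolding the “agent” part might need.

```python
# Hypothetical sketch of "transformer + other parts = agent".
# `generate` stands in for a language-model completion call; `run_tool` stands
# in for acting on the world (search, code execution, ...). Both are stubs.

def generate(prompt: str) -> str:
    """Stub for a language-model completion; a real system would call a model here."""
    return "DONE (stub model, no real reasoning happened)"

def run_tool(action: str) -> str:
    """Stub for executing an action and returning an observation."""
    return f"(observation for: {action})"

def agent_loop(goal: str, max_steps: int = 10) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model what to do next, given everything seen so far.
        action = generate(history + "Next action:")
        if action.strip().startswith("DONE"):
            break
        # Feed the result of the action back into the context and repeat.
        history += f"Action: {action}\nObservation: {run_tool(action)}\n"
    return history

print(agent_loop("write a sorting function"))
```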
Has GPT-3 / large transformers actually led to anything with economic value? Not from what I can tell, although anecdotal reports on Twitter are that many SWEs are finding GitHub Copilot extremely useful (it’s still in private beta, though). I think transformers are going to start providing actual value soon, but the fact that they haven’t so far, despite almost two years of breathless hype, is interesting to contemplate. I’ve learned to ignore hype, demos, cool cherry-picked sample outputs, and benchmark chasing, and instead look at what is actually being deployed “in the real world” and bringing value to people. So many systems that looked amazing in academic papers have flopped when deployed, even ones from top firms: for instance Microsoft’s Tay and Google Health’s system for detecting diabetic retinopathy. Another example is Google’s Duplex. And for how long have we heard about burger-flipping robots taking people’s jobs?
There are reasons to be skeptical about a scaled-up GPT leading to AGI. I touched on some of those points here. There’s also an argument that the hardware costs are going to balloon so quickly as to make the entire project economically unfeasible, but I’m pretty skeptical about that.
I’m more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon.
Long story short, is existentially dangerous AI imminent? Not as far as we can see, knowing what we know right now (we can’t see that far into the future, since it depends on discoveries and scientific knowledge we don’t yet have). Could that change quickly, at any time? Yes. There is Knightian uncertainty here, I think (to use a concept that LessWrongers generally hate lol).
Economic value might not be a perfect measure. Nuclear fission didn’t generate any economic value either, until 200,000 people in Japan were incinerated. My fear is that a mixture-of-experts approach could lead to extremely fast progress towards AGI. Perhaps it takes even less than that: maybe all it takes is an agent AI that can code as well as humans to start a cascade of recursive self-improvement.
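Since “mixture of experts” came up: for what it’s worth, the toy sketch below shows only the basic routing idea (a gate sends each input to a few specialized sub-networks instead of one dense model). The names, shapes, and linear-map “experts” are my own illustration, not any particular paper’s architecture.

```python
import numpy as np

# Toy illustration of mixture-of-experts routing: a gating network scores the
# experts, the top-k experts process the input, and their outputs are combined.

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2

experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # each "expert" is just a linear map here
gate = rng.normal(size=(d, n_experts))                          # gating weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate                          # one relevance score per expert
    chosen = np.argsort(scores)[-top_k:]       # route only to the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                   # softmax over the chosen experts
    # Weighted combination of the chosen experts' outputs.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = moe_forward(rng.normal(size=d))
print(y.shape)  # (8,)
```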
But indeed, Knightian uncertainty here would already put me somewhat at ease. As long as you can be sure that it won’t happen “just anytime” before some more barriers are crossed, at least you can still sleep at night and have the sanity to try to do something.
I don’t know, I’m not a technical person, that’s why I’m asking questions and hoping to learn more.
“I’m more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon.”
Personally, that’s what worries me the least. We can’t even crack C. elegans! I don’t doubt that in 100-200 years we’d get there, but I see many other, much faster routes.