If an AI can do most things a human can do (which is apparently achievable using neurons, because that’s what we’re made of)
Implies that humans are deep learning algorithms. This assertion is surprising, so I asked for confirmation that that’s what’s being said, and if so, on what basis.
3: I’m not asking what makes intelligent AI dangerous. I’m asking why people expect deep learning specifically to become (far more) intelligent (than humans are). Specifically within that question: adding parameters to your model vastly increases memory use. If I understand the situation correctly, if GPT just keeps increasing its parameter count, GPT-5 or GPT-6 or so will require more memory than exists on the planet, and even assuming someone built it anyway, I still expect it to be unable to wash dishes. Even assuming you have the memory, running the training would take longer than human history on modern hardware. Even assuming deep learning “works” in the mathematical sense, that doesn’t make it a viable path to high levels of intelligence in the near future.
Given doom in thirty years, or given that researching deep learning is dangerous, one of the following should hold: this problem never existed to begin with and I’m misunderstanding something / it is easily bypassed by some cute trick / we’re going to get much better hardware in the near future.
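The memory-scaling point above is back-of-envelope arithmetic, which can be sketched in a few lines of Python. The 4-bytes-per-parameter (fp32) figure and the 175-billion-parameter GPT-3 scale are my assumptions for the sketch; real training needs several times more memory for optimizer state and activations, so this is a lower bound, not a settled estimate:

```python
# Back-of-envelope: memory to merely STORE a model's parameters.
# Assumes 4 bytes per parameter (fp32). Optimizer state, gradients,
# and activations during training multiply this several times over.

def param_memory_gb(n_params: float, bytes_per_param: int = 4) -> float:
    """Gigabytes needed to hold n_params parameters."""
    return n_params * bytes_per_param / 1e9

# GPT-3 scale: ~175 billion parameters
print(param_memory_gb(175e9))    # 700.0 GB

# A hypothetical 1000x scale-up scales linearly:
print(param_memory_gb(175e12))   # 700,000 GB = 700 TB
```

The sketch only shows that parameter memory grows linearly with count; whether that actually outruns available hardware within a few GPT generations depends on how fast the parameter counts and the hardware each grow.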
2. I don’t think humans are deep learning algorithms. I think human brains are made of neurons, which seems like a thing I could simulate in a computer, but not with just deep learning.
3. I don’t expect just-deep-learning to become an AGI. Perhaps [in my opinion: probably] parts of the AGI will be written using deep learning, though; it does seem pretty good at some things. [I don’t actually know, I can think out loud with you.]
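To make "simulating neurons need not mean deep learning" concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard non-deep-learning neuron model. All constants (leak rate, threshold, weight) are arbitrary choices for illustration, not biological measurements:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a neuron simulation
# with no gradient descent and no layers -- i.e., not deep learning.
# Constants below are illustrative, not biologically calibrated.

def simulate_lif(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.3):
    """Step one LIF neuron over a list of binary input spikes.
    Returns the time steps at which the neuron fired."""
    v = v_rest
    spikes = []
    for t, x in enumerate(inputs):
        v = leak * v + weight * x   # decay toward rest, integrate input
        if v >= v_thresh:           # threshold crossing -> spike
            spikes.append(t)
            v = v_rest              # reset after firing
    return spikes

# Constant drive periodically pushes the membrane over threshold:
print(simulate_lif([1] * 10))  # [3, 7]
```

The point of the sketch is only that "neurons in a computer" is a broader category than deep learning; whole-brain-style simulation would be this kind of dynamics at vastly larger scale.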
The relevant thing in [sentient / smart / whatever] is “the ability to achieve complex goals”
a. Are you asking if an AI can ever be as “smart” [good at achieving goals] as a human?
b. The dangerous parts of the AGI being “smart” are things like “able to manipulate humans” and “able to build an even better AGI”
Does this answer your questions? Feel free to follow up
2: No.