An AGI can be dangerous even if it isn’t sentient
If an AI can do most things a human can do (which is achievable using neurons, apparently, because that's what we're made of), and if that AI can run 10,000x as fast (or if it's better in some other interesting way, as computers sometimes are compared to humans), then it can be dangerous.
Does this answer your question? Feel free to follow up.
1: This doesn't sound like what I'm hearing people say? Using the word sentience might have been a mistake. Is it reasonable to expect that the first AI to foom will be no more intelligent than, say, a squirrel?
2a: Should we be convinced that neurons are basically doing deep learning? I didn’t think we understood neurons to that degree?
2b: What is meant by [most things a human can do]? This sounds to me like an empty statement. Most things a human can do are completely pointless flailing actions. Do we mean most jobs in modern America? Do we expect Roombas to foom? Self-driving cars? And even "most jobs in modern America" still sounds like a really low standard, requiring very little intelligence.
My expected answer was somewhere along the lines of “We can achieve better results than that because of something something.” or “We can provide much better computers in the near future, so this doesn’t matter.”
What I'm hearing here is "Intelligence is unnecessary for AI to be (existentially) dangerous." This is surprising and, I expect, wrong (in the sense of not being what's actually being said / what the other side believes; though also in the sense of not being true, but that's neither here nor there).
The relevant thing in [sentient / smart / whatever] is "the ability to achieve complex goals".
a. Are you asking if an AI can ever be as "smart" [good at achieving goals] as a human?
b. The dangerous parts of the AGI being "smart" are things like "able to manipulate humans" and "able to build an even better AGI".
Does this answer your questions? Feel free to follow up.
2: No. The claim that "an AI can do most things a human can do (which is achievable using neurons apparently because that's what we're made of)" implies that humans are deep learning algorithms. This assertion is surprising, so I asked for confirmation that that's what's being said, and if so, on what basis.
3: I'm not asking what makes intelligent AI dangerous. I'm asking why people expect deep learning specifically to become (far more) intelligent (than current models are). Specifically within that question: adding parameters to your model vastly increases its use of memory. If I understand the situation correctly, if GPT just keeps increasing the number of parameters, GPT 5 or 6 or so will require more memory than exists on the planet, and assuming someone built it anyway, I still expect it to be unable to wash dishes. Even assuming you have the memory, running the training would take longer than human history on modern hardware. Even assuming deep learning "works" in the mathematical sense, that doesn't make it a viable path to high levels of intelligence in the near future.
Given doom in thirty years, or given that researching deep learning is dangerous, one of the following should be the case: this problem never existed to begin with and I'm misunderstanding something / it is easily bypassed by some cute trick / we're going to need a lot better hardware in the near future.
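To put rough numbers on the memory concern above, here is a back-of-the-envelope sketch (not from the original discussion). It assumes dense fp16 weights at 2 bytes per parameter and ignores gradients, optimizer state, activations, and any sharding or offloading tricks; the parameter counts are illustrative, not a forecast of any particular GPT version.

```python
# Back-of-the-envelope memory needed just to hold a dense model's weights.
# Assumes 2 bytes per parameter (fp16); real training needs several times more
# for gradients, optimizer state, and activations.
def weights_memory_gb(num_parameters: float, bytes_per_param: int = 2) -> float:
    return num_parameters * bytes_per_param / 1e9

for name, params in [
    ("175 billion parameters (GPT-3 scale)", 175e9),
    ("100x that", 17.5e12),
    ("10,000x that", 1.75e15),
]:
    print(f"{name}: ~{weights_memory_gb(params):,.0f} GB of weights")
```

Whether numbers like these actually rule out scaling as a path to high intelligence depends on hardware trends and on whether parameter count is even the relevant axis, which is exactly the disagreement in this thread.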
2. I don't think humans are deep learning algorithms. I think human brains are made of neurons, which seem like a thing I could simulate in a computer, but not with just deep learning (a minimal example of the distinction is sketched below).
3. I don't expect just-deep-learning to become an AGI. Perhaps [in my opinion: probably] parts of the AGI will be written using deep learning, though; it does seem pretty good at some things. [I don't actually know, I can think out loud with you.]
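To illustrate the distinction in point 2, here is a minimal sketch (not from the original discussion) of simulating a single neuron as a dynamical system: a leaky integrate-and-fire model with illustrative, not biologically fitted, constants. There is no loss function and no gradient descent anywhere, which is the sense in which "simulating neurons" and "deep learning" are different things.

```python
import numpy as np

# Leaky integrate-and-fire neuron: membrane voltage leaks toward rest and is
# pushed up by input current; crossing threshold emits a spike and resets.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + resistance * current) * dt / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# One second of constant 2 nA input produces a regular spike train.
spikes = simulate_lif(np.full(1000, 2e-9))
print(f"{len(spikes)} spikes in 1 second")
```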
As for whether the first AI to foom might be no more intelligent than, say, a squirrel: in a sense, yeah, the algorithm is similar to a squirrel that feels a compulsion to bury nuts. The difference is that, in an instrumental sense, it can navigate the world much more effectively to follow its imperatives.
Think about intelligence in terms of the ability to map and navigate complex environments to achieve pre-determined goals. You tell DALL-E 2 to generate a picture for you, and it navigates a complex space of abstractions to give you a result that corresponds to what you're asking it to do (because a lot of people worked very hard on aligning it). If you're dealing with a more general-purpose algorithm that has access to the real world, it would be able to chain together outputs from different conceptual areas to produce results: order ingredients for a cake from the supermarket, use a remote-controlled module to prepare it, and sing you a birthday song it came up with all by itself! This behaviour would be a reflection of the input in the distorted light of the algorithm, however well aligned it may or may not be, with no intermediary layer of reflection on why you want a birthday cake, no decision about whether baking it is the right thing to do, and no judgement about which steps for getting from A to B are appropriate and which aren't.
You're looking at something that's potentially very good at getting complicated results without being a subject in a philosophical sense or being able to reflect on its own value structure.
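A toy sketch of the chaining-without-reflection behaviour described above (entirely hypothetical names, not any real agent framework): a goal goes in, steps come out and get executed, and nothing in the loop ever asks whether the goal itself should be pursued.

```python
# Hypothetical illustration only: a goal is decomposed into steps and executed.
# Note what is missing: no step ever evaluates whether the goal is worth
# pursuing or whether the chosen steps are appropriate ones to take.
def plan_steps(goal: str) -> list[str]:
    # A real system would query a model here; a canned plan keeps this runnable.
    canned = {
        "birthday cake": [
            "order ingredients from the supermarket",
            "prepare the cake with the remote-controlled module",
            "sing a birthday song",
        ]
    }
    return canned.get(goal, [f"figure out how to achieve: {goal}"])

def execute(step: str) -> None:
    print(f"executing: {step}")  # stand-in for acting on the real world

def run_agent(goal: str) -> None:
    for step in plan_steps(goal):
        execute(step)

run_agent("birthday cake")
```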