We need to understand information encoding in the brain before we can achieve full AGI.
The machine learning stuff comes with preexisting artificial encoding. We label stuff ourselves.
Maybe. For many years, I went around saying that we’d never have machines that accurately transcribe natural speech until those machines understood the meaning of the speech. I thought that context was necessary.
I was wrong.
What makes you think our latest transcription AIs don’t understand the meaning of the speech? Also, what makes you think they have reached a level of accuracy at which your past self would have conceded that they must understand the meaning of the speech? Maybe they still make mistakes sometimes, and maybe your past self would have pointed to those mistakes and said “see, they don’t really understand.”
They do use context, surely—viz. the immediate verbal context, plus training on vast amounts of other text, which is background ‘information’. That background, though not tantamount to meaning (since it isn’t connected to the world), would form a significant part of meaning.
Generally speaking, that’s not as true as it used to be. In particular, a lot of work from DeepMind (such as the Atari-playing breakthrough from a while ago) operates on raw video inputs. (I haven’t looked at the paper from the OP to verify it’s the same.)
Also, I have the impression that DeepMind takes a “copy the brain” approach fairly seriously, and they think of papers like this as relevant to that. But I am not sure of the details.
They’re working with raw RGB input here too.
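The “raw video inputs” point can be made concrete: the Atari agents consumed nothing but pixel frames (plus the game’s own score as a reward signal), with frames typically grayscaled and downsampled before reaching the network—no human-provided labels anywhere in the pipeline. A minimal sketch of that kind of preprocessing, with made-up helper names (this is an illustration, not DeepMind’s actual code):

```python
# Toy preprocessing of a raw RGB frame, as used in pixel-input agents.
# A frame is H x W nested lists of (r, g, b) ints in [0, 255].

def to_grayscale(frame):
    """Convert RGB to grayscale using integer BT.601-style luma weights."""
    return [[(r * 299 + g * 587 + b * 114) // 1000 for (r, g, b) in row]
            for row in frame]

def downsample(gray, factor):
    """Keep every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in gray[::factor]]

# A tiny 4x4 "frame" of pure red pixels:
frame = [[(255, 0, 0)] * 4 for _ in range(4)]
gray = to_grayscale(frame)    # 4x4 grid of luma values
small = downsample(gray, 2)   # 2x2 grid
```

The point being illustrated: the agent’s input is just this pixel grid; any “encoding” it ends up with is learned from reward, not hand-labeled.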
Did evolution need to understand information encoding in the brain before it achieved full GI?
Upvoted because it seems harmful to downvote comments that are wrong but otherwise adhere to LW norms.
(If this comment actually does violate a norm, I’m curious as to which.)
I think voting should happen primarily based on quality of contribution, not norm violation. For norm violations, you can report the comment to the mods. It’s pretty reasonable for some comments that don’t violate any norms but just seem generically low-quality to get downvoted (indeed, most of the internet is full of low-quality contributions that make discussions frustrating and hard to interact with without violating any clear norms, and our downvote system holds them at least somewhat at bay).