We need to understand information encoding in the brain before we can achieve full AGI.
Maybe. For many years, I went around saying that we’d never have machines that accurately transcribe natural speech until those machines understood the meaning of the speech. I thought that context was necessary.
I was wrong.
What makes you think our latest transcription AIs don’t understand the meaning of the speech? And what makes you think they have reached a level of accuracy that your past self would have accepted as proof of understanding? Maybe they still make mistakes sometimes, and maybe your past self would have pointed to those mistakes and said, “See, they don’t really understand.”
They do use context, surely: the immediate verbal context, plus training on vast amounts of other text, which is background ‘information’. That background is not tantamount to meaning (since it isn’t connected to the world), but it would form a significant part of meaning.
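As a hedged illustration of that point, here is a minimal sketch (the corpus and every name in it are invented for the example) of how even a crude statistical language model, trained on nothing but background text, uses the surrounding words to pick the right homophone in a transcript, with no grounding in the world at all:

```python
from collections import Counter, defaultdict
import math

# Toy background corpus (hypothetical); it stands in for the vast text a
# real transcription model is trained on.
corpus = (
    "please pour two cups of tea . "
    "she walked to the station . "
    "he was too tired to walk . "
    "two trains left the station ."
).split()

# Count word-pair frequencies: the "immediate verbal context" statistic.
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def log_prob(sentence, alpha=1.0):
    """Add-one-smoothed bigram log-probability of a candidate transcript."""
    words = sentence.split()
    vocab = len(set(corpus))
    score = 0.0
    for prev, word in zip(words, words[1:]):
        counts = bigrams[prev]
        score += math.log((counts[word] + alpha) /
                          (sum(counts.values()) + alpha * vocab))
    return score

# An acoustic model hears /tu:/ and cannot distinguish "two", "too", "to";
# the background statistics break the tie in favour of the contextual fit.
candidates = ["please pour two cups",
              "please pour too cups",
              "please pour to cups"]
print(max(candidates, key=log_prob))  # -> please pour two cups
```

Whether ranking candidates by contextual probability counts as understanding the meaning of the speech is, of course, exactly the question at issue.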