If he’s not talking about some stable information that is present in all environments that yield intelligent humans, then what’s important is a kind of information that can be mass generated at low complexity cost.
Even language exposure is relatively low complexity, and the key parts might be inferable from brain processes. And we already know how to offer a socially rich environment, so I don’t think it should add to the complexity costs of this problem.
And I think a reverse engineering of a newborn baby's brain would be quite sufficient for Kurzweil's goal.
In short: we know intelligent brains get reliably generated. We know it’s very complex. The source of that complexity must be something information rich, stable, and universal. I know of exactly one such source.
Right now I'm reading Myers's argument as "a big part of human heredity is memetic rather than just genetic, and there is complex interplay between genes and memes, so you've got to count the memes as part of the total complexity."
I say that Kurzweil is trying to create something compatible with human memes in the first place, so we can load them the same way we load children (at worst). And even for those classes of memes (e.g. age-appropriate language exposure) that do interact tightly with genes, their information content is not all that high.
While doable, this seems like a very time-consuming project, and potentially a morally dubious one. How do you know when you have succeeded, rather than produced a mildly brain-damaged one because you missed a detail important for language learning?
We really don't want to be running multi-year experiments in which humans have to interact with infant machines; that would be ruinously expensive. The quicker you can evaluate the capabilities of the machine, the better.
Well, in Kurzweil's case, you'd look at the source code and debug it to make sure it's doing everything it's supposed to, because he's not dealing with a meat brain.
I guess my real point is that language learning should not be tacked on to the problem of reverse engineering the brain. If he makes something that is as capable of learning, that's a win for him. (Hopefully he also reverse engineers all of human morality.)
You are assuming the program found via the reverse engineering process is human-understandable… What if it is a strange cellular automaton with odd rules? Or an algorithm whose parameters are what they are for no reason you can discern?
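The worry can be made concrete with a toy example. Rule 110, an elementary cellular automaton, is known to be Turing-complete, yet its entire "source code" is an eight-entry lookup table; reading that table tells you almost nothing about what any particular run is computing. A minimal sketch:

```python
# Toy illustration: Rule 110, an elementary cellular automaton.
# The whole "program" is an 8-entry rule table, yet the dynamics it
# produces are Turing-complete -- inspecting the code does not tell
# you what the system is actually computing.

RULE = 110
# Map each 3-cell neighborhood (encoded as a number 0-7) to the next state.
rule_table = {n: (RULE >> n) & 1 for n in range(8)}

def step(cells):
    """Advance one generation (cells wrap around at the edges)."""
    size = len(cells)
    return [
        rule_table[(cells[(i - 1) % size] << 2)
                   | (cells[i] << 1)
                   | cells[(i + 1) % size]]
        for i in range(size)
    ]

# Start from a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Every line of this is transparent, but "understanding the function of the program" in any useful sense is a much harder problem than reading it.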
Language is an important part of learning for humans. Imagine trying to learn chess if no one explained the legal moves. Something without the capability for language isn’t such a big win IMHO.
I think we might have different visions of what this reverse engineering would entail. By my concept, if you don't understand the function of the program you wrote, you're not done reverse engineering.
I do think that something capable of learning language would be necessary for a win, but the information content of the language does not count towards the complexity estimate of the thing capable of learning language.
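The distinction I'm drawing can be made concrete: the description length of a learner stays fixed and small even as the information it absorbs grows without bound. As a toy sketch (a bigram next-word predictor standing in, very loosely, for "something capable of learning language"):

```python
# Toy illustration: the learner is a few fixed lines of code, while the
# information it ends up holding scales with the language data fed in.
# The data's information content doesn't count toward the complexity
# of the learner itself.
from collections import Counter, defaultdict

class BigramLearner:
    """A tiny next-word predictor: constant-size "genome", unbounded "memes"."""

    def __init__(self):
        # For each word, count which words follow it.
        self.counts = defaultdict(Counter)

    def learn(self, text):
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word):
        """Return the most frequent follower seen so far, or None."""
        following = self.counts.get(word)
        return following.most_common(1)[0][0] if following else None

learner = BigramLearner()
learner.learn("the cat sat on the mat and the cat slept")
print(learner.predict("the"))  # prints "cat"
```

However much text you pour into `learn`, the program itself never gets longer; only its acquired state does.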
Yes, I disagree.