Regarding the following passage from the document:
What kind of built-in operations and environments should we use?
In existing work on NPI, the neural net is given outputs that correspond to basic operations on data. This makes it easier to learn algorithms that depend on those basic operations. For IDA, it would be ideal to learn these operations from examples. (If we were learning from human decompositions, we might not know about these “basic operations on data” ahead of time).
Do you have ideas/intuitions about how "basic operations" in the human brain could be learned? Also, how basic are the "basic operations" you're thinking about here? (Are we talking about something like the activity of an individual biological neuron? The activity level of a particular area in the prefrontal cortex? Symbolic-level stuff?)
Generally, do you consider imitating human cognition at the level of “basic operations” to be part of the IDA agenda? (As opposed to, say, training a model to “directly” predict the output of a human-working-for-10-minutes).
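For concreteness, here is a minimal sketch of what "basic operations on data" look like in existing NPI work, modeled loosely on the scratchpad addition task from Reed & de Freitas's NPI paper. Everything below (class and method names, the environment layout) is illustrative rather than taken from any actual NPI codebase; the point is just that the primitives (move a pointer, read/write a digit) are hand-specified by the researcher, which is exactly what the quoted passage suggests IDA would instead want to learn from examples.

```python
# Illustrative sketch (not the NPI paper's actual code): a scratchpad
# environment for multi-digit addition, where "basic operations on data"
# are the hand-specified primitive actions a learned controller emits.

class AdditionScratchpad:
    """Four-row scratchpad (input1, input2, carry, output) with one
    column pointer per row, loosely following the NPI addition task."""

    def __init__(self, a: str, b: str):
        width = max(len(a), len(b)) + 1          # room for a final carry
        self.rows = {
            "in1":   list(a.rjust(width, "0")),
            "in2":   list(b.rjust(width, "0")),
            "carry": ["0"] * width,
            "out":   ["0"] * width,
        }
        # All pointers start at the rightmost (least significant) column.
        self.ptr = {row: width - 1 for row in self.rows}

    # --- the "basic operations" the controller is given ----------------
    def move_ptr(self, row: str, direction: int) -> None:
        """Primitive: shift one row's pointer left (-1) or right (+1)."""
        self.ptr[row] += direction

    def write(self, row: str, digit: str) -> None:
        """Primitive: write a digit at the current pointer of a row."""
        self.rows[row][self.ptr[row]] = digit

    def read(self, row: str) -> str:
        """Primitive: read the digit under a row's pointer."""
        return self.rows[row][self.ptr[row]]


# A hand-coded trace of the kind a controller would be trained to emit.
def add(a: str, b: str) -> str:
    env = AdditionScratchpad(a, b)
    for _ in range(max(len(a), len(b))):
        s = int(env.read("in1")) + int(env.read("in2")) + int(env.read("carry"))
        env.write("out", str(s % 10))
        for row in env.rows:                     # move every pointer left
            env.move_ptr(row, -1)
        env.write("carry", str(s // 10))         # carry lands one column left
    env.write("out", env.read("carry"))          # final carry digit
    return "".join(env.rows["out"]).lstrip("0") or "0"


assert add("96", "25") == "121"
```

In NPI training, the controller network is supervised on traces of exactly these primitive calls. The question above is whether analogous primitives for human cognition could be identified or learned from human decompositions, rather than specified ahead of time like this.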