I agree that these analogies might be superficial; I simply noted that they exist, in reply to Eliezer stating “I don’t think humans go wrong in quite the same way...”
The specific details of human thinking and acting are different from the specific details of AIXI functioning.
Do we really know the “specific details of human thinking and acting” well enough to make this statement?
Do we really know the “specific details of human thinking and acting” well enough to make this statement?
I believe we know quite enough to consider it pretty unlikely that the human brain stores an infinite number of binary descriptions of Turing machines along with their probabilities, which are initialized by Solomonoff induction at birth (or perhaps at conception) and later updated on evidence according to Bayes’ theorem.
Even if words like “infinity” or “incomputable” are not convincing enough (okay, perhaps the human brain runs the AIXI algorithm with some unimportant rounding), there are things like human-specific biases generated by evolutionary pressures—which is one of the main points of this whole website.
Seriously, the case is closed.
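To make the objection concrete, here is a toy sketch of my own (not anything from the thread): real Solomonoff induction ranges over all Turing machines and is incomputable, so this drastically truncated stand-in uses short repeating bit patterns as "programs", a 2^-length simplicity prior, and Bayesian elimination on observed bits:

```python
from fractions import Fraction
from itertools import product

# Toy "programs": each hypothesis is a finite bit pattern that repeats
# forever. Real Solomonoff induction ranges over ALL Turing machines
# (infinitely many) and is incomputable; this finite cut-off is only
# an illustration.
def hypotheses(max_len):
    for n in range(1, max_len + 1):
        for bits in product([0, 1], repeat=n):
            yield bits

def prior(pattern):
    # Simplicity prior: weight 2^-length, so shorter patterns are favored.
    return Fraction(1, 2 ** len(pattern))

def predicts(pattern, data):
    # A deterministic hypothesis survives iff it reproduces the data exactly.
    return all(data[i] == pattern[i % len(pattern)] for i in range(len(data)))

def posterior(data, max_len=4):
    # Bayes over deterministic hypotheses: zero out inconsistent ones,
    # renormalize the prior weights of the survivors.
    weights = {h: prior(h) for h in hypotheses(max_len) if predicts(h, data)}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

post = posterior((0, 1, 0, 1, 0))
# The shortest consistent pattern, (0, 1), gets the largest posterior mass.
best = max(post, key=post.get)
```

The point of the toy is only that even this severely finite caricature requires explicit enumeration of program-hypotheses and exact reweighting—which is precisely the part that looks nothing like a brain.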
Even if words like “infinity” or “incomputable” are not convincing enough
Presumably any realizable version of AIXI, like AIXItl, would have to use a finite amount of computation, so no.
there are things like human-specific biases generated by evolutionary pressures
Right. However, some of those could be due to improper weighting of some of the models, poor priors, etc. I am not sure that the case is as closed as you seem to imply.