I affirm all of this response except the last sentence. I don’t think humans go wrong in quite the same way...
No?
Scenario 1:
would be in favor of actions which lead to its host machine’s destruction.
Soldiers do that when they volunteer to go on suicide missions.
Scenario 2:
actions would optimize the physical parameter corresponding to its reward circuit regardless of what it predicts would happen to its reward circuit.
That’s the reason people write wills.
Scenario 3:
AIXI’s model for its output circuit may be that it influences the physical state even after its host machine no longer implements it. In that case, it would not be reluctant to tamper with its input circuit.
That’s how addicts behave. Or even non-addicts when they choose to imbibe (and possibly drive afterward).
These are some rather broad analogies. But analogies are not the best way to reason about something when we already know the important details, and those details happen to be different.
The specific details of human thinking and acting are different from the specific details of AIXI functioning. Sometimes an analogous thing happens; sometimes not. And the only way to know whether the situation is analogous is if you already know it.
I agree that these analogies might be superficial; I simply noted that they exist, in reply to Eliezer stating “I don’t think humans go wrong in quite the same way...”
The specific details of human thinking and acting are different from the specific details of AIXI functioning.
Do we really know the “specific details of human thinking and acting” well enough to make this statement?
Do we really know the “specific details of human thinking and acting” well enough to make this statement?
I believe we know quite enough to consider it pretty unlikely that the human brain stores an infinite number of binary descriptions of Turing machines along with their probabilities, which are initialized by Solomonoff induction at birth (or perhaps at conception) and later updated on evidence according to Bayes’ theorem.
Even if words like “infinity” or “uncomputable” are not convincing enough (okay, perhaps the human brain runs the AIXI algorithm with some unimportant rounding), there are things like human-specific biases generated by evolutionary pressures—which is one of the main points of this whole website. Seriously, the case is closed.
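(For concreteness, a rough sketch of the standard equations being alluded to here, in Hutter’s notation rather than anything quoted from the post: the Solomonoff prior weights every environment program q by 2^{−ℓ(q)}, conditioning the resulting mixture on the observed history is exactly the Bayesian update, and actions are chosen by expectimax over that mixture:

$$\xi(x_{1:t}) \;=\; \sum_{q\,:\,U(q)=x_{1:t}\ast} 2^{-\ell(q)}$$

$$a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_t+\cdots+r_m\big) \sum_{q\,:\,U(q,\,a_{1:m})=o_{1:m} r_{1:m}} 2^{-\ell(q)}$$

where U is a universal monotone Turing machine and ℓ(q) is the length of program q. The sum over all programs is what makes this uncomputable.)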
Even if words like “infinity” or “uncomputable” are not convincing enough
Presumably any realizable version of AIXI, like AIXItl, would have to use a finite amount of computation, so no.
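(As far as I understand Hutter’s construction, AIXItl only considers policy programs of length at most l, gives each at most t steps of computation per cycle, and then acts on the policy with the best provably justified value claim, so the per-cycle cost is bounded by roughly

$$\text{time per cycle} \;=\; O\!\big(t \cdot 2^{l}\big),$$

which is finite, though obviously nothing a brain could be running.)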
there are things like human-specific biases generated by evolutionary pressures
Right. However, some of those could be due to improper weighting of some of the models, or poor priors, etc. I am not sure that the case is as closed as you seem to imply.