I feel like H2 shouldn’t be true, because of the No Free Lunch theorem: if X + Y is better than X at some task, it must be worse than X at some other task.
This depends on your ontology of course, but just thought I’d point out a case where it fails.
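To make the intuition concrete (this is only a sketch; the performance functional $\mathrm{Perf}$, the task set $\mathcal{F}$, and the sum ranging over *all* tasks are my notation and assumptions, not something fixed by the post): the NFL-style statement is that, summed uniformly over all tasks, any two agents do equally well,

$$\sum_{f \in \mathcal{F}} \mathrm{Perf}(X+Y,\, f) \;=\; \sum_{f \in \mathcal{F}} \mathrm{Perf}(X,\, f),$$

so if $\mathrm{Perf}(X+Y, f_0) > \mathrm{Perf}(X, f_0)$ for some task $f_0$, there must exist some $f_1$ with $\mathrm{Perf}(X+Y, f_1) < \mathrm{Perf}(X, f_1)$.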
Although we cannot say this rigorously yet, since we have not chosen a definition of agent, I think the intuition applies, and therefore (H2) can only hold when you restrict to some set of tasks, perhaps “reasonable tasks”.
I wonder if, under the stochastic interpretation of a task, this issue disappears, because “No Free Lunch” tasks that diagonalize against a model in a particular fashion have very low probability.
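Roughly, the same sketch under a task distribution (again my notation, assuming tasks are drawn from some distribution $\mu$ rather than counted uniformly): the comparison relevant to (H2) becomes

$$\mathbb{E}_{f \sim \mu}\big[\mathrm{Perf}(X+Y,\, f)\big] \;\stackrel{?}{>}\; \mathbb{E}_{f \sim \mu}\big[\mathrm{Perf}(X,\, f)\big],$$

and if the diagonalizing tasks $f_1$ carry very little $\mu$-mass, they contribute almost nothing to the expectation, so X + Y can beat X in expectation without contradicting the sum-over-all-tasks identity above.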