If statements about whether an algorithm exists are not objectively true or false, then there is also no objectively correct decision theory, since the existence of agents is not objective in the first place. Of course, you might even agree with this and yet not consider it an objection, since you can just say that decision theory is something we want to do, not something objective.
Yes, I share the impression that the BPB problem implies some amount of decision-theory relativism. That said, one could argue that decision theories cannot be objectively correct anyway. In most areas, statements can only be justified relative to some foundation: probability assignments are correct relative to a prior, the truth of theorems depends on axioms, and whether you should take some action depends on your goals (or meta-goals). Priors, axioms, and goals themselves, on the other hand, cannot be justified (unless you have some meta-priors, meta-axioms, etc., but I think the chain has to end at some point; see https://en.wikipedia.org/wiki/Regress_argument ). Perhaps decision theories are similar to priors, axioms, and terminal values?
I agree that any chain of justification has to come to an end at some point, certainly in practice and presumably in principle. But it does not follow that the starting point, the thing with no further justification, is not objectively correct or incorrect. The typical realist response in all of these cases, with which I agree, is that your starting point is correct or incorrect in virtue of its relationship with reality, not in virtue of a relationship to some justification. Of course, if it really is your starting point, you will not be able to prove that it is correct or incorrect. That does not mean it is not one or the other, unless you assume from the outset that none of your starting points have any relationship at all with reality. But in that case, it would be equally reasonable to conclude that your starting points are objectively incorrect.
Let me give some examples:
An axiom: a statement cannot be both true and false in the same way. It does not seem possible to prove this, since if it is open to question, anything you say while trying to prove it, even if you think it is true, might also be false. But if this is the way reality actually works, then it is objectively correct even though you cannot prove that it is. Saying that it cannot be objectively correct because you cannot prove it seems, in this case, similar to saying that there is no such thing as reality: in other words, again, saying that your axioms have no relationship at all to reality.
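To illustrate the circularity: here is the standard truth-table demonstration of the axiom, sketched in LaTeX. Notice that the table already presupposes what it demonstrates, since each row assumes that $p$ takes exactly one of the two truth values, never both.

```latex
% Truth table for the law of non-contradiction, \neg(p \wedge \neg p).
% The table presupposes the axiom it demonstrates: each row assumes
% that p has exactly one truth value.
\begin{tabular}{c|c|c|c}
  $p$ & $\neg p$ & $p \wedge \neg p$ & $\neg(p \wedge \neg p)$ \\ \hline
  T   & F        & F                 & T \\
  F   & T        & F                 & T
\end{tabular}
```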
A prior: if there are three possibilities and nothing gives me reason to suspect one more than another, then each has a probability of 1/3. Mathematically it is possible to derive this from a symmetry argument (sketched below), but in another sense there is nothing to prove: it really just says that if there are three equal possibilities, they have to be treated as equal possibilities and not as unequal ones. In that sense it is exactly like the above axiom: if reality is the way the axiom says, it is also the way this prior says, even though no one can prove it.
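For concreteness, here is a minimal sketch of that symmetry argument (the labels $p_1, p_2, p_3$ for the three probabilities are mine): since nothing distinguishes the possibilities, relabeling them must leave the distribution unchanged.

```latex
% Symmetry (permutation-invariance) argument for the uniform prior.
\begin{align*}
  p_1 &= p_2 = p_3     && \text{(nothing distinguishes the possibilities)} \\
  p_1 + p_2 + p_3 &= 1 && \text{(probabilities are normalized)} \\
  \therefore\quad p_i &= \tfrac{1}{3} && \text{for each } i.
\end{align*}
```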
A terminal goal: continuing to exist. A goal is what something tends towards. Everything tends to exist and does not tend to not exist, and this is necessarily so, exactly because of the above axiom: if a thing exists, it exists and does not not exist, and it is just another way of describing this to say, "Existing things tend to exist." Again, as with the prior, there is something like an argument here, but not really. Once again, though, even if you cannot establish the goal by reference to some earlier goal, the goal is an objective one in virtue of its relationship with reality: this is how tendencies actually work in reality.