Looking at your post at http://lesswrong.com/lw/2id/metaphilosophical_mysteries, I can see the sketch of an argument. It goes something like “we know that some decision theories/philosophical processes are ‘objectively’ inferior, hence some are objectively superior, hence (wave hands furiously) it is at least possible that some system is objectively best”.
I would counter:
1) The argument is very weak. We know some mathematical axiomatic systems are contradictory, hence inferior. It doesn’t follow from this that there is any “best” system of axioms.
2) A lot of philosophical progress is entirely akin to mathematical progress: showing the consequences of the axioms/assumptions. This is useful progress, but not really relevant to the argument.
3) All the philosophical progress seems to lie on the “how to make better decisions given a goal” side; none of it lies on the “how to have better goals” side. Even the expected utility maximisation result (a standard formal version is sketched below this list) just says “if you are unable to predict effectively over the long term, then to achieve your current goals, it would be more efficient to replace these goals with others compatible with a utility function”.
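For concreteness, here is the kind of formal statement I have in mind (I’m assuming the “expected utility maximisation result” is something like the von Neumann–Morgenstern theorem, or a coherence argument in that family): if an agent’s preferences \succeq over lotteries satisfy completeness, transitivity, continuity and independence, then there is a utility function u, unique up to positive affine transformation, such that for any lotteries L and M

$$ L \succeq M \;\iff\; \mathbb{E}_{x \sim L}[u(x)] \;\ge\; \mathbb{E}_{x \sim M}[u(x)]. $$

The theorem constrains the form the goals must take (maximising the expectation of some u) but is silent on which u to adopt, which is exactly the “better decisions” versus “better goals” distinction above.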
However, despite my objections, I have to note that the argument is at least an argument, and provides some small evidence in that direction. I’ll try and figure out whether it should be included in the paper.