Seems reasonable to me that it would be useful to have somewhere on LW a derivation of decision theory, an answer to “why this math rather than some other?”
I wanted to base it on Dutch book/vulnerability arguments, but then I kept finding things I wanted to generalize and so on. So I decided to do a derivation in that spirit, but with all the things filled in that I felt I had needed to fill in for myself. It’s more “here’s what I needed to think through to really satisfy myself with this.” But yeah, I’m just going for ordinary Bayesian decision theory and epistemic probabilities. That’s all. I’m not trying to do anything really novel here.
I’m not so much axiomatizing as working from the “don’t automatically lose” rule.
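For concreteness, here’s a minimal sketch of the kind of guaranteed loss that rule is meant to exclude; the credences and stakes are made-up numbers, purely illustrative:

```python
# Minimal Dutch book sketch: an agent whose credences in an event and
# its negation sum to more than 1, and who prices bets at those
# credences, accepts a pair of bets that loses in every outcome.

credence = {"rain": 0.7, "no rain": 0.5}  # incoherent: sums to 1.2

stake = 1.0  # each ticket pays `stake` if its event occurs, else 0

# Buying each ticket at credence * stake costs 1.2 up front...
total_cost = sum(p * stake for p in credence.values())

# ...but exactly one ticket pays off, whatever the weather does.
for outcome in credence:
    net = stake - total_cost
    print(f"{outcome}: net = {net:+.2f}")  # -0.20 either way: a sure loss
```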
I wanted to base it on Dutch book/vulnerability arguments, but then I kept finding things I wanted to generalize and so on. So I decided to do a derivation in that spirit, but with all the things filled in that I felt I had needed to fill in for myself. It’s more “here’s what I needed to think through to really satisfy myself with this.”
Just a thought, but I wonder whether it might work better to:
1. start with the Dutch book arguments;
2. explicitly state the preconditions necessary for them to work;
3. gradually build backwards, filling in the requirements for the most problematic preconditions first.
This would still need to be done well, but it has the advantage of making it much clearer where you’re going with everything, and exactly what you’re trying to show at each stage.
At the moment, for example, I’m having difficulty evaluating your claim to have shown that utility “indices actually correspond in a meaningful way to how much you prefer one thing to another”. One reason is that the claim is ambiguous: there’s an interpretation on which it might be true, and at least one interpretation on which I think it’s likely to be false. There may be other interpretations that I’m entirely missing. If I knew what you were going to do with it next, it would be much easier to see which version you need.
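For what it’s worth, the reading I’d expect to come out true is the standard one: ratios of utility differences are invariant under positive affine rescaling, so “A over B matters twice as much as B over C” is meaningful even though the absolute numbers are not. A toy sketch, with made-up utility values:

```python
# Toy illustration (made-up numbers): which comparisons a utility
# index supports. Any positive affine transform a*u + b (with a > 0)
# represents the same preferences over lotteries.

u = {"A": 10.0, "B": 4.0, "C": 1.0}

def rescale(util, a=3.0, b=7.0):
    """Apply a positive affine transform to the utility index."""
    return {k: a * x + b for k, x in util.items()}

v = rescale(u)

# Meaningful: ratios of utility differences survive rescaling.
print((u["A"] - u["B"]) / (u["B"] - u["C"]))  # 2.0
print((v["A"] - v["B"]) / (v["B"] - v["C"]))  # 2.0

# Not meaningful: raw levels and raw differences depend on the scale.
print(u["A"], v["A"])                    # 10.0 vs 37.0
print(u["A"] - u["B"], v["A"] - v["B"])  # 6.0 vs 18.0
```

The reading I’d expect to be false is the one where the raw numbers themselves measure intensity of preference; if that’s the version the next step relies on, that’s where I’d expect trouble.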
Taking this approach would also mean you could focus on the highest-value material first, without getting bogged down in potentially less relevant details.
Seems reasonable to me that it would be useful to have somewhere on LW a derivation of decision theory, an answer to “why this math rather than some other?”
That’s only if the derivation is good. I warned you that you’re going to shoot yourself in the foot if you’re not really prepared. Even the classical axiomatizations have trouble convincing people to trust them.