Basically, I’m trying to derive decision theory as the approximate unique solution to “don’t automatically lose”.
It occurred to me that someone should be doing something like this on OB or LW, so… I’m making a go of it.
I’m afraid that’s still a little too vague for me to make much sense of. What decision theory are you trying to derive? How does this particular decision theory differ (if at all) from other decision theories out there? If you’re deriving it from different premises/axioms than other people already have, how do these relate to existing axiomatizations?
Perhaps most importantly, why does this need to be done from scratch on OB/LW? I could understand the value of summarizing existing work in a concise and intuitive fashion, but that doesn’t seem to be what you’re doing.
Seems reasonable to me that it would be useful to have somewhere on LW a derivation of decision theory, an answer to “why this math rather than some other?”
I wanted to base it on Dutch book/vulnerability arguments, but then I kept finding things I wanted to generalize and so on. So I decided to do a derivation in that spirit, but with all the things filled in that I felt I needed to fill in for myself. It’s more “here’s what I needed to think through to really satisfy myself with this.” But yeah, I’m just going for ordinary Bayesian decision theory and epistemic probabilities. That’s all. I’m not trying to do anything really novel here.
I’m not so much axiomatizing as working from the “don’t automatically lose” rule.
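For concreteness, here’s a minimal toy sketch of the kind of Dutch book vulnerability I have in mind — the event, the prices, and the code framing are purely illustrative:

```python
# A toy Dutch book: illustrative event and prices only.
# The agent treats a bet paying 1 if an event occurs as worth exactly its stated
# probability for that event, and so is willing to buy such a bet at that price.

p_A     = 0.6   # agent's stated probability for event A
p_not_A = 0.6   # agent's stated probability for not-A (incoherent: 0.6 + 0.6 > 1)

# A bookie sells the agent a unit-stake bet on A and a unit-stake bet on not-A,
# each priced at the agent's own stated probability.
total_price_paid = p_A + p_not_A          # the agent pays 1.2 up front

for a_happens in (True, False):
    payout = 1.0                          # exactly one of the two bets pays off
    net = payout - total_price_paid       # the agent's net result for this outcome
    print(f"A happens: {a_happens}, agent's net: {net:+.2f}")

# Both branches print -0.20: the agent loses no matter what happens. Refusing to
# be exploitable like this ("don't automatically lose") forces P(A) + P(not-A) = 1,
# and arguments in the same spirit are what push toward the rest of the
# probability axioms and expected-utility maximization.
```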
Just a thought, but I wonder whether it might work better to:
start with the dutch book arguments;
explicitly state the preconditions necessary for them to work;
gradually build backwards, filling in the requirements for the most problematic preconditions first.
This would still need to be done well, but it has the advantage that it’s much clearer where you’re going with everything, and what exactly you’re trying to show at each stage.
At the moment, for example, I’m having difficulty evaluating your claim to have shown that utility “indices actually corresponds in a meaningful way to how much you prefer one thing to another”. One reason for that is that the claim is ambiguous. There’s an interpretation on which it might be true, and at least one interpretation on which I think it’s likely to be false. There may be other interpretations that I’m entirely missing. If I knew what you were going to try to do with it next, it would be much easier to see what version you need.
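To gesture at the interpretation on which I’d expect it to be true: the utility numbers are “meaningful” only in the sense that they fix your preferences over gambles, and any positive affine rescaling of them encodes exactly the same preferences. A toy illustration of that point (the outcomes and numbers are mine, purely for illustration):

```python
# Toy illustration (purely illustrative outcomes and numbers) of the weak,
# plausibly-true reading: utility indices are "meaningful" only via the
# preferences over gambles that they encode, and any positive affine
# rescaling encodes the very same preferences.

def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs; u: dict outcome -> utility."""
    return sum(p * u[x] for p, x in lottery)

u1 = {"worst": 0.0, "middling": 0.7, "best": 1.0}
u2 = {x: 100 * v - 5 for x, v in u1.items()}      # positive affine transform of u1

lottery_a = [(1.0, "middling")]                    # middling outcome for sure
lottery_b = [(0.8, "best"), (0.2, "worst")]        # 80% best, 20% worst

for u in (u1, u2):
    ea = expected_utility(lottery_a, u)
    eb = expected_utility(lottery_b, u)
    print(ea, eb, "prefers b" if eb > ea else "prefers a")

# Both utility assignments rank the gambles identically, so the 0.7 "how much"
# is cashed out entirely in terms of which gambles it makes you accept -- not as
# some scale-independent measure of how intensely you prefer one thing to another.
```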
Taking this approach would also mean that you can focus on the highest value material first, without getting bogged down in potentially less relevant details.
That’s only if the derivation is good. I warned you that you’re going to shoot your feet off if you’re not really prepared. Even the classical axiomatizations have some trouble convincing people to trust them.