Doesn’t mean we shouldn’t try.
“statement x is not currently the case and is probably unfeasible” does in fact mean we shouldn’t try to act on it. Maybe we can try to act to make statement x true, but we shouldn’t act as if it already is. For a more concrete example, imagine this: “I’ve never done a backflip. It’s not even clear I can do one”. We know backflips are possible, and with training you’re probably going to be able to do one. But at the time you’re making that statement, saying “doesn’t mean you shouldn’t try” is TERRIBLE advice that could get you a broken neck.
Firstly, that’s kind of an uncharitable reading. If I said “I’m going to try and pass an exam” you’d naturally understand me as planning to do the requisite work first. “Backflip” just pattern-matches to ‘the sort of thing silly people try to do without training’.
However, that said, I’m being disingenuous. What I really truly meant at the time I typed that was moral-should, not practical-should, which come apart if one isn’t a perfect consequentialist. Which I ain’t, which is at least partly the point.
It may well do. Yvain has pointed out on his blog (I recall the post, though I couldn’t find it just now) that in daily life we do actually use something like utilitarianism quite a bit, which carries a presumption of something like a utility function at least in that case. But what works in normal ranges does not necessarily extrapolate: utilitarianism is observably brittle, and routinely reaches conclusions that humans consider absurd.
There are occasionally LW posts showing that utilitarianism gives some apparently-absurd result or other, and too often the poster seems to be saying “look, absurd result, but the numbers work out so this is important!” rather than “oh, I hit an absurdity, perhaps I’m stretching this way further than it goes.” It’s entirely unclear to me that pretending you’re an agent with a utility function is actually a good idea; it seems to me to be setting yourself up to fall into absurdities.
Below, you claim this is a moral choice; I would suggest that trying to achieve an actually impossible moral code, let alone advocating it, is basically unhealthy.
Firstly, I thought we were just appealing to consequentialism, not utilitarianism?
So I think I agree with you that believing you have a utility function if you in fact don’t might suck, and that baseline humans in fact don’t. I was trying to distinguish that from:
a) believing one ought to have a utility function, in which case I might seek to self-modify appropriately if it became possible; so something a bit stronger than the “pretending” you suggested.
b) believing one should strive to act as if one did, while knowing that I’ll fall short because I don’t.
The second you addressed by saying
I would suggest that trying to achieve an actually impossible moral code, let alone advocating it, is basically unhealthy.
Did you have the same position re. Trying to Try?
I have one group of intuitions here that claim impossibility in a moral code is a feature, not a bug, because it helps avoid deluding yourself that you’ve finished the job and are now perfect; and why would I expect the right action to be healthy anyway? But this seems like a line of thinking that is specific to coping with being an inconsistent human, in the absence of an engineering fix for that.
...too often the poster seems to be saying “look, absurd result, but the numbers work out so this is important!” rather than “oh, I hit an absurdity, perhaps I’m stretching this way further than it goes.”
Yes, I don’t understand this at all. For example, even Yudkowsky writes that he would sooner question his grasp of “rationality” than give five dollars to a Pascal’s Mugger because he thought it was “rational”. Now as far as I can tell, they still use this framework to make decisions, a framework that implies absurd decisions, rather than concentrating on examining the framework itself and looking for better alternatives.
What I am having problems with is that they seem to teach people to “shut up and multiply”, and approximate EU maximization, yet arbitrarily ignore low probabilities. I say “arbitrarily” because nobody ever told me at what point it is rational to step out of this framework and ignore a calculation.
You could argue that our current grasp of rationality is less wrong. But why then worry about something like Dutch booking when any stranger can make you give them all your money simply by conjecturing vast utilities if you don’t? Seems more wrong to me.
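To make the shape of the arithmetic concrete, here is a toy sketch in Python; the probabilities, payoffs, and cutoff are all invented for illustration, not anyone’s actual numbers:

```python
# Toy illustration of "shut up and multiply" meeting a Pascal's Mugger.
# Every number here is made up; only the shape of the arithmetic matters.

def expected_utility(probability, utility):
    """Naive expected utility: probability times payoff."""
    return probability * utility

# Keeping my five dollars: essentially certain, small payoff.
keep_money = expected_utility(probability=1.0, utility=5.0)

# The mugger's offer: I assign it a tiny probability, but the mugger is free
# to conjecture an arbitrarily vast payoff, so the product still dominates.
pay_mugger = expected_utility(probability=1e-20, utility=1e100)

print(pay_mugger > keep_money)  # True: naive EU maximization says hand over the money

# The usual patch is a cutoff ("ignore probabilities below epsilon"), but
# nothing inside the framework says where epsilon goes, which is the
# arbitrariness I am complaining about.
EPSILON = 1e-10  # chosen by fiat
```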
Lots of frameworks imply different absurd decisions (especially when viewed from other frameworks) but it’s hard to go about your life without using some sort of framework.
If rationality is on average less wrong but you think your intuition is better in a certain scenario, a mixed strategy makes sense.
No, it means your intuition is better than your rationality, and you should fix that. If your rational model is not as good as your intuition at making decisions, then it is flawed and you need to move on.
You seem to have completely missed my point.
Let’s say I have 300 situations where I recorded my decision-making process. I tried to use rationality to make the right decision in all of them, and kept track of whether I regretted the outcome. In 100 of these situations, my intuitions disagreed with my rational model, and I followed my rational model. If I only regret the outcome in 1 of these 100 situations, in what way does it make sense to throw out my model? You can RATIONALLY decide that certain situations are not amenable to your rational framework without deciding the framework is without value.
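If it helps, here is the same bookkeeping as a toy sketch (the records are hypothetical and just mirror the numbers above):

```python
# Hypothetical decision log for the 100 cases where intuition and the rational
# model disagreed and I followed the model; exactly 1 outcome was regretted.
disagreements = [(True, False)] * 99 + [(True, True)]  # (followed_model, regretted)

regrets = sum(1 for followed, regretted in disagreements if followed and regretted)
regret_rate = regrets / len(disagreements)
print(f"Regretted {regrets} of {len(disagreements)} disagreements ({regret_rate:.0%})")

# A 1% regret rate is evidence the model earns its keep overall, even while the
# regretted case flags a class of situations worth carving out of its scope.
```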
Let’s say we do 100 physics experiments, and 99% of the results agree with our model. Do we get to ignore / throw out that one “erroneous” result? No, that result if verified shows a flaw in our model.
If afterwards you regretted a choice and wish you had made a better choice even with the information available to you at the time, then this realization should have you bolt upright in your chair. If verified, your decision-making process needs updating.
it’s still a pretty damn good model. Why can’t you get that point? Newtonian mechanics was still a very useful model and would’ve been ridiculous to replace with intuition just because it gave absurd answers in relativistic situations.
I never contradicted that point. Newtonian physics works quite fine in many situations. It is still wrong.
Edit: to expand on that point: when we use physics, we know that there are certain circumstances in which we use classical physics because it is easier and faster and the results are good enough for the precision we need. Other times we use quantum physics or relativity. The decision of which model to use is itself part of the decision-making framework, and is what I’m talking about. If you choose to use the wrong framework and get incorrect results, then your metamodel of which framework to use needs to be updated.
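A toy sketch of what I mean by the metamodel; the threshold and update rule are invented purely for illustration:

```python
# The choice of which framework to apply is itself a decision rule, and that
# rule (not necessarily the frameworks) is what gets updated after a bad pick.

FAST_THRESHOLD = 0.1  # fraction of light speed below which the classical
                      # approximation is treated as "good enough" (arbitrary)

def choose_framework(speed_fraction_of_c):
    """Metamodel: pick the cheap framework when its error is tolerable."""
    return "classical" if speed_fraction_of_c < FAST_THRESHOLD else "relativistic"

def update_metamodel(chosen, result_was_acceptable):
    """If the chosen framework gave unacceptable results, tighten the rule
    for choosing frameworks rather than throwing either framework away."""
    global FAST_THRESHOLD
    if chosen == "classical" and not result_was_acceptable:
        FAST_THRESHOLD /= 2  # be more conservative about what counts as "good enough"

print(choose_framework(0.01))  # classical: easier, faster, precise enough here
print(choose_framework(0.5))   # relativistic: classical results would mislead
```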