Right, but of what use is it if we still rely on our intuitions to come up with a prior probability and a numerical utility assignment?
Just because our brains haven’t evolved to deal with a specific circumstance doesn’t mean that all of our intuitions would be worthless in that circumstance. My trying to decide what to invest in doesn’t mean that my brain’s report that I’m currently sitting in a chair inside my home suddenly becomes a worthless hallucination. Even while I’m investing, I can still trust the intuition that I’m at home and sitting in a chair.
If we apply an intuition Y to situation X, then Y might always produce correct results for that X, or it might always produce wrong results for that X, or it might be somewhere in between. Sometimes we take an intuition that we know to be incorrect, and replace it with another decision-making procedure, such as the principle of expected utility. If the intuitions which feed into that decision-making procedure are thought to be correct, then that’s all that we need to do. Our intuitions may be incapable of producing exact numeric estimates, but they can still provide rough magnitudes.
Which intuitions are correct in which situations? When do we need to replace an intuition with learned rules or decision-making procedures? Well, that’s what the heuristics and biases literature tries to find out.
Why wouldn’t I just assign any utility to get the desired result? If you can’t ground utility in something that is physically measurable, then of what use is it other than giving your beliefs and decisions a veneer of respectability?
What? You don’t “assign a utility to get the desired result”, you try to figure out what the desired result is. Of course, if you’ve already made a decision and want to rationalize it, then sure, you can do it by dressing it up in the language of expected utility. But that doesn’t change the fact that if you want to know whether you should participate, the principle of expected utility is the way to get the best result.
Here the quantities involved even let you make an explicit calculation, if you want to: you know what the prizes are, you know what you have to give up to participate, and you can find out how many people typically participate in such events. Though you can probably get close enough to the right result even without an explicit calculation.
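For instance, a back-of-the-envelope version of that calculation might look something like this; the prize value, entry cost, and participant count below are made-up placeholders rather than figures from any actual event:

```python
# Rough expected-value check for entering a prize draw.
# All three numbers are hypothetical; plug in your own.

prize_value = 100.0          # how much the prize is worth to you
entry_cost = 5.0             # what you give up in order to participate
typical_participants = 200   # how many people usually take part

p_win = 1 / typical_participants                   # 0.005
expected_value = p_win * prize_value - entry_cost  # 0.5 - 5.0 = -4.5

print(f"P(win) = {p_win:.4f}")
print(f"Expected value of entering = {expected_value:+.2f}")
# Negative under these assumptions, so entering wouldn't be worth it;
# with a bigger prize or fewer participants the sign could flip.
```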
Sometimes we take an intuition that we know to be incorrect, and replace it with another decision-making procedure, such as the principle of expected utility. If the intuitions which feed into that decision-making procedure are thought to be correct, then that’s all that we need to do.
1) What decision-making procedure do you use to replace intuition with another decision-making procedure?
2) What decision-making procedure is used to come up with numerical utility assignments, and what evidence do you have that it is correct with a certain probability?
Our intuitions may be incapable of producing exact numeric estimates, but they can still provide rough magnitudes.
3) What method is used to convert those rough estimates provided by our intuition into numeric estimates?
3b) What evidence do you have that converting intuitive judgements of the utility of world states into numeric estimates increases the probability of attaining what you really want?
What? You don’t “assign a utility to get the desired result”, you try to figure out what the desired result is.
An example would be FAI research. There is virtually no information to judge the expected utility of it. If you are in favor of it you can cite the positive utility associated with a galactic civilization; if you are against it you can cite the negative utility associated with getting it wrong or making UFAI more likely by solving decision theory.
The desired outcome is found by calculating how much it satisfies your utility function, e.g. how many utils you assign to an hour of awesome sex and how much negative utility you assign to an hour of horrible torture.
Humans do not have stable utility functions and can simply change the weighting of various factors and thereby the action that maximizes expected utility.
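To make that concrete, here is a toy sketch in which all the utility numbers are invented: the same expected-utility machinery recommends opposite actions depending purely on how the outcomes happen to be weighted.

```python
# Toy illustration: re-weighting an outcome flips which action
# "maximizes expected utility". All numbers are made up.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

certain_option = [(1.0, 10.0)]           # a sure, mildly pleasant hour
gamble = [(0.5, 100.0), (0.5, -50.0)]    # big payoff vs. an hour of torture, weighted as mildly bad

print(expected_utility(certain_option), expected_utility(gamble))  # 10.0 vs 25.0 -> take the gamble

# Decide that torture is actually far worse, and the recommendation flips.
gamble = [(0.5, 100.0), (0.5, -500.0)]
print(expected_utility(certain_option), expected_utility(gamble))  # 10.0 vs -200.0 -> take the certain option
```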
What evidence do you have that the whole business of expected utility maximization isn’t just a perfect tool to rationalize biases?
(Note that I am not talking about the technically ideal case of a perfectly rational (whatever that means in this context) computationally unbounded agent.)
Here the quantities involved even let you make an explicit calculation, if you want to: you know what the prizes are, you know what you have to give up to participate, and you can find out how many people typically participate in such events. Though you can probably get close enough to the right result even without an explicit calculation.
Sure, but if attending an event is dangerous because the crime rate in that area is very high due to recent riots, what prevents you from adjusting your utility function so that you attend anyway? In other words, what difference is there between just doing what you want based on naive introspection versus using expected utility calculations? If utility is completely subjective and arbitrary, then it won’t help you to evaluate different actions objectively. Winning is then just a label you can assign to any world state you like best at any given moment.
What would be irrational about playing the lottery all day as long as I assign huge amounts of utility to money won by means of playing the lottery, and therefore to world states where I am rich by means of playing the lottery?
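To put rough numbers on that worry (all of them invented): under an ordinary monetary valuation the ticket is a clear loss, but once a huge bonus utility is attached to riches won specifically through the lottery, the very same calculation endorses playing.

```python
# A lottery ticket, valued two ways. The odds, prices, and the "bonus utility"
# are all invented for the sake of the example.

ticket_price = 2.0
jackpot = 10_000_000.0
p_jackpot = 1 / 300_000_000          # lottery-sized odds (an assumption)

ev_money_only = p_jackpot * jackpot - ticket_price
print(ev_money_only)                 # about -1.97: a loss in purely monetary terms

# Now add an enormous extra utility for wealth won specifically via the lottery.
lottery_win_bonus = 1e12
ev_with_bonus = p_jackpot * (jackpot + lottery_win_bonus) - ticket_price
print(ev_with_bonus)                 # large and positive: the calculation now says "play"
```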