You try to maximize your expected utility. Perhaps having done your calculations, you think that action X has a 5⁄6 chance of earning you £1 and a 1⁄6 chance of killing you (perhaps someone’s promised you £1 if you play Russian Roulette).
Presumably you don’t base your decision entirely on the most likely outcome.
So in this scenario you have to decide how much your life is worth in money. You can go home and take no chance of dying, or you can accept a 1⁄6 chance of death to earn X amount of money. It's basically an extension of the risk/reward problem, and you have to decide how much risk is worth in money before you can solve it (see the sketch below). That's a problem, because as far as I know, Bayesianism doesn't cover that.
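For concreteness, here is a minimal sketch (in Python) of the expected-utility comparison the scenario turns on. It assumes utility is linear in money and that some monetary value has already been assigned to your life; the names and numbers (V_LIFE, PAYOUT) are illustrative assumptions, not part of the original argument.

```python
# Expected-utility comparison for the Russian Roulette offer.
# Assumes utility is linear in money; V_LIFE is a hypothetical
# monetary value assigned to one's own life.

P_SURVIVE = 5 / 6          # chance the gun doesn't fire
P_DIE = 1 / 6              # chance it does
PAYOUT = 1.0               # £1 promised for playing (illustrative)
V_LIFE = 5_000_000.0       # assumed value of your life in £ (illustrative)

# Expected utility of playing: gain the payout if you survive,
# lose V_LIFE if you don't.
eu_play = P_SURVIVE * PAYOUT + P_DIE * (-V_LIFE)

# Expected utility of going home: no gain, no risk.
eu_go_home = 0.0

print(f"EU(play)    = {eu_play:,.2f}")
print(f"EU(go home) = {eu_go_home:,.2f}")
print("Play" if eu_play > eu_go_home else "Go home")
```

The arithmetic is the easy part; nothing in it tells you what V_LIFE ought to be, which is exactly the gap being pointed at.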
It’s not the job of ‘Bayesianism’ to tell you what your utility function is.
This [by which I mean, “the question of where the agent’s utility function comes from”] doesn’t have anything to do with the question of whether Bayesian decision-making takes account of more than just the most probable hypothesis.