So in this scenario you have to decide how much your life is worth in money. You can go home and take no chance of dying, or risk a 1⁄6 chance of death to earn X amount of money. It's basically an extension of the risk/reward problem, and you have to decide how much that risk is worth in money before you can solve it. That's a problem, because as far as I know, Bayesianism doesn't cover that.
It’s not the job of ‘Bayesianism’ to tell you what your utility function is.
This [by which I mean, “the question of where the agent’s utility function comes from”] doesn’t have anything to do with the question of whether Bayesian decision-making takes account of more than just the most probable hypothesis.
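To make the split concrete, here is a minimal sketch of the expected-utility calculation once a utility function has been supplied from outside. All of the numbers below (the monetized disutility of dying, the payout X) are hypothetical assumptions for illustration; Bayesian decision theory only supplies the probability-weighting, not these values.

```python
# Hypothetical expected-utility comparison: take the 1/6 death risk, or go home.
# The utilities below are assumed for illustration; Bayesianism does not supply them.

P_DEATH = 1 / 6          # probability of dying if you take the gamble
U_STATUS_QUO = 0.0       # utility of going home safely (baseline)
U_DEATH = -10_000_000.0  # assumed (monetized) disutility of dying
X = 1_000_000.0          # assumed payout for taking the gamble

def expected_utility_of_gamble(payout, p_death, u_death, u_baseline):
    """Expected utility of the risky option: survive and collect, or die."""
    return (1 - p_death) * (u_baseline + payout) + p_death * u_death

eu_gamble = expected_utility_of_gamble(X, P_DEATH, U_DEATH, U_STATUS_QUO)
eu_go_home = U_STATUS_QUO

print(f"EU(gamble)  = {eu_gamble:,.0f}")
print(f"EU(go home) = {eu_go_home:,.0f}")
print("Take the gamble" if eu_gamble > eu_go_home else "Go home")

# Break-even payout: the X at which the two options are equally good,
# i.e. (1 - p) * X + p * U_DEATH = 0.
x_break_even = -P_DEATH * U_DEATH / (1 - P_DEATH)
print(f"Break-even payout: {x_break_even:,.0f}")
```

With these assumed numbers the gamble comes out negative and the break-even payout is 2,000,000: the probabilities are doing the Bayesian work, but the answer turns entirely on the utility assignments, which is exactly the part the framework leaves to the agent.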