What would you want from an unbounded utility function that you couldn’t get if the math turned out so that only bounded utility functions can be used in a decision procedure?
An actual description of my preferences. I am unsure whether my utility function is actually unbounded, but I find it probable that, for example, my utility function is linear in people. I don’t want to rule this out just because the current framework is insufficient for it.
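To be concrete about what I mean by “linear in people” (a rough sketch only; the constant $c$ and the head count $n$ are illustrative, not a worked-out theory):

$$
U(\text{world}) \;=\; c \cdot n, \qquad c > 0,
$$

where $n$ is the number of people in that world. Since $n$ has no upper limit across possible worlds, neither does $U$, which is exactly the kind of preference a framework restricted to bounded utility functions can only approximate, never represent outright.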
Predicting your preferences requires specifying both the utility function and the framework, so offering a utility function without the framework as an explanation for your preferences does not actually explain them. I actually don’t know if my question was hypothetical or not. Do we have a decision procedure that gives reasonable results for an unbounded utility function?
The phrase “rule this out” seems interesting here. At any given time, you’ll have a set of explanations for your behavior. That doesn’t rule out coming up with better explanations later. Does the best explanation you have for your preferences that works with a known decision theory have bounded utility?
Perhaps I see what’s going on here: people who want unbounded utility feel loss when they imagine giving up that unbounded goodness in order to avoid bugs like the one described in the OP. I, on the other hand, feel loss when people dither over difficult math problems when the actual issues confronting us have nothing to do with difficult math. Specifically, dealing effectively with the default future, in which one or more corporations make AIs that optimize for something having no connection to the preferences of any individual human.
Do we have a decision procedure that gives reasonable results for an unbounded utility function?
Not one compatible with a Solomonoff prior. I agree that a utility function alone is not a full description of preferences.
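Roughly, the problem (a sketch of the standard divergence argument, not a proof, assuming a prior that, like Solomonoff’s, gives a hypothesis $h$ of description length $\ell(h)$ weight on the order of $2^{-\ell(h)}$, and a utility function that is, say, linear in people):

$$
\mathbb{E}[U] \;=\; \sum_{h} 2^{-\ell(h)}\, U(h).
$$

For every $n$ there is a hypothesis $h_n$ describing a world containing $3^{\uparrow\uparrow n}$ people, and its description length grows only about as fast as a description of $n$ itself, so the terms $2^{-\ell(h_n)}\,U(h_n)$ grow without bound. The sum fails to converge, so expected utility is undefined before any decision procedure even gets started.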
Does the best explanation you have for your preferences that works with a known decision theory have bounded utility?
The best explanation that I have for my preferences does not, AFAICT, work with any known decision theory. However, I know enough of what such a decision theory would look like, if one were possible, to say that it would not have bounded utility.
I, on the other hand, feel loss when people dither over difficult math problems when the actual issues confronting us have nothing to do with difficult math.
I disagree that I am doing so. Whether or not the math is relevant to the issue is a question of values, not of fact. Your estimates of your values do not find the math relevant; my estimates of my values do.