> an estimated utility function is a practical abstraction that obscures the lower-level machinery/implementational details
I agree that this is what’s happening. I probably have different intuitions about how big a problem it is.
The main questions here are something like:
1. Is there any information about the underlying system, beyond its various utility functions, that is useful for decision-making?
2. If the answer to (1) is yes, can we calibrate for the resulting error when approximating things with the utility function? If we just use the utility function, will we be over-confident, or merely extra (and reasonably) cautious?
3. In situations where we don’t have models of the underlying system, can utility function estimates be better than the alternatives we have?
My quick answers to these:
1. I think for many things, utility functions are fine. I think they are far more precise and accurate than the other approaches we have today (like people intuitively guessing what’s good for others).
2. I think if we do a decent job, we can just add extra uncertainty/caution to the system (see the sketch after this list). I’m inclined to trust future actors here not to be obviously stupid in ways we could anticipate.
3. As I stated before, I don’t think we have better tools yet. I’m happy to see more research into understanding the underlying systems, but in the meantime, utility functions seem about as precise and information-rich as anything else we have.
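To make (2) concrete, here is a minimal Python sketch (mine, not from any existing tooling) of one way to "add extra uncertainty/caution": act on a pessimistic lower bound of the estimated utility rather than on the point estimate, so that poorly-understood options get penalized. All names and numbers here are hypothetical.

```python
def cautious_utility(point_estimate: float, std_error: float, caution: float = 1.0) -> float:
    """Conservative utility estimate: subtract `caution` standard errors.

    caution=0 trusts the point estimate as-is; larger values make the
    decision-maker increasingly risk-averse about estimation error.
    """
    return point_estimate - caution * std_error

# Hypothetical example: option B has a slightly higher point estimate,
# but we understand it much less well. Under caution, A wins.
options = {
    "A": cautious_utility(point_estimate=10.0, std_error=0.5),  # 9.5
    "B": cautious_utility(point_estimate=10.5, std_error=4.0),  # 6.5
}
print(max(options, key=options.get))  # "A"
```

The same basic move generalizes: any monotone penalty on estimation error turns an over-confident point estimate into a merely cautious one, which is exactly the distinction question (2) is asking about.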
> …is that different “deliberation/idealization procedures” may produce very different results and never converge in the limit.
Agreed. This is a pretty large topic, and I was trying to keep this essay limited. My main recommendation here was to highlight the importance of deliberation and of potential deliberation levels, in part to make issues like these easier to discuss.