I think there are a couple of situations where trying to build FAI by specifying a utility function can make sense (neither of which involves things like “get me a coffee”):

1. We can determine with some certainty that just maximizing some simple utility function can get us most of the potential value of the universe/multiverse. See this post of mine.
2. We can specify a utility function using “indirect normativity”. See this post by Paul Christiano (which doesn’t work, but gives an idea of what I mean here).
I’m not sure whether the papers you’re puzzled about or criticizing have one of these in mind, or something else entirely. It might be helpful if you cited a few of them.