There are fewer details required than many strawman versions would have it; and often what seems like a specific detail is actually just an antiprediction, i.e., UFAI is not about a special utility function but about the whole class of non-Friendly utility functions.
If by “utility function” you mean “a computable function, expressible in lambda calculus” (or as a Turing machine tape or Python code; those are equivalent), then arguing that the majority of such functions lead to a model-based, utility-based agent killing you is a huge stretch: such functions are not grounded, and making the model correspond to the real world is not a sub-goal of finding the maximum of such a function.
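To make the “not grounded” point concrete, here is a minimal illustrative sketch (all names hypothetical, not anyone's actual proposal): a utility function in this formal sense is just a computable map from some encoded input to a number, and nothing about maximizing it forces that input to track the external world.

```python
# A "utility function" in the purely computable sense: an arbitrary map from an
# internal encoding to a number. It never references the external world.
def utility(model_state: bytes) -> int:
    # Count the set bits in the agent's own internal representation.
    return sum(bin(b).count("1") for b in model_state)

# An agent maximizing this has a trivial optimum: rewrite its internal state to
# all-ones. Keeping the model in correspondence with reality is not a sub-goal
# of this maximization, because the function is defined only over the encoding.
best_state = bytes([0xFF] * 16)
print(utility(best_state))  # 128, achieved without modeling anything external
```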