I was trying to get you to clarify what you meant.
As far as I can tell, your reply makes no attempt to clarify :-(
“Utility function” does not normally mean:
“a function which an optimizing agent will behave risk-neutrally towards”.
It means the function which, when maximised, explains an agent’s goal-directed actions.
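To illustrate the standard usage with a minimal sketch (my own toy example, not anyone's official definition): an agent that maximises expected utility is automatically risk-neutral with respect to utility itself, yet it can still be risk-averse with respect to outcomes such as money, e.g. under a concave utility like the square root.

```python
import math

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * utility(x) for p, x in lottery)

# Concave utility of wealth: the agent is risk-averse over money...
u = math.sqrt

# A 50/50 gamble between $0 and $100, versus a sure $50.
gamble = [(0.5, 0.0), (0.5, 100.0)]
sure = [(1.0, 50.0)]

eu_gamble = expected_utility(gamble, u)  # 0.5*sqrt(0) + 0.5*sqrt(100) = 5.0
eu_sure = expected_utility(sure, u)      # sqrt(50) ≈ 7.07

# ...yet "risk-neutral towards utility" in the trivial sense: only the
# expected value of utility matters, so any two lotteries with the same
# mean utility are ranked identically.
assert eu_sure > eu_gamble  # prefers the sure $50: risk-averse in money
```

The point of the sketch is that risk-neutrality towards utility is a theorem about expected-utility maximisers, not a property one would use to define the utility function.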
Apart from the question of why a redefinition is needed at all, the proposed one appears incomprehensible, at least to me.
I have concluded, to my satisfaction, that continuing to try to understand each other on this point would not be a good use of our time.