“Utility function” means the same thing as “decision function”
This contradicts my knowledge. By “utility function”, I mean that thing which VNM proves exists: a mapping from possible worlds to real numbers.
Where are the references for “utility function” being interchangeable with “decision algorithm”? I have never seen that stated in any technical discussion of decisions.
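For concreteness, the statement I have in mind is the usual VNM representation theorem (my notation; take the set of possible worlds as the outcome set $\Omega$): given a preference ordering $\succeq$ over lotteries satisfying the VNM axioms (completeness, transitivity, continuity, independence), there exists a function

$$u : \Omega \to \mathbb{R}$$

such that for any two lotteries $A$ and $B$,

$$A \succeq B \;\iff\; \sum_{\omega \in \Omega} p_A(\omega)\,u(\omega) \;\ge\; \sum_{\omega \in \Omega} p_B(\omega)\,u(\omega),$$

and $u$ is unique up to positive affine transformation. Nothing in that statement specifies the algorithm by which decisions get computed; it is a representation theorem about preferences, not a decision procedure.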
If we wished to regard a thing as deterministic rather than as an agent with free will, we would call its decision function a probability density function instead of a utility function.
I’m confused.
Do you just mean the difference between modeling a thing as an agent, vs modeling it as a causal system?
Can you elaborate on how this relates here?
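To make concrete what I mean by that distinction, here is a toy sketch in Python (hypothetical names and illustrative numbers, not anything from your comment): the same thing described once as an agent with a utility function, and once as a bare stochastic process.

```python
import random

# Modeling style 1: treat the thing as an agent.
# Ascribe a utility function over outcomes and predict that it takes
# whichever available action leads to the highest-utility outcome.
def utility(outcome: str) -> float:
    return {"stay": 0.0, "move": 1.0}[outcome]   # illustrative values

def predict_as_agent(actions: list[str]) -> str:
    # For simplicity, each action is identified with its outcome here.
    return max(actions, key=utility)

# Modeling style 2: treat the thing as a causal/stochastic system.
# No preferences anywhere; just a probability distribution over behavior.
def predict_as_causal_system(actions: list[str]) -> str:
    probs = {"stay": 0.3, "move": 0.7}           # illustrative numbers
    return random.choices(actions, weights=[probs[a] for a in actions])[0]

actions = ["stay", "move"]
print(predict_as_agent(actions))          # always "move": the argmax of utility
print(predict_as_causal_system(actions))  # sampled: "stay" 30%, "move" 70%
```

The first description ascribes a utility function; the second only gives a distribution over what the thing does, which is how I read the “deterministic rather than free will” framing above. Is that the distinction you intend?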
Underneath both these questions is the tricky question, “Which me is me?” Are you asking about the utility function enacted by the set of SNPs in your DNA, by your body, or by your conscious mind? These are not the same utility functions.
Agree. Moral philosophy is hard. I’m working on it.
One common use of terminal values on LW is to try to divine a set of terminal values for humans that can be used to guide an AI. So a specific, meaningful, useful question would be, “Can I discover and describe my terminal values in enough detail that I can be confident that an AI, controlled by these values, will enact the coherent extrapolated volition of these values?” … I believe the answer is no.
Can you elaborate on why you think it is impossible for a machine to do good things? Or why such a question is meaningless?
Tricky question indeed. Again, working on it.
I have a utility function, but it is not time-invariant, and is often not continuous on the time axis.
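In symbols (my notation, just to make that precise): a family of functions indexed by time,

$$u_t : \Omega \to \mathbb{R}, \qquad u_{t_1} \neq u_{t_2} \text{ for some } t_1, t_2,$$

where $t \mapsto u_t(\omega)$ is allowed to jump rather than vary smoothly.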
And I’m a universe. Just a bit stochastic around the edges...
Universes are like that. Are you deterministic, purely stochastic, or do you make decisions?