Our behaviour and preferences are not consistent and sane enough to be VNM-rational, and we are generally quite confused about what we even want, never mind having reduced it to a utility function.
The thermostat in my room doesn’t know what it wants either. However, a utility function models its behaviour pretty well.
Consciousness is the brain’s PR department. If it’s evasive about what it wants, that could be part of an attempt to manipulate others—e.g. see The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life.
Oh, not again! The thermostat is not modelled by a utility function at all. Utility functions are completely irrelevant to understanding a thermostat.
That’s a silly assertion. A thermostat can be trivially modelled using a utility function: positive utility for decreasing the temperature when it is too high and for increasing it when it is too low, zero utility for any other behaviour. This is not a difficult case to understand.
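A minimal sketch of that trivial model, assuming a discrete action set (heat, cool, idle) and an arbitrary set point; the names and numeric utilities below are illustrative, not anything specified above:

```python
# Illustrative only: a thermostat written as a one-step utility maximiser.
# The set point, action names, and utility values are assumptions.

SET_POINT = 20.0  # target temperature in degrees Celsius

def utility(temperature: float, action: str) -> float:
    """Positive utility for the corrective action, zero for everything else."""
    if temperature > SET_POINT and action == "cool":
        return 1.0
    if temperature < SET_POINT and action == "heat":
        return 1.0
    if temperature == SET_POINT and action == "idle":
        return 1.0
    return 0.0

def thermostat(temperature: float) -> str:
    """Choose whichever action has the highest utility right now."""
    return max(["heat", "cool", "idle"], key=lambda a: utility(temperature, a))

print(thermostat(23.5))  # cool
print(thermostat(17.0))  # heat
```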
You can also trivially model a thermostat using lego bricks. However, you don’t need a lego-based model to understand a thermostat; the model doesn’t lend itself to the task, just as you don’t pick a programming language without regard to your task merely because it is Turing complete.
There is nothing about a simple finite state machine like a thermostat that would cause a modeller to ask “how can I drag utility functions into this?”, even though it is, of course, possible. I’d go so far as to assert that you could (but shouldn’t) model anything that is computable in a way involving a utility function.
That’s a complete straw man. I never claimed that you needed a lego-based model. What I said was: “a utility function models its behaviour pretty well”, which is perfectly true.
Any computable agent, rather than anything computable. If it isn’t clear how to decompose a system into sensors and actuators, a representation in terms of a utility function is not so useful, because it is not unique. It is convenient to use utility functions when you want to compare the values of different agents. If that’s what you are doing, utility functions seem like a suitable tool.
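One way to see the non-uniqueness, continuing the illustrative thermostat sketch above (names and numbers are again assumptions): two utility functions that differ everywhere yet induce exactly the same behaviour, so observing the behaviour cannot tell you which one the device “has”.

```python
# Two different utility functions that induce exactly the same thermostat
# behaviour: the representation is not unique, so behaviour alone does not
# pin down "the" utility function. Names and numbers are illustrative.

SET_POINT = 20.0

def utility_a(temperature: float, action: str) -> float:
    """1 for the corrective action, 0 otherwise."""
    if temperature > SET_POINT:
        correct = "cool"
    elif temperature < SET_POINT:
        correct = "heat"
    else:
        correct = "idle"
    return 1.0 if action == correct else 0.0

def utility_b(temperature: float, action: str) -> float:
    """A rescaled and shifted version that ranks every action identically."""
    return 100.0 * utility_a(temperature, action) + 42.0

def policy(utility, temperature: float) -> str:
    """The behaviour induced by a utility function: pick the argmax action."""
    return max(["heat", "cool", "idle"], key=lambda a: utility(temperature, a))

# Identical behaviour at every test temperature, despite different "values".
assert all(policy(utility_a, t) == policy(utility_b, t) for t in (15.0, 20.0, 25.0))
```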
Trivially. Quite. So trivially that anything at all can be “modelled” by a utility function at that level of triviality.
I’ve a great new utility-based model of the universe! The universe as it is has utility 1. Every other hypothetical universe has utility 0.
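Spelled out as a sketch (the function name is illustrative), that model is just an indicator function: it assigns utility 1 to whatever actually happens and 0 to everything else, so it “fits” any observation while predicting nothing.

```python
# The "great new utility-based model of the universe": an indicator function
# that gives the actual world utility 1 and every alternative utility 0.
def universe_utility(world, actual_world) -> int:
    return 1 if world == actual_world else 0
```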
That’s an Occam’s razor fail, though. Explanations need to be concise to be satisfying. You’ll find that, if you compress that utility function, you will be onto something interesting.
My fake explanation was precisely as concise as yours.