(I endorse sunwillrise’s comment as a general response to this post; it’s an unusually excellent comment. This comment is just me harping on a pet peeve of mine.)
> So, within the ratosphere, it’s well-known that every physical object or set of objects is mathematically equivalent to some expected utility maximizer.
This is a wildly misleading idea which refuses to die.
As a meme within the ratosphere, the usual source cited is this old post by Rohin, which has a section titled “All behavior can be rationalized as EU maximization”. When I complained to Rohin that “All behavior can be rationalized as EU maximization” was wildly misleading, he replied:
> I tried to be clear that my argument was “you need more assumptions beyond just coherence arguments on universe-histories; if you have literally no other assumptions then all behavior can be rationalized as EU maximization”. I think the phrase “all behavior can be rationalized as EU maximization” or something like it was basically necessary to get across the argument that I was making. I agree that taken in isolation it is misleading; I don’t really see what I could have done differently to prevent there from being something that in isolation was misleading, while still being able to point out the-thing-that-I-believe-is-fallacious. Nuance is hard.
Point is: even the guy who’s usually cited on this (at least on LW) agrees it’s misleading.
Why is it misleading? Because coherence arguments do, in fact, involve a notion of “utility maximization” narrower than just a system’s behavior maximizing some function of universe-trajectory. There are substantive notions of “utility maximizer” which match our intuitions in many ways, and they require more than behavior maximizing some function of universe-trajectory; that narrower phenomenon is what we mean when we talk about “utility maximizers” in a substantive sense.
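To make the vacuous sense concrete: given literally any behavior, we can construct a utility function which that behavior maximizes, just by assigning utility 1 to whatever the behavior actually does. Here’s a toy sketch of that construction (my own illustration, not anything from the cited posts):

```python
# Toy illustration: any behavior, viewed as a function from observation-histories
# to actions, "maximizes" the utility function that assigns 1 to the actions it
# would actually take and 0 to everything else.

def rationalize_as_eu_maximizer(behavior):
    """Given any policy (history -> action), return a utility function over
    (history, action) pairs which that policy maximizes."""
    def utility(history, action):
        # Utility 1 iff the action is the one the behavior would actually take.
        return 1.0 if action == behavior(history) else 0.0
    return utility

# Example: a rock that always does nothing.
rock = lambda history: "do_nothing"
u = rationalize_as_eu_maximizer(rock)

assert u((), "do_nothing") == 1.0       # the rock "maximizes" u...
assert u((), "take_over_world") == 0.0
# ...which is exactly why "is an EU maximizer" is vacuous without further
# assumptions: this construction works for literally any behavior.
```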
If you want to see a notion of “utility maximizer” which is nontrivial, Coherence of Caches and Agents gives IMO a pretty illustrative and simple example.
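I won’t reproduce that post’s setup here, but to gesture at the flavor: a substantive “utility maximizer” has something like a value function whose numbers are consistent with each other across states (Bellman-style), rather than a utility function fit after the fact to whatever trajectory it produced. A rough toy sketch of that kind of consistency check (my simplification, not the post’s actual construction):

```python
# Rough sketch: one substantive notion of "utility maximizer" is an agent whose
# cached values for states hang together consistently, Bellman-style -- the
# agent-side analogue of a cache whose entries match what recomputation would give.

def is_coherent(values, actions, transition, reward, tol=1e-9):
    """Check Bellman consistency: for every non-terminal state s,
    V(s) == max_a [ reward(s, a) + V(transition(s, a)) ]."""
    for s in values:
        acts = actions(s)
        if not acts:  # terminal state: nothing to check
            continue
        best = max(reward(s, a) + values[transition(s, a)] for a in acts)
        if abs(values[s] - best) > tol:
            return False
    return True

# Tiny deterministic example (hypothetical): start -> mid -> end, or skip to end.
actions = lambda s: {"start": ["to_mid", "to_end"], "mid": ["to_end"], "end": []}[s]
transition = lambda s, a: "mid" if a == "to_mid" else "end"
reward = lambda s, a: {"to_mid": 1.0, "to_end": 2.0 if s == "mid" else 0.0}[a]

coherent_values   = {"start": 3.0, "mid": 2.0, "end": 0.0}
incoherent_values = {"start": 7.0, "mid": 2.0, "end": 0.0}  # 7 isn't backed by anything

assert is_coherent(coherent_values, actions, transition, reward)
assert not is_coherent(incoherent_values, actions, transition, reward)
# The trivial "utility = 1 on whatever I actually do" trick from above can't
# manufacture this kind of cross-state consistency for free.
```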