Can you please link me to more on this? I was under the impression that Pascal's mugging happens for any utility function that grows at least as fast as the probabilities shrink, and the probabilities shrink exponentially for normal probability functions. (For example: in the toy model of the St. Petersburg problem, the utility function grows exactly as fast as the probability function shrinks, resulting in infinite expected utility for playing the game.)
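To make the toy model concrete (my own illustrative sketch, not from the thread): if the game pays utility 2^n when it ends on toss n, and that outcome has probability 2^-n, then every term of the expected-utility sum contributes exactly 1, so the partial sums grow without bound:

```python
# St. Petersburg toy model: payoff 2**n with probability 2**-n.
# Each term (2**-n) * (2**n) contributes exactly 1 to the expected
# utility, so the partial sums grow linearly and never converge.

def partial_expected_utility(n_terms):
    """Sum the first n_terms terms of the expected-utility series."""
    return sum((2 ** -n) * (2 ** n) for n in range(1, n_terms + 1))

print(partial_expected_utility(10))   # 10 terms -> 10.0
print(partial_expected_utility(100))  # 100 terms -> 100.0
```

This is the sense in which the utility function "grows exactly as fast as the probability function shrinks": the decay and the growth cancel term by term.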
The Complete Class Theorem says that bounded cost/utility functions are isomorphic to posterior probabilities optimizing their expected values. In that sense, it’s almost a trivial result.
In practice, this just means that we can exchange the two whenever we please: we can take a probability and get an entropy to minimize, or we can take a bounded utility/cost function and bung it through a Boltzmann distribution.
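As a minimal sketch of that exchange (my own illustration, with made-up utility numbers, assuming the standard Boltzmann/softmax form p(x) ∝ exp(β·U(x))): bounded utilities go in, a normalized probability distribution comes out, and taking logs recovers the utilities up to a shared additive constant:

```python
import math

# Sketch: pushing a bounded utility function through a Boltzmann
# distribution, p(x) proportional to exp(beta * U(x)).

def boltzmann(utilities, beta=1.0):
    """Map a dict of bounded utilities to a probability distribution."""
    weights = {x: math.exp(beta * u) for x, u in utilities.items()}
    z = sum(weights.values())  # partition function
    return {x: w / z for x, w in weights.items()}

utilities = {"X": 200, "Y": 150, "Z": 24}  # hypothetical utilities
beta = 0.01
probs = boltzmann(utilities, beta)

# Going the other way: log-probabilities equal beta*U minus a shared
# constant (log Z), so utility *differences* are recovered exactly.
recovered = {x: math.log(p) / beta for x, p in probs.items()}
```

Here `recovered["X"] - recovered["Y"]` comes back as the original difference of 50, which is the sense in which the two representations carry the same information.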
Also: as I understand them, utility functions aren't of the form "I want to see X a fraction P of the time and Y a fraction 1-P of the time." They are more like "X has utility 200, Y has utility 150, Z has utility 24..." Maybe the form you are talking about is a special case of the form I am talking about, but I don't yet see how it could be the other way around. As I'm thinking of them, utility functions aren't about what you see at all. They are just about the world. The point is, I'm confused by your explanation & would love to read more about this.
I was speaking loosely, so "I want to see X" can be taken as "I want X to happen". The details remain an open research problem: how the brain (or probabilistic AI) can or should cash out "X happens" into "here are all the things I expect to observe when X happens, and I use them to gather evidence for whether X has happened, and to control whether X happens and how often".
For a metaphor of why you’d have “probabilistic” utility functions, consider it as Bayesian uncertainty: “I have degree of belief P that X should happen, and degree of belief 1-P that something else should happen.”
One of the deep philosophical differences is that both Fristonian neurosci and Tenenbaumian cocosci assume that stochasticity is “real enough for government work”, and so there’s no point in specifying “utility functions” over “states” of the world in which all variables are clamped to fully determined values. After all, you yourself as a physically implemented agent have to generate waste heat, so there’s inevitably going to be some stochasticity (call it uncertainty that you’re mathematically required to have) about whatever physical heat bath you dumped your own waste heat into.
(That was supposed to be a reference to Eliezer’s writing on minds doing thermodynamic work (which free-energy minds absolutely do!), not a poop joke.)