A classical probability distribution over Omega, with a utility function U understood as a random variable, can easily be converted to the Jeffrey-Bolker framework by taking the sigma-algebra as the JB algebra and taking V(A) to be the conditional expectation of U given A.
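To make the construction concrete, here is a minimal sketch (my own toy example; the worlds, probabilities, and utilities are made up): the JB value of a non-null event A is just the conditional expectation of U given A.

```python
# Toy conversion of (Omega, P, U) into a Jeffrey-Bolker-style valuation.
# All values below are hypothetical, chosen only for illustration.

omega = {"w0", "w1", "w2"}              # finite set of worlds
P = {"w0": 0.5, "w1": 0.3, "w2": 0.2}   # probability of each world
U = {"w0": 0.0, "w1": 1.0, "w2": 1.0}   # utility as a random variable

def V(event):
    """V(A) = E[U | A] for a non-null event A (a subset of Omega)."""
    pA = sum(P[w] for w in event)
    assert pA > 0, "JB excludes the null event"
    return sum(P[w] * U[w] for w in event) / pA

print(V(omega))           # V(Omega) = E[U] = 0.5
print(V({"w1", "w2"}))    # conditioning on the high-utility worlds gives 1.0
```

Events are literal subsets of Omega, and updating is just replacing P, which is the point of the construction above.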
OK, you're saying that JB is just a set of axioms, and U already satisfies those axioms. And in this construction an "event" really is a subset of Omega, and "updates" are just updates of P, right? Then of course U is not more general; I had the impression that JB was a more distinct and specific thing.
Regarding the other direction, my sense is that you will have a very hard time writing down these updates, and when it does work, the code will look a lot like code with a utility function. But, again, the example in "Updates Are Computable" isn't detailed enough for me to argue anything. Although, now that I look at it, it does look a lot like U(p) = 1 − p("never press the button").
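For what it's worth, this is roughly what I mean by "looks a lot like code with a utility function" (a hypothetical sketch; the proposition string and the credence are made up):

```python
# An "update-style" preference over credences p that is just a utility
# function in disguise: U(p) = 1 - p("never press the button").

def U(p):
    # p: dict mapping propositions (strings) to probabilities
    return 1.0 - p["never press the button"]

p = {"never press the button": 0.25}
print(U(p))  # 0.75
```

Whatever machinery produces the updates, the ranking of credences it induces here is exactly the one this U computes.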
> events (ie, propositions in the agent’s internal language)
I think you should include this explanation of events in the post.
> construct ‘worlds’ as maximal specifications of which propositions are true/false
It remains totally unclear to me why you demand that a world be such a thing.
> I’m not sure why you say Omega can be the domain of U but not the entire ontology.
My point is that if U has two output values, then it only needs two possible inputs. Maybe you're saying that if |dom(U)| = 2, then there is no point in having |dom(P)| > 2, and maybe you're right, but I feel no need to make such claims. Even if the domains are different, they are not unrelated: Omega is still, in some way, contained in the ontology.
> I agree that we can put even more stringent (and realistic) requirements on the computational power of the agent
We could, and I think we should. I have no idea why we're doing math rather than writing code for some toy agents in some toy simulation. Math has a tendency to sweep all kinds of infinite and intractable problems under the rug.
I'm looking at the Savage theory in your own https://plato.stanford.edu/entries/decision-theory/ and I see U(f) = ∑ u(f(s_i)) P(s_i), so at least they have no problem with the domains (O and S) being different. Now I see that the confusion is that for you Omega = S (and also O = S), while for me Omega = dom(u) = O.
Furthermore, if O = {o0, o1}, then I can group the terms into u(o0)·P("we're in a state where f evaluates to o0") + u(o1)·P("we're in a state where f evaluates to o1"). I'm just moving all of the complexity out of EU and into P, which I assume works by some magic (e.g. LI) that doesn't involve literally iterating over every possible state.
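Here is the regrouping spelled out as a sketch (toy states, probabilities, and act, all made up by me): the Savage sum over states equals a sum over the two outcomes, with all the state-space complexity pushed into P(f⁻¹(o)).

```python
# Savage-style expected utility, computed two ways.
# Hypothetical toy values throughout.

S = ["s0", "s1", "s2", "s3"]
P = {"s0": 0.1, "s1": 0.2, "s2": 0.3, "s3": 0.4}   # probability over states
u = {"o0": 0.0, "o1": 1.0}                          # utility over outcomes
f = {"s0": "o0", "s1": "o1", "s2": "o0", "s3": "o1"}  # act: state -> outcome

# 1) state-by-state: U(f) = sum_i u(f(s_i)) P(s_i)
eu_states = sum(u[f[s]] * P[s] for s in S)

# 2) grouped by outcome: sum_o u(o) * P("we're in a state where f evaluates to o")
eu_outcomes = sum(u[o] * sum(P[s] for s in S if f[s] == o) for o in u)

print(eu_states, eu_outcomes)  # both equal 0.6, up to floating-point rounding
```

In form (2), EU only ever touches the two outcomes; everything about S lives inside the inner probability, which is the term I'm assuming some non-enumerative machinery supplies.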
That's just math-speak: you can define a lot of things as a lot of other things, but that doesn't mean the agent is going to literally iterate over infinite sets of infinite bit strings and evaluate something on each of them.
By the way, I might not see any more replies to this.