Here is an analogy. Classical utility theory, as developed by VNM, Savage, and others, the theory about which Eliezer made the searchlight comment, is like propositional calculus. The propositional calculus exists, it’s useful, you cannot ever go against it without falling into contradiction, but there’s not enough there to do much mathematics. For that you need to invent at least first-order logic, and use that to axiomatise arithmetic and eventually all of mathematics, while fending off the paradoxes of self-reference. And all through that, there is the propositional calculus, as valid and necessary as ever, but mathematics requires a great deal more.
The theory that would deal with the “monsters” that I listed does not yet exist. The idea of expected utility may thread its way through all of that greater theory when we have it, but we do not have it. Until we do, talk of the utility function of a person or of an AI is at best sensing what Eliezer has called the rhythm of the situation. To place over-much reliance on its letter will fail.
But propositional calculus and first-order logic exist to support mathematics, which was developed before formal logic. What’s your mathematics-of-value, rather than your logic-of-value?
That was an analogy, a similarity between two things, not an isomorphism.
The mathematics of value that you are asking for is the thing that does not exist yet. People, including me, muddle along as best they can; sometimes at less than that level. Post-rationalists like David Chapman valorise this as “nebulosity”, but I don’t think 19th century mathematicians would have been well served by that attitude.
Richard Jeffrey has a nice utility theory which applies to a Boolean algebra of propositions (instead of e.g. to Savage’s acts/outcomes/states of the world), similar to probability theory.
In fact, it consists of just two axioms plus the three probability axioms.
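For concreteness, here is a sketch of what the distinctive extra axiom looks like; this is my paraphrase of Jeffrey’s averaging (desirability) axiom, not a quotation from his book, writing des(·) for desirability and P(·) for probability:

```latex
% Jeffrey's averaging axiom (paraphrased): for propositions A and B with
% P(A \wedge B) = 0 and P(A \vee B) > 0,
\[
  \mathrm{des}(A \vee B)
  = \frac{P(A)\,\mathrm{des}(A) + P(B)\,\mathrm{des}(B)}{P(A) + P(B)} .
\]
```

That is, the desirability of a disjunction of incompatible propositions is the probability-weighted average of the desirabilities of the disjuncts.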
The theory doesn’t involve time, like probability theory. It also applies to just one agent, again like probability theory.
It doesn’t solve all problems, but neither does probability theory, which e.g. doesn’t solve the Sleeping Beauty paradox.
Do you nonetheless think utility theory is significantly more problematic than probability theory? Or do you reject both?
Utility theory is significantly more problematic than probability theory.
In both cases, from certain axioms, certain conclusions follow. The difference is in the applicability of those axioms in the real world. Utility theory is supposedly about agents making decisions, but as I remarked earlier in the thread, these are “agents” that make just one decision and stop, with no other agents in the picture.
I have read that Morgenstern was surprised that so much significance was read into the VNM theorem on its publication, when he and von Neumann had considered it to be a rather obvious and minor thing, relegated to the appendix of their book. I have come to agree with that assessment.
[Jeffrey’s] theory doesn’t involve time, like probability theory. It also applies to just one agent, again like probability theory.
Probability theory is not about agents. It is about probability. It applies to many things, including processes in time.
That people fail to solve the Sleeping Beauty paradox does not mean that probability theory fails. I have never paid the problem much attention, but Ape in the coat’s analysis seems convincing to me.
I mean that in a subjective interpretation, a probability function represents the beliefs of one person at one point in time. Equally, a (Jeffrey) utility function can represent the desires of one person at one particular point in time. As such it is a theory of what an agent believes and wants.
Decisions can come into play insofar as individual actions can be described by propositions (“I do A”, “I do B”) and each of those propositions is equivalent to a disjunction of the form “I do A and X happens or I do A and not-X happens”, which is subject to the axioms. But decisions are not something baked into the theory, much like probability theory isn’t necessarily about urns and gambles.
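As a numerical illustration of that decomposition (an invented toy example, not anything from Jeffrey’s text), the desirability of the act-proposition “I do A” works out as the conditional-probability-weighted average of the desirabilities of its two refinements:

```python
# Hedged sketch: the desirability of an act "I do A", decomposed as
# "I do A and X happens" or "I do A and not-X happens". All numbers
# here are invented for illustration.

def desirability_of_act(p_x_given_a: float,
                        des_a_and_x: float,
                        des_a_and_not_x: float) -> float:
    """Average the desirabilities of the two disjuncts, weighted by
    their conditional probabilities given the act."""
    return (p_x_given_a * des_a_and_x
            + (1 - p_x_given_a) * des_a_and_not_x)

# Example: doing A makes X 70% likely; A-with-X is worth 10,
# A-without-X is worth -2.
print(desirability_of_act(0.7, 10.0, -2.0))  # 0.7*10 + 0.3*(-2) = 6.4
```

Nothing decision-theoretic is baked in here: the same averaging applies to any pair of incompatible propositions, whether or not one of them describes an action.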
I can still be interested, even if I don’t have the answers.
Right, but I’m asking why. Like even if you don’t have a complete framework, I’d think you’d have a general motive for your interest or something.
It’s an interesting open problem.