The sticking point for me is Axiom 1, the totality of the preference relation. Why should an ideal rational agent, whatever that is, have a preference—even one of indifference—between every possible pair of alternatives?
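For reference, a standard statement of the axiom under discussion, in notation of my own choosing rather than anything quoted here: for any two lotteries $L$ and $M$ over outcomes, the preference relation $\preceq$ satisfies

\[ L \preceq M \quad \text{or} \quad M \preceq L , \]

with indifference being the case in which both hold.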
How would it act if asked to choose between two options that it does not have a preference between?
An ideal rational agent, as described by the VNM axioms, cannot change its utility function. It cannot change its ultimate priors.
It can; it just would not want to, ceteris paribus.
What is this concept useful for?
It is a starting point (well, a middle point). I see no reason to change my utility function or my priors; almost by definition, I do not desire to. Infinite computational ability is an approximation that I expect to become more accurate in the future, as is, IMO, VNM axiom 3. This is what we have so far, and we are working on improving it.
How would it act if asked to choose between two options that it does not have a preference between?
The point is that there will be options that it could never be asked to choose between.
What is this concept useful for?
It is a starting point (well, a middle point).
I become less and less convinced that utility maximisation is a useful place to start. An ideal rational agent must be an idealisation of real, imperfectly rational agents—of us, that is. What can I do with a preference between steak and ice cream? Sometimes one of those will satisfy a purpose for me and sometimes the other; most of the time neither is in my awareness at all. I do not need to have a preference, even between such everyday things, because I will never be faced with a choice between them. So I find the idea of a universal preference uncompelling.
When faced with practical trolley problems, the practical rational first response is not to weigh the two offered courses of action, but to look for other alternatives. They don’t always exist, but they have to be looked for. Hard-core Bayesian utility maximisation requires a universal prior that automatically thinks of all possible alternatives. I am not yet persuaded (e.g. by AIXI) that a practical implementation of such a prior is possible.
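For concreteness, here is a sketch of what such a universal prior amounts to under the usual Solomonoff construction; the notation is mine, with $U$ a universal prefix machine and $\ell(p)$ the length of program $p$:

\[ M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} . \]

The sum ranges over every program whose output begins with the observed string $x$, which is the sense in which the prior “automatically thinks of all possible alternatives”, and also why it is only lower-semicomputable rather than something one can straightforwardly implement.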
How would it act if asked to choose between two options that it does not have a preference between?
The point is that there will be options that it could never be asked to choose between.
Does this involve probabilities of zero or just ignoring sufficiently unlikely events?
What can I do with a preference between steak and ice cream? Sometimes one of those will satisfy a purpose for me and sometimes the other; most of the time neither is in my awareness at all. I do not need to have a preference, even between such everyday things, because I will never be faced with a choice between them.
I’m not sure I understand this; is this a choice between objects or between outcomes? If it is between outcomes, it can occur. If it is between objects, it is not the kind of thing described by the frameworks that we are discussing, since it is not actually a choice that anyone makes; one may choose for an object to exist or to be possessed, but it is a category error to choose an object (though that phrase can be used as shorthand for a different type of choice, and I think it is clear what it means).
Does this involve probabilities of zero or just ignoring sufficiently unlikely events?
I don’t think there’s any way to avoid probabilities of zero. Even the Solomonoff universal prior assigns zero probability to uncomputable hypotheses. And you never have probabilities at the meta-level, where reasoning is always conducted in the language of plain old logic.
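To make the zero-probability point explicit, the universal prior can (essentially equivalently) be written as a mixture; again the notation is mine, with $\mathcal{M}$ the countable class of lower-semicomputable semimeasures and $w_\nu > 0$ their weights:

\[ \xi(x) \;=\; \sum_{\nu \in \mathcal{M}} w_\nu \, \nu(x) . \]

Any hypothesis outside $\mathcal{M}$, in particular any uncomputable environment, never appears in the sum at all, so it receives probability zero regardless of the evidence.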
What can I do with a preference between steak and ice cream? …
I’m not sure I understand this; is this a choice between objects or between outcomes? If it is between outcomes, it can occur.
Between outcomes. How is this choice going to occur?
More generally, what is an outcome? In large-world reasoning, it seems to me that an outcome cannot be anything less than the entire history of one’s forward light-cone, or in TDT something even larger. Those are the things you are choosing between, when you make a choice. Decision theory on that scale is very much a work in progress, which I’m not going to scoff at, but I have low expectations of AGI being developed on that basis.
There are people working on this. EY explained his position here.
However, that is somewhat tangential. Are you proposing that decision making should be significantly altered by ignoring certain computable hypotheses—since Solomonoff induction, despite its limits, does manifest this problem—in order to make utility functions converge? That sounds horribly ad-hoc (see second paragraph of this).
In large-world reasoning, it seems to me that an outcome cannot be anything less than the entire history of one’s forward light-cone, or in TDT something even larger. Those are the things you are choosing between, when you make a choice.
I agree.
Decision theory on that scale is very much a work in progress, which I’m not going to scoff at, but I have low expectations of AGI being developed on that basis.
Any decision process that does not explicitly mention outcomes is only useful insofar as its outputs are correlated with our actual desires, which are about outcomes. If outcomes are not part of an AGI’s decision process, they are therefore still necessary for the design of the AGI. They are probably also necessary for the AGI to know which self-modifications are justified, since we cannot foresee which modifications could at some point be considered.
Are you proposing that decision making should be significantly altered by ignoring certain computable hypotheses—since Solomonoff induction, despite its limits, does manifest this problem—in order to make utility functions converge? That sounds horribly ad-hoc (see second paragraph of this).
If I were working on that, I could say it was being worked on. I agree that an ad-hoc hack is not what’s called for. It needs to be a principled hack. :-)
Any decision process that does not explicitly mention outcomes is only useful insofar as its outputs are correlated with our actual desires, which are about outcomes.
Are they really? That is, about outcomes in the large-world sense we just agreed on. Ask people what they want, and few will talk about the entire future history of the universe, even if you press them to go farther than what they want right now. I’m sure Eliezer would, and others operating in that sphere of thought, including many on LessWrong, but that is a rather limited sense of “us”.
Can you come up with a historical example of a mathematical or scientific problem being solved—not made to work for some specific purpose, but solved completely—with a principled hack?
I’m sure Eliezer would, and others operating in that sphere of thought, including many on LessWrong, but that is a rather limited sense of “us”.
I don’t see your point. Other people don’t care about outcomes, but a) their extrapolated volitions probably do, and b) if people’s extrapolated volitions don’t care about outcomes, I don’t think I’d want to use them as the basis of an FAI.
Can you come up with a historical example of a mathematical or scientific problem being solved—not made to work for some specific purpose, but solved completely—with a principled hack?
Limited comprehension in ZF set theory is the example I had in mind in coining the term “principled hack”. Russell said to Frege, “What about the set of all sets that are not members of themselves?”, whereupon Frege was embarrassed, and eventually a way was found of limiting self-reference enough to avoid the contradiction. There’s a principle there—unrestricted self-reference can’t be done—but all the methods of limiting self-reference that have yet been devised look like hacks. They work, though. ZF appears to be consistent, and all of mathematics can be expressed in it. As a universal language, it completely solves the problem of formalising mathematics.
(I am aware that there are mathematicians who would disagree with that triumphalist claim, but as far as I know none of them are mainstream.)
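For concreteness, the hack in question can be stated as follows (my formalisation). Naive comprehension asserts, for every formula $\varphi$,

\[ \exists y \, \forall x \, ( x \in y \leftrightarrow \varphi(x) ) , \]

and taking $\varphi(x)$ to be $x \notin x$ yields $y \in y \leftrightarrow y \notin y$, Russell’s contradiction. ZF’s separation schema restricts comprehension to subsets of an already-given set,

\[ \forall A \, \exists y \, \forall x \, ( x \in y \leftrightarrow ( x \in A \wedge \varphi(x) ) ) , \]

which blocks the paradox while leaving enough comprehension for ordinary mathematics.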
Being a mathematician who at least considers himself mainstream, I would think that ZFC plus the existence of a large cardinal is probably the minimum one would need to express a reasonable fragment of mathematics.
If you can’t talk about the set of all subsets of the set of all subsets of the real numbers, I think analysis would become a bit… bondage and discipline.
Surely the power set axiom gets you that?
That it exists, yes. But what good is that without choice?
Ok, ZFC is a more convenient background theory than ZF (although I’m not sure where it becomes awkward to do without choice). That’s still short of needing large cardinal axioms.
The idea of programming ZF into an AGI horrifies my aesthetics, but that is no reason not to use it (well, it is an indication that it might not be a good idea, but in this specific case ZF does have the evidence on its side). If expected utility, or anything else necessary for an AGI, could benefit from a principled hack as well-tested as limited comprehension, I would accept it.